Channel: Active questions tagged config - Stack Overflow

Use read -p command to save response and overwrite a config file in script [duplicate]


So I have a configuration file that I want to change after receiving a prompted response from the user.

read -p 'Your RPC Username: ' RPC_USER
sleep 1s
echo -e "${YELLOW}"
echo "$RPC_USER"
echo "----------------"
echo "Is this correct?"
echo -e "${RED}"
select yn in "Yes" "No"; do
    case $yn in
        Yes ) break;;
        No ) read -p 'New RPC Username: ' RPC_USER
            echo "$RPC_USER";;
    esac
done

After this I want to change a file configuration say: test.conf. So I try:

sed -i '1i RPC_USERNAME="$RPC_USER"' ~/test.conf

But only thing I get is:

RPC_USERNAME="$RPC_USER"

Again, the goal is to write these user-input values directly into the configuration file from within the script.

Any help is greatly appreciated!
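For reference, the shell expands variables only inside double quotes; single quotes pass the text through literally, which is why sed wrote the literal string into the file. A minimal sketch (variable value is a throwaway):

```shell
# Hedged sketch: single vs. double quotes around a variable reference.
RPC_USER=alice
echo '1i RPC_USERNAME="$RPC_USER"'      # single quotes: prints the literal $RPC_USER
echo "1i RPC_USERNAME=\"$RPC_USER\""    # double quotes: prints RPC_USERNAME="alice"
```

With double quotes, the sed line from the question becomes sed -i "1i RPC_USERNAME=\"$RPC_USER\"" ~/test.conf (a sketch; escaping the inner quotes is the key part).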


Could not load file or assembly 'Oracle.ManagedDataAccess'


We are developing a Windows service that connects to an Oracle database, using Oracle.ManagedDataAccess from its NuGet package. When running the service we receive the error below. We tried suggestions from Stack Overflow, but nothing resolved the problem. Our config file is as below.

Could not load file or assembly 'Oracle.ManagedDataAccess, Version=4.122.19.1, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies

Config

<?xml version="1.0"?>
<configuration>
  <configSections>
    <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <section name="eClaimsService.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false"/>
      <section name="oracle.manageddataaccess.client"
      type="OracleInternal.Common.ODPMSectionHandler, Oracle.ManagedDataAccess, Version=4.122.19.1, Culture=neutral, PublicKeyToken=89b483f429c47342"/>
    </sectionGroup>
  </configSections>
   <system.data>
    <DbProviderFactories>
      <remove invariant="Oracle.ManagedDataAccess.Client"/>
      <add name="ODP.NET, Managed Driver" invariant="Oracle.ManagedDataAccess.Client" description="Oracle Data Provider for .NET, Managed Driver"
        type="Oracle.ManagedDataAccess.Client.OracleClientFactory, Oracle.ManagedDataAccess, Version=4.122.19.1, Culture=neutral, PublicKeyToken=89b483f429c47342"/>
    </DbProviderFactories>
  </system.data>
  <system.web>
    <httpRuntime executionTimeout="3600" requestValidationMode="2.0" maxRequestLength="10240"/>
    <sessionState mode="InProc" timeout="60"/>
    <pages validateRequest="false" />
  </system.web>
</configuration>
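The exception usually means the CLR found a different version of the assembly than the one a reference was compiled against. One standard remedy (a sketch, not a confirmed fix for this project; the version range is an assumption) is an assembly binding redirect in the same config file:

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Oracle.ManagedDataAccess" publicKeyToken="89b483f429c47342" culture="neutral" />
      <bindingRedirect oldVersion="4.121.0.0-4.65535.65535.65535" newVersion="4.122.19.1" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```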

How can I configure an Anaconda environment to pull specific packages from separate custom channels


I can configure an Anaconda environment with a YAML file to pull packages from several named channels, e.g.:

name: test1
channels:
  - anaconda
  - conda-forge
  - plotly
  - pytorch
dependencies:
  - python=3.7
  - pytorch::pytorch
  - conda-forge::nodejs>=12.8.0
  - plotly::plotly-orca>=1.2.1
  - pip:
    - objgraph
    - setproctitle

This works fine.

However, we are using mirrored channels with custom URLs. We can easily pull packages from these custom channels, but we cannot use the channel-name::package-name syntax in dependencies, because the channels now have no names. The following does not work:

name: test1
channels:
  - http://xyz.local:8080/conda.anaconda.org/conda-forge
  - http://xyz.local:8080/repo.anaconda.com/pkgs/main
  - http://xyz.local:8080/repo.anaconda.com/pkgs/msys2
  - http://xyz.local:8080/conda.anaconda.org/plotly
  - http://xyz.local:8080/conda.anaconda.org/pytorch
  - nodefaults
dependencies:
  - python=3.7
  - pytorch::pytorch
  - conda-forge::nodejs>=12.8.0
  - plotly::plotly-orca>=1.2.1
  - pip:
    - objgraph
    - setproctitle

Is there a syntax for naming custom channels?

Regards Niels Jespersen
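For what it's worth, conda's .condarc supports custom_channels, a map from a channel name to a URL prefix (conda appends the channel name to the prefix), which restores the name::package syntax for mirrored channels. A sketch using the mirror host from the question (treat the exact keys as assumptions):

```yaml
# .condarc
custom_channels:
  conda-forge: http://xyz.local:8080/conda.anaconda.org
  plotly: http://xyz.local:8080/conda.anaconda.org
  pytorch: http://xyz.local:8080/conda.anaconda.org
```

With that in place the environment file can keep named channels and dependencies such as conda-forge::nodejs>=12.8.0.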

Problem with configuration of Dependency Injection in .NET Framework 4.6.1: parameterless constructor


I am creating a new WCF service based on some old code. I think I have done everything the same, but when creating a class through DI I get this error:

No parameterless constructor defined for this object.

Below is my new code that is done on basics of working one:

public SChangeClaimHandlerStatusDTO ChangeClaimHandler(SChangeClaimHandlerMessageDTO message)
{
    (...)
    var status =
        ServiceProvider<VIG.ZEVIG.BusinessLayer.BusinessLogic.Order.S.SIntegrationService>
            .Service.ChangeClaimHandler(message);
    //line above is causing error
    (...)
    return status;
}

And below is called class:

public class SIntegrationService : ServiceBase 
{
    private const int SImageExpirationTimeInMinutes = 60;

    public SIntegrationService(
        IDocumentService documentService,
        IAttachmentService attachmentService,
        IUserService userService,
        ITextEncryptor textEncryptor, (...) )
    {
        RegisterService<IDocumentService>(documentService);
        RegisterService<IAttachmentService>(attachmentService);
        RegisterService<IUserService>(userService);
        RegisterService<ITextEncryptor>(textEncryptor);
        (...)
    }
}
public class ServiceBase : IBusinessService
{
    public ServiceBase();
    public ServiceBase(Dictionary<Type, object> dependencies);
    protected IAppContext ApplicationContext { get; }
    protected void RegisterService<T>(T dependency) where T : class;
}

And ServiceProvider class looking like this:

public class ServiceProvider<T> where T : class
{
    public ServiceProvider();

    public static T Service { get; }
}

The class SIntegrationService has only one constructor (no parameterless constructor). Both projects reference SimpleInjector.

I'm rather new to DI, so maybe I am missing something obvious. Maybe I should put something into a config file to declare that DI is used? I know it should work this way, because I am basing this on code in another project that works fine.

In other project config I also have this line:

<Services
  AutoWire="false"
  IocContainer="V.Common.Ioc.SimpleInjector.SimpleInjectorContainerAdapter, V.Common.Ioc.SimpleInjector"/>

Maybe that's the solution?

Below are the error details:

System.MissingMethodException
  HResult=0x80131513
  Message=No parameterless constructor defined for this object.
  Source=mscorlib
  StackTrace:
   at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck)
   at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark)
   at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark)
   at System.Activator.CreateInstance(Type type, Boolean nonPublic)
   at System.Activator.CreateInstance(Type type)
   at V.Common.Services.ServiceProvider`1.get_Service()
   at BusinessServices.SIntegration.SIntegrationService.ChangeClaimHandler(SChangeClaimHandlerMessageDTO message) in D:\TFS1\Branches\Release 1.2.1 Env01\ZV\BusinessServices.SIntegration\SOrderService.svc.cs:line 31
   at SyncInvokeChangeClaimHandler(Object , Object[] , Object[] )
   at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
   at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)

Laravel package reading package config file and not published config file


I have created a Laravel package, uploaded it to packagist and managed to install it using composer require.

I have now hit a problem and I don't know how to fix it and doing a search does not help.

I have a config file which publishes a default config file to the config directory. I have made changes to the published file and now I want my package to use it, but the package is still reading its own internal config file instead of the newly updated published one. This is my service provider within the vendor src folder:

namespace Clystnet\Vtiger;

use Illuminate\Support\ServiceProvider;

class VtigerServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap the application services.
     *
     * @return void
     */
    public function boot()
    {
        $this->publishes([
            __DIR__ . '/Config/config.php' => config_path('vtiger.php'),
        ], 'vtiger');

        // use the vendor configuration file as fallback
        $this->mergeConfigFrom(
            __DIR__ . '/Config/config.php', 'vtiger'
        );
    }

    /**
     * Register the application services.
     *
     * @return void
     */
    public function register()
    {
        $this->app->bind('clystnet-vtiger', function () {
            return new Vtiger();
        });

        config([
            'config/vtiger.php',
        ]);
    }
}

This is my main package class

<?php 
namespace Clystnet\Vtiger;

use Storage;
use Illuminate\Support\Facades\Config;

class Vtiger
{
    protected $url;
    protected $username;
    protected $accesskey;

    public function __construct() {
        // set the API url and username
        $this->url = Config::get('vtiger.url');
        $this->username = Config::get('vtiger.username');
        $this->accesskey = Config::get('vtiger.accesskey');
    }
   ...

Within my class I'm doing a var_dump($this->url) and it's not reading the correct config file.

How do I set it to use the right one?

UPDATE

This is my custom config file and the one that the package is reading

return [
    'url' => 'path/to/vtiger/webservice',
    'username' => '',
    'accesskey' => '',
];
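For comparison, the Laravel docs put mergeConfigFrom in register() rather than boot(); a hedged sketch of the provider with the merge moved and the stray config([...]) call dropped (whether this alone fixes the lookup is an assumption):

```php
public function register()
{
    // package defaults act as a fallback; the published config/vtiger.php wins
    $this->mergeConfigFrom(__DIR__ . '/Config/config.php', 'vtiger');

    $this->app->bind('clystnet-vtiger', function () {
        return new Vtiger();
    });
}
```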

Qt qmake doesn't follow source folder structure when using CONFIG += object_parallel_to_source


When building a project in Qt Creator, I want the (shadowed) build directory to have the same folder structure as my source folder. This can be done with the (undocumented) statement in the .pro file :

CONFIG += object_parallel_to_source

The "standard" *.cpp files are compiled and the corresponding *.o files are placed in the correct build folder. But ... the moc and rcc place all moc_*.cpp and qrc_*.cpp files in the top level of the build directory, not in the corresponding directory. What is even worse : qmake creates the complete directory path again within the specified build directory!!

An example will clarify it : say, your source (summarized) is organized like this :

/usr/share/myapps/project
  |_ project.pro     <= contains  'CONFIG += object_parallel_to_source'
  |_ mainfolder
  |    |_ main.cpp
  |_ subfolder1
  |    |_ class1_no_QOBJECT.cpp   
  |    |_ class2_with_QOBJECT.cpp
  |_ subfolder2
       |_ class3_no_QOBJECT.cpp
       |_ resources.qrc

and the build directory is specified in Qt Creator

/usr/share/myapps/build

Running qmake and make results in the following build folder structure

 /usr/share/myapps/build
      |_ mainfolder
      |    |_ main.o
      |_ subfolder1
      |    |_ class1_no_QOBJECT.o
      |_ subfolder2
      |    |_ class3_no_QOBJECT.o
      |_ moc_class2_with_QOBJECT.cpp    <- should be in subfolder1
      |_ qrc_resources.cpp              <- should be in subfolder2
      |_ usr
           |_ share
               |_ myapps
                    |_build
                       |_ moc_class2_with_QOBJECT.o       <- should be in subfolder1
                       |_ qrc_resources.o                 <- should be in subfolder2

Is there a way to prevent this doubling of the folder structure ?

I'm using Qt 5.9.1 on Ubuntu 18.04 (5.9.1 because that is what the hardware supplier installs on the embedded Linux target).

sketchtool CLI with fish shell


I tried for a while to get the fish shell equivalent for the sketch cli initialization commands. Can anyone help?

For fish, the first line seems to work if you remove the '$' character. For the second line's argument passing, I've tried removing the $, the quotes, and several different formats, but couldn't find documentation for argument passing in fish.

#!/bin/sh

SKETCH=$(mdfind kMDItemCFBundleIdentifier == 'com.bohemiancoding.sketch3' | head -n 1)

# pass on all given arguments
"$SKETCH/Contents/Resources/sketchtool/bin/sketchtool""$@"

reference: https://developer.sketch.com/cli/
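For what it's worth, a fish translation might look like the sketch below: fish uses (...) instead of $(...) for command substitution and $argv instead of "$@" (untested here, so treat it as an assumption):

```fish
#!/usr/bin/env fish

set SKETCH (mdfind kMDItemCFBundleIdentifier == 'com.bohemiancoding.sketch3' | head -n 1)

# pass on all given arguments
"$SKETCH/Contents/Resources/sketchtool/bin/sketchtool" $argv
```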

When does pyramid.paster's config.add_settings({...}) require more action than INI settings?

$
0
0

When I created a default pyramid app from a cookie cutter, it resulted in an INI file with sections like this:

[app:myapp]
use = egg:informatics#physicals
pyramid.reload_templates = true
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.default_locale_name = en
pyramid.includes = pyramid_debugtoolbar

Now I'm experimenting with adding these same settings in python code instead, using the Configurator object in __init__.py, and I find that the following appears to work the same:

config.include('pyramid_debugtoolbar')
config.add_settings({
    'pyramid.reload_templates'      : 'true',
    'pyramid.debug_authorization'   : 'false',
    'pyramid.debug_notfound'        : 'false',
    'pyramid.debug_routematch'      : 'false',
    'pyramid.default_locale_name'   : 'en',
    'pyramid.includes'              : 'pyramid_debugtoolbar',
    })

But when applying these settings in Python, the first line config.include('pyramid_debugtoolbar') is required or it doesn't work. Yet in the INI version, it is sufficient to set pyramid.includes = pyramid_debugtoolbar.

My Questions:

  1. Why doesn't it automatically include it in the python-version like in the INI version?
  2. How do I know if other settings require additional action like this?
  3. Is the 'pyramid.includes':'pyramid_debugtoolbar' entry necessary since I'm already including with the config.include('pyramid_debugtoolbar') method?

Is it possible to run two instances of the same app with different config files on the same machine?


I have a Node.js app with a MongoDB database, and I want two instances of the app running at the same time: one for production and one for development. So, is it possible to set a different NODE_ENV on each instance of the app?
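A minimal sketch of the idea (the config file names are assumptions): each instance derives its config file from NODE_ENV, so two processes on one machine can run with different configs.

```javascript
// Pick a config file based on NODE_ENV; defaults to "development".
const env = process.env.NODE_ENV || 'development';
const configFile = `config.${env}.json`;
console.log(`starting with ${configFile}`);
```

Launched, for example, as NODE_ENV=production node app.js for one instance and NODE_ENV=development node app.js for the other.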

Corrupt Iptables Firewalld CentOS 7 when adding unknown option "--zone=home" in firewall-cmd script


https://www.digitalocean.com/community/tutorials/how-to-set-up-and-configure-an-openvpn-server-on-centos-7

I was following this tutorial to install an OpenVPN server on my CentOS 7 VPS.

Towards the end, while adding a script, I mistakenly put "--zone=home" at the end of the 2nd command:

Next, forward routing to your OpenVPN subnet. You can do this by first creating a variable (SHARK in our example) which will represent the primary network interface used by your server, and then using that variable to permanently add the routing rule:

>     SHARK=$(ip route get 8.8.8.8 | awk 'NR==1 {print $(NF-2)}')
>     sudo firewall-cmd --permanent --direct --passthrough ipv4 -t nat -A POSTROUTING -s 10.8.0.0/24 -o $SHARK -j MASQUERADE

Be sure to implement these changes to your firewall rules by reloading firewalld:

sudo firewall-cmd --reload

Then, when I reload firewall-cmd, I get this error:

Error: COMMAND_FAILED: Direct: '/usr/sbin/iptables-restore -w -n' failed: iptables-restore v1.4.21: unknown option "--zone=home" Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information.

I tried reinstalling firewalld with no luck; the unknown option still seems to be there. How do I get that unknown option out?

What is the best config for xmr-stak to mine Monero?


My tech spec is dual Intel Xeon Gold 6140 (36 cores, 2.3 GHz), 96 GB RAM, 2 x 800 GB SSDs (RAID) and 2 Nvidia V100 32 GB cards. (I don't have access to the BIOS to overclock.)

I was wondering what would be the best config and setup to get the most hashes? Currently I'm getting 3000-4000 H/s with GPU and CPU combined. I enabled large pages and increased the page file size to 64 GB (not sure whether that was necessary), and I installed the latest CUDA.

This is my CPU config. I'm not sure I'm getting the maximum number of threads out of it, and I also get an error saying it can't go to 86, only 63, which I don't understand.

"cpu_threads_conf" :
[
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 0 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 2 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 4 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 6 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 8 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 10 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 12 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 14 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 16 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 18 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 20 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 22 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 24 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 26 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 28 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 30 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 64 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 66 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 68 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 70 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 72 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 74 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 76 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 78 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 80 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 82 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 84 },
    { "low_power_mode" : false, "no_prefetch" : true, "asm" : "auto", "affine_to_cpu" : 86 },

],

this is my gpu config (default)

"gpu_threads_conf" :
[
  // gpu: Tesla V100-PCIE-32GB architecture: 70
  //      memory: 32127/32642 MiB
  //      smx: 80
  { "index" : 0,
    "threads" : 4, "blocks" : 640,
    "bfactor" : 6, "bsleep" :  25,
    "affine_to_cpu" : true, "sync_mode" : 1,
    "mem_mode" : 1,
  },
  // gpu: Tesla V100-PCIE-32GB architecture: 70
  //      memory: 32127/32642 MiB
  //      smx: 80
  { "index" : 1,
    "threads" : 4, "blocks" : 640,
    "bfactor" : 6, "bsleep" :  25,
    "affine_to_cpu" : true, "sync_mode" : 1,
    "mem_mode" : 1,
  },

],

How to tackle big .ini files using Python's configparser?


Didn't know where to ask this question. I have the following class architecture:

import os
from configparser import ConfigParser, ExtendedInterpolation


class MyFancyObject:
    def __init__(self, config_file_path):
        if config_file_path is None:
            self.__config_file_path = os.path.join(os.path.dirname(__file__), 'default_config.ini')
        else:
            self.__config_file_path = config_file_path

        self.__config = ConfigParser(interpolation=ExtendedInterpolation())
        # the file must actually be read before values can be fetched
        self.__config.read(self.__config_file_path)

        self.__variable_a = self.__config.get('my_data', 'a_variable')
        self.__variable_b = self.__config.get('my_data', 'b_variable')
        self.__variable_c = self.__config.get('my_data', 'c_variable')
        self.__variable_d = self.__config.get('my_data', 'd_variable')
...

This object takes upon creation an argument specifying a config file which will be loaded by ConfigParser. If the config file path is not specified, then it looks for one in the same directory as the class file. The config file looks something like this:

[my_data]
a_variable = 1
b_variable = hello
c_variable = world
d_variable = 5.4231

Of course, this is just an example to wrap your head around the problem. My issue is that with this architecture, as the config file gets bigger, more and more code has to be written just to set the class variables: for each entry in the .ini file you have to get the value from the ConfigParser object and assign it to a class variable.

Is there a nicer, cleaner way to get all of the config file data into their respective class variable?
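One cleaner option is a sketch like the following (section name taken from the example; the attribute names then simply mirror the .ini keys): loop over the section and setattr each option onto the instance.

```python
import os
import tempfile
from configparser import ConfigParser, ExtendedInterpolation


class MyFancyObject:
    """Load every option of the [my_data] section onto the instance."""

    def __init__(self, config_file_path):
        config = ConfigParser(interpolation=ExtendedInterpolation())
        config.read(config_file_path)
        # One loop replaces the long list of manual assignments.
        for key, value in config.items('my_data'):
            setattr(self, key, value)


# usage with a throwaway config file
with tempfile.NamedTemporaryFile('w', suffix='.ini', delete=False) as f:
    f.write('[my_data]\na_variable = 1\nb_variable = hello\n')
    path = f.name

obj = MyFancyObject(path)
print(obj.a_variable, obj.b_variable)
os.unlink(path)
```

Note that every value arrives as a string, just as with the manual get() calls; typed access (getint, getfloat) would need per-key handling.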

How to use config.add_settings({'pyramid.includes': ...}) from included callable


When I created a default pyramid app from a cookie cutter, it resulted in an INI file with sections like this:

[app:myapp]
use = egg:myproject#myapp
pyramid.reload_templates = true
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.default_locale_name = en
pyramid.includes = pyramid_debugtoolbar

Now I'm experimenting with adding these same settings in python code instead, using the Configurator object in __init__.py, and I find that the following appears to work the same:

config.include('pyramid_debugtoolbar')
config.add_settings({
    'pyramid.reload_templates'      : 'true',
    'pyramid.debug_authorization'   : 'false',
    'pyramid.debug_notfound'        : 'false',
    'pyramid.debug_routematch'      : 'false',
    'pyramid.default_locale_name'   : 'en',
    'pyramid.includes'              : 'pyramid_debugtoolbar',
    })

But when applying these settings in Python, the first line config.include('pyramid_debugtoolbar') is required or it doesn't work. Yet in the INI version, it is sufficient to set pyramid.includes = pyramid_debugtoolbar.

After Further Digging

Looking higher up the stack in my code, I found that the setting does work this way...

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application."""
    settings.update({'pyramid.includes':'pyramid_debugtoolbar'}) # SETTING HERE WORKS!
    with Configurator(settings=settings) as config:
        config.include(common_config)
        config.include('.routes')
        config.scan()

    return config.make_wsgi_app()

But NOT this way...

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application."""
    with Configurator(settings=settings) as config:
        config.add_settings({'pyramid.includes':'pyramid_debugtoolbar'}) # NO EFFECT!
        config.include(common_config)
        config.include('.routes')
        config.scan()

    return config.make_wsgi_app()

In the documentation for pyramid.config, I found this warning that I suspect is what I'm dealing with:

A configuration callable should be a callable that accepts a single argument named config, which will be an instance of a Configurator. However, be warned that it will not be the same configurator instance on which you call this method. The code which runs as a result of calling the callable should invoke methods on the configurator passed to it which add configuration state. The return value of a callable will be ignored.

In an effort to guess at the solution, I tried wrapping my config.add_settings(...) with various combinations of config.commit() and config.begin()/config.end(), and none of those worked either.

My Question:

How do I use config.add_settings(...) to set pyramid.includes? I want to do this in a common_config() callable that is included by multiple pyramid apps.

On kubernetes helm how to replace a pod with new config values


I am using helm charts to deploy pods with a "ConfigMap" managing the configurations.

I edit ConfigMap directly to make changes to configuration files and then delete pods using kubectl delete, for the new configuration to take effect.

Is there an easy way, using Helm, to replace a running pod with the new configuration without executing the kubectl delete command?
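A common approach, documented in Helm's chart tips, is to hash the ConfigMap into a pod-template annotation so that helm upgrade rolls the pods whenever the rendered config changes; a sketch (the template path is an assumption):

```yaml
# deployment.yaml (inside the chart)
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

Because the annotation value changes with the ConfigMap contents, the Deployment sees a changed pod template and performs a rolling restart.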

Trailing command line argument in "psql -c" line: what is it?


I'm looking at a YAML config file for launching a cloud server. I want to modify the file to use an RDS database instance rather than the PostgreSQL on the EC2 instance. I can't quite make sense of the trailing aggregate" at the end of the last three lines. I've done some googling around the psql -c command but can't find an explanation or an example of it used elsewhere. It's a bit confusing because the sample config uses the same string for the database, user, password, and schema.

  - su postgres -c "psql -c \"CREATE ROLE aggregate WITH LOGIN PASSWORD 'aggregate'\""
  - su postgres -c "psql -c \"CREATE DATABASE aggregate WITH OWNER aggregate\""
  - su postgres -c "psql -c \"GRANT ALL PRIVILEGES ON DATABASE aggregate TO aggregate\""
  - su postgres -c "psql -c \"CREATE SCHEMA aggregate\" aggregate"
  - su postgres -c "psql -c \"ALTER SCHEMA aggregate OWNER TO aggregate\" aggregate"
  - su postgres -c "psql -c \"GRANT ALL PRIVILEGES ON SCHEMA aggregate TO aggregate\" aggregate"

Is the trailing aggregate" referring to the database inside which the schema being created/altered resides?
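For reference, psql's usage line is psql [OPTION]... [DBNAME [USERNAME]], so the first non-option argument names the database to connect to:

```
psql -c "CREATE SCHEMA aggregate" aggregate
# the trailing word is the DBNAME: run the command while connected
# to the database named "aggregate" (created two lines earlier)
```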

EDIT: The endpoint values are elsewhere in the config file, of course. The above commands are just what I need to run against the RDS database before launching the EC2 instance, and for the sake of clarity and security, I'd like to change the database, schema and username values, rather than just having everything be aggregate.


what does "git config core.worktree" mean?


I have seen this line in a script I am using :

git config core.worktree ..

I'm not sure what git's core.worktree setting does, and I definitely do not understand why one would set it to '..'

Any clue ? Thanks
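For context, a hedged sketch of what the setting does (the directory names are throwaway): core.worktree tells git that the working tree lives somewhere other than the directory containing .git, and ".." points it at the parent directory, a pattern sometimes used when .git sits in a subdirectory.

```shell
# Create a throwaway repo and point its working tree at the parent directory.
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config core.worktree ..
git config --get core.worktree    # prints: ..
```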

Publishing WCF to IIS over https: getting an error


We created a WCF service and can consume it locally. When we published it to IIS and consumed it over https, we got the error below:

Could not find a base address that matches scheme http for the endpoint with binding WSHttpBinding. Registered base address schemes are [https].

Config file

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5"/>
  </system.web>
  <system.serviceModel>
    <services>
      <service name="WCF_Portal_Service.Portal">
        <endpoint address="" binding="wsHttpBinding" bindingConfiguration=""
          contract="WCF_Portal_Service.IPortal" />
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- To avoid disclosing metadata information, set the values below to false before deployment -->
          <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/>
          <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="false"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <protocolMapping>
      <add binding="basicHttpsBinding" scheme="https" />
    </protocolMapping>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
  </system.serviceModel>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
    <!--
        To browse web app root directory during debugging, set the value below to true.
        Set to false before deployment to avoid disclosing web app folder information.
      -->
    <directoryBrowse enabled="true"/>
  </system.webServer>
</configuration>
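The error says the wsHttpBinding endpoint expects an http base address while IIS registered only https. One standard remedy (a sketch, not a confirmed fix for this site; the binding name is an assumption) is to give wsHttpBinding a Transport-secured configuration and reference it from the endpoint:

```xml
<bindings>
  <wsHttpBinding>
    <binding name="secureBinding">
      <security mode="Transport" />
    </binding>
  </wsHttpBinding>
</bindings>
<!-- and on the endpoint: bindingConfiguration="secureBinding" -->
```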

Run background jobs with elastic beanstalk


I am trying to start a background job in Elastic Beanstalk. The background job has an infinite loop, so it never returns a response, and I receive this error: "Some instances have not responded to commands. Responses were not received from [i-ba5fb2f7]."

I am starting the background job in the Elastic Beanstalk .config file like this:

06_start_workers:
  command: "./workers.py &"

Is there any way to do this? I don't want Elastic Beanstalk to wait for a return value from that process.
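One common workaround (a sketch, assuming the same .config key as above) is to fully detach the worker so the command itself returns immediately:

```yaml
06_start_workers:
  command: "nohup ./workers.py > /tmp/workers.log 2>&1 &"
```

Redirecting stdout/stderr matters here: a command whose output streams stay open can still make the deployment hang waiting on them.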

How to access config data in .js in Laravel (Not in blade)


I can access the config data in .blade using

{{ config('config.variable') }}

However, I have no idea how to access the config data in a .js file. Can anyone give me some suggestions?
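Config values live server-side, so a common pattern is to print them into the page from a Blade view and read them from JavaScript via window; a sketch (window.appConfig is a made-up name):

```blade
{{-- somewhere in a Blade layout --}}
<script>
    window.appConfig = @json(config('config.variable'));
</script>
```

Then, in the .js file, the value is available as window.appConfig.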

PHP: When do settings in .user.ini get applied when using php-fpm?


I'm using Apache with php-fpm on a RHEL 8 system. php-fpm was installed from the remi repo, version 7.2.

I've added a .user.ini file in a web-accessible folder to set memory_limit=256M (the default in /etc/php.ini is 128M). It seems to work, but I noticed that a changed value does not always seem to be applied immediately. I checked by repeatedly calling a page that outputs phpinfo();. Sometimes the value has changed, sometimes not.

I guess this is down to php-fpm and its process pool (if I understood it correctly): new processes get the new value, old processes keep the old one, and when a page is called you never know which process responds.

I think reloading php-fpm (systemctl reload php-fpm.service) recycles those processes so that each one has the updated value.

Can anyone explain exactly how this works? What is important to know about .user.ini and php-fpm? Can some requests use the old value forever?
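For reference, PHP re-reads .user.ini per request, per worker process, but caches the parsed result; the relevant php.ini settings are below (the values shown are the documented defaults, so treat them as an assumption about this install):

```ini
user_ini.filename = ".user.ini"
user_ini.cache_ttl = 300   ; seconds a worker caches a parsed .user.ini
```

A per-worker cache with a 300-second TTL matches the observed behaviour: different workers pick up the change at different times, but no worker should keep the old value forever.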


