A blog about nothing

A collection of useless things that may turn out to be useful

Installing awscli on Cygwin

Posted by Diego on June 21, 2017

The normal way of installing the AWS CLI is simply by running pip install awscli.
However, if you do that from Cygwin, it will install awscli in the Windows Anaconda Python installation instead of in Cygwin's Python (which is what we want). Then, when you run aws configure, you will get an error saying the aws executable can't be found, like the one below (I have my Python installed at c:\Anaconda2):

 

can't open file '/cygdrive/c/Anaconda2/Scripts/aws': [Errno 2] No such file or directory

 

If I use the which command to find out where Python is installed, I can see it is pointing at my Windows installation.
The solution is to run the following from a Cygwin shell:

wget rawgit.com/transcode-open/apt-cyg/master/apt-cyg
install apt-cyg /bin
apt-cyg install python
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py

 

At this point you can verify that Python is installed in Cygwin, and then run:

pip install awscli
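If you're unsure which interpreter you actually ended up with, a quick heuristic helps: under Cygwin, a Windows-side Python resolves to a /cygdrive/... path. This is just a sanity check I'd run, not part of the install (the Anaconda path below is the one from the error message above):

```python
import sys

def looks_like_windows_python(path):
    """Heuristic: a /cygdrive/... path means the interpreter lives on the Windows side."""
    return path.replace("\\", "/").lower().startswith("/cygdrive/")

# The Anaconda path from the error message above is a Windows-side install:
print(looks_like_windows_python("/cygdrive/c/Anaconda2/python"))  # True

# Check the interpreter you are actually running:
print(sys.executable, looks_like_windows_python(sys.executable))
```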

 


Posted in AWS, I.T., Python

Using Spyder with the interpreter set by conda environment

Posted by Diego on May 31, 2017

 

Using Anaconda, you can create an environment named "python3env", for example, and install the latest version of Python 3 as follows:

 

conda create -n python3env python=3 anaconda

activate python3env

 

After activating the environment, just typing spyder will launch it using the 3.x interpreter:

 

clip_image001

clip_image003

 

More info: https://www.continuum.io/blog/developer-blog/python-3-support-anaconda

Posted in Python

Easiest way to install xgboost on Windows (download binaries – no need to compile)

Posted by Diego on April 5, 2017

1) I am assuming both git and Anaconda are already installed.

2) Choose a place to have the installer files and clone the git repo:

 

git clone https://github.com/dmlc/xgboost.git xgboost_install

clip_image002

 

 

3) Download the libxgboost.dll file from here and copy it to the xgboost folder at <install_dir>\python-package\xgboost\

 

clip_image004

 

 

4) Navigate to the python-package folder and run:

python setup.py install

clip_image006

That should work fine.

If, however, you get the error below – like I did – when trying to import the library:

 

 

WindowsError: [Error 126] The specified module could not be found

 

here’s what I recommend:

After some debugging I found out the problem was in the from .core import DMatrix, Booster command – more specifically, in the “_load_lib()” function inside core trying to run this line:

 

lib = ctypes.cdll.LoadLibrary(lib_path[0])

 

where lib_path[0] was precisely the file path for the libxgboost.dll I had just copied to the xgboost folder.
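To see the loader's complaint directly without going through xgboost at all, you can call LoadLibrary yourself and catch the failure (the file name below is made up for illustration):

```python
import ctypes

def try_load(dll_path):
    """Return None on success, or the loader's error message on failure."""
    try:
        ctypes.cdll.LoadLibrary(dll_path)
        return None
    except OSError as exc:
        # On Windows, "Error 126" at this point usually means the DLL itself
        # was found, but one of its dependencies (e.g. VCOMP140.DLL) was not.
        return str(exc)

print(try_load("no_such_library.dll"))  # prints the loader's error message
```

(On Python 2 the exception is WindowsError, which is a subclass of OSError, so the same handler works.)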

Since I was sure the file existed, I realized that maybe the DLL depended on other DLLs that could not be found. To check that, I downloaded dependency walker from this link, which showed me that the required VCOMP140.DLL was missing:

 

 

image

 

 

 

After some googling, I discovered that the most common cause is that the machine does not have the Visual C++ runtime installed. I downloaded it from here, which eventually solved my problem:

image

 

Posted in Data Science, Machine Learning, Python, xgboost

Scripting body and signature of Functions (Actian Matrix \ Redshift)

Posted by Diego on February 19, 2017

 

Useful if you want to automatically manage permissions on functions, since you have to include the function signature in the grant\revoke statement.

 

SELECT proname,
       n.nspname||'.'||p.proname||'('||pg_catalog.oidvectortypes(p.proargtypes)||')' AS signature,
       prosrc AS body
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p ON pronamespace = n.oid;
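Since the whole point of scripting the signature is to feed it into grant\revoke statements, here is a small sketch of how the query's output could be turned into GRANTs (the role name and the sample row are made up for illustration):

```python
def grant_execute_statements(rows, grantee):
    """rows: (proname, signature, body) tuples as returned by the query above."""
    return ["GRANT EXECUTE ON FUNCTION {0} TO {1};".format(signature, grantee)
            for _, signature, _ in rows]

sample = [("f_mask", "public.f_mask(character varying)", "...")]
for stmt in grant_execute_statements(sample, "analyst_role"):
    print(stmt)  # GRANT EXECUTE ON FUNCTION public.f_mask(character varying) TO analyst_role;
```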

 

 

image

Posted in AWS Redshift \ Actian Matrix, I.T., SQL

How to move a VM file to another location

Posted by Diego on November 25, 2016

1) Copy the "VirtualBox VMs" folder from its current location to the new location you desire;

2) Change the "Default Machine Folder" to the new location (go to File -> Preferences -> General):

clip_image001

3) On the VirtualBox Manager, right-click on your VM and click "Remove" -> “Remove only”.

clip_image002

4) Close and then reopen the VM Manager

5) Go to “Machine” -> “Add” (it should default to the new folder) and re-add the VM

Posted in Uncategorized

How to connect to PostgreSQL running on an Ubuntu VM

Posted by Diego on September 19, 2016

 

FYI, PostgreSQL can be installed using:

 


sudo apt-get update
sudo apt-get install postgresql postgresql-contrib

 

 

Optionally, install pgAdmin III (https://www.pgadmin.org/download/windows.php) to test the connectivity.

 

VirtualBox creates virtual machines with the NAT network type by default. If you want to run server software inside a virtual machine, you’ll need to change its network type or forward ports through the virtual NAT.

With the NAT network type, your host operating system performs network address translation. The virtual machine shares your host computer’s IP address and won’t receive any incoming traffic. You can use bridged networking mode instead — in bridged mode, the virtual machine will appear as a separate device on your network and have its own IP address.

To change a virtual machine’s network type in VirtualBox, right-click a virtual machine and select Settings, go to “Network” and change the “Attached to” option to “Bridged Adapter”:

 

clip_image001[4]

 

You can check your VM’s new IP by typing “ifconfig” or clicking on the top right corner icon -> “System Settings” -> “Network”:

 

clip_image003[4]

 

 

Then, navigate to Postgres’ installation folder (normally /etc/postgresql/9.5/main) and edit the postgresql.conf file, setting listen_addresses to whatever suits you (I set it to all):

 


sudo vi postgresql.conf

clip_image005[4]
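For reference, the change in the screenshot amounts to this single line in postgresql.conf (I set it to listen on all interfaces; adjust to taste):

```conf
# postgresql.conf
listen_addresses = '*'        # default is 'localhost'
```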


sudo systemctl restart postgresql  #restart

 

That will make PostgreSQL listen on all IPs, as we can see in this before\after:

clip_image007[4]

 

(That will allow you to connect using the server’s IP address – before this step you’d be able to connect only using “localhost”.)

Finally, edit the pg_hba.conf file (https://www.postgresql.org/docs/9.5/static/auth-pg-hba-conf.html), which controls client authentication, and add a row that allows all connections:

 


sudo vi pg_hba.conf

clip_image009[4]
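Again for reference, the row I added looks something like the one below (this opens the server to every IPv4 address with password auth – fine for a lab VM, too loose for anything else):

```conf
# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    md5
```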

 


sudo systemctl restart postgresql

 

 

By doing so, you should be able to access PostgreSQL from outside your VM.

Posted in PSQL

TensorFlow working on an Ubuntu virtual Machine

Posted by Diego on August 5, 2016

 

This is a quick step-by-step on how to get TensorFlow working on an Ubuntu virtual machine. The initial steps are very high level because it is not hard to find tutorials or documentation that cover them.

1) Download and install Oracle Virtual Box from https://www.virtualbox.org/wiki/Downloads

 

2) Download an Ubuntu image from http://www.ubuntu.com/download

3) Create a new VM (make sure to always run “as administrator” – you may get errors otherwise):

image

4) Once the VM is up and running, install software to help development (type CTRL + ALT + T – shortcut to open a new terminal):

 

 

Guest Additions:

·         sudo apt-get install virtualbox-guest-utils virtualbox-guest-x11 virtualbox-guest-dkms


Sublimetext:

·         Download from: sublimetext.com

·         Run: sudo dpkg -i /home/diego/Downloads/<DownloadedFile>

·         sudo apt-get install -f


htop:

·         sudo apt-get install htop

Anaconda (Jupyter notebooks – optional if you want to run the Udacity examples):

·         https://www.continuum.io/downloads

·         bash Anaconda2-4.1.1-Linux-x86_64.sh

·         Restart the VM or run “. .bashrc” (dot, space, dot-bashrc – without the quotes) in your home directory to update the PATH

·         Type jupyter notebook to start it

TensorFlow:

If you did not install Anaconda (probably using /usr/bin/python):

 

·         Run: sudo apt-get install python-pip python-dev

·         # For Ubuntu/Linux 64-bit, CPU only, Python 2.7

·         $ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl

·         # Python 2:  $ sudo pip install --upgrade $TF_BINARY_URL

 

If you installed Anaconda (probably using /home/<user>/anaconda/…):

·         # Python 2.7:  conda create -n tensorflow python=2.7

·         conda install -c conda-forge tensorflow

 

 

image

 

 

 

Mounting a network driver:

 

* Mount:

Go to the settings on your VM and add a folder:

image

 

On Unix run:

sudo mount -t vboxsf TensorflowVMShared /home/diego/Documents/myMountedFolder

 

where:

TensorflowVMShared is the Windows alias you created and

/home/diego/Documents/myMountedFolder is the folder on Unix

 

 

* See a list of mounts:

cat /proc/mounts

OR

df -aTh

 

* Remove the mounts:

sudo umount -f /home/diego/Documents/myMountedFolder

 

PyCharm:

Run:

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer

Download and unpack PyCharm (move it to the desired folder, like “opt”)

Run pycharm.sh from the bin subdirectory

 

Creating a launcher icon:

http://askubuntu.com/questions/330781/how-to-create-launcher-for-application

https://help.ubuntu.com/community/UnityLaunchersAndDesktopFiles

Posted in Data Science, Deep Learning, Machine Learning

How to install PyGame using Anaconda

Posted by Diego on June 14, 2016

 

Search the package using:

binstar search -t conda pygame

 

package

 

 

Install the package for the desired platform:

 

conda install -c https://conda.binstar.org/prkrekel pygame

 

 

package2

Posted in Python

Docker Containers on Windows

Posted by Diego on February 23, 2016

This post is a follow-up on Kaggle’s “How to get started with data science in containers” post.
I was having some trouble setting up the environment on Windows, so I decided to document my steps\discoveries.
I also (like to think that I) improved the script provided a little by preventing multiple containers from being created.

 

As the post above indicates, the first thing you have to do is to install Docker: https://docs.docker.com/windows/

Because the Docker Engine daemon uses Linux-specific kernel features, you can’t run Docker Engine natively in Windows. Instead, you must use the Docker Machine command, docker-machine, to create and attach to a small Linux VM on your machine.

The installer is really straightforward, and after it completes, it will leave a link to the Quick Start Terminal on your desktop. When you run it, the “default” machine will be started, but as mentioned in Kaggle’s post, “it’s quite small and struggles to handle a typical data science stack”, so the post walks you through creating a new VM called “docker2”. If you followed the steps, you will be left with two VMs:

 

image

 

To avoid having to run

docker-machine start docker2
eval $(docker-machine env docker2)

to change the VM every time you launch your CLI, you can edit the start.sh script it calls and change the VM’s name so it will also start the docker2 VM.

 

image                          image
     

 

image

 


When you log in you should be in your user folder “c:\users\<your_user_name>”.

You can see that I already have Kaggle’s python image ready to be used and that I have no containers running at the moment:

 

image

 

 

To make things simpler (and to avoid seeing all the files we normally have in our user folder), I created and will be working inside the _dockerjupyter folder.

Inside that folder, I created a script called StartJupyter, which is based on the one found in the tutorial, but with a few modifications.

There is of course room for improvement in this script – like parameterizing the container name, the port, and maybe the VM name (in case you chose anything other than “docker2”).

 

CONTAINER=KagglePythonContainer

RUNNING=$(docker inspect -f {{.State.Running}} KagglePythonContainer 2> /dev/null) #discard stderr when the container doesn't exist yet (first run)

if [ "$RUNNING" == "" ]; then  
	echo "$CONTAINER does not exist - Creating"
	(sleep 3 && start "http://$(docker-machine ip docker2):8888")&	  
	docker run -d -v $PWD:/tmp/dockerwd -w=/tmp/dockerwd -p 8888:8888 --name KagglePythonContainer -it kaggle/python jupyter notebook --no-browser --ip="0.0.0.0" --notebook-dir=/tmp/dockerwd
	exit
fi


if [ "$RUNNING" == "true" ]; then  
	echo "Container already running"
else
	echo "Starting Container" 	
	docker start KagglePythonContainer	
fi 

#Launch URL
start "http://$(docker-machine ip docker2):8888"  

 

The two main modifications are the “-d” flag, which runs the container in the background (detached mode), and “--name”, which I use to control the container’s state and existence. This is important to avoid creating more than one container, which would cause a conflict (due to the port assignment) and leave it stopped. This is what would happen:

image

 

The “-v $PWD:/tmp/dockerwd” will map your current working directory to “/tmp/dockerwd” inside the container, so Jupyter’s initial page will show the content of the folder you are in.

Running the code will create and start the container in detached mode:

 

image

 

It will also open the browser in your current working directory, where you’ll probably only see the StartJupyter script you just ran. I’ve also manually added a “test.csv” file to test the load:

 

image

 

By creating a new notebook, you can see the libraries’ location (inside the container), see that the notebook is indeed working from /tmp/dockerwd, and read files from your working directory (due to the mapping made):

 

image

 

 

Now, since we started the container in detached mode (-d), we can connect to it using the exec command. By doing so, we can navigate to the /tmp/dockerwd folder, see that our files are there, and even create a new file, which will of course be displayed on the Jupyter URL:

docker exec -it KagglePythonContainer bash

 

image

 

 

We can also see that we have two different processes running (ps -ef):

FYI: you will need to run

apt-get update && apt-get install procps 

to be able to run the ps command

 

image

 

 

Finally, as mentioned before, the script will never start more than one container. So, if for some reason you stop your container (if your VM is restarted, for example), the script will just fire up the container named “KagglePythonContainer”, or it won’t do anything if you call the script with the container already running. In both cases Jupyter’s URL will always be displayed:

image

 

 

In order to understand Docker, I highly recommend these tutorials:

https://training.docker.com/self-paced-training

Posted in Data Science, Docker, I.T., Python

Dynamic Parameters on Tableau

Posted by Diego on October 29, 2015

 

This is my solution to deal with Dynamic Parameters on Tableau.

Just to recap: the problem is that, once a parameter list is populated on Tableau Desktop and the workbook is deployed to the server, it is not possible to update the list if new values are required.

 

I should start by saying that this is not a simple solution.

I mean, the code actually is, but unfortunately there is more to it, mainly because it will require you to republish the dashboard using tabcmd (or any other means) each time you want to change the parameter.

But that’s what I’ve got until Tableau decides to support this much-requested functionality.

 

 

My idea is to use Python to access the workbook’s XML, navigate the tree to the desired parameter (in my example called “MyParameter”), clear its values (except for the value “None” – this is my particular use case and can be removed very easily) and then inject the desired new values based on a list.

 

 

This is a before\after example – as expected, the list has been updated:

 

clip_image001clip_image003

 

 

 

 

And here is the core code:

import ConfigParser
from pkg_resources import resource_filename
from xml.etree import ElementTree

def ParameterUpdate(envname, dashboardname):
    #the 5 lines below are optional. They can be used if the Source\Destination dashboards are expected to be on a
    #different path than the one the python file is executing from, which is often the case, but not for this example:
    configfile = resource_filename(__name__, 'tableau.ini')
    config = ConfigParser.ConfigParser()
    config.read(configfile)
    inputdir = config.get(envname, "InputDirectory")
    outputdir = config.get(envname, "OutputDirectory")

    #reads the twb file
    tableaufile = ElementTree.parse('{0}/{1}.twb'.format(inputdir, dashboardname))

    #gets a list with the new parameters
    new_parameter_list = __getNewParameterlist()

    #gets the root XML
    fileroot = tableaufile.getroot()

    #finds the parameter called "MyParameter"
    xmlsection = fileroot.find("datasources/datasource/column[@caption='{0}']/members".format("MyParameter"))

    #Inject the new list into the XML file
    __injectParameters(xmlsection, new_parameter_list)

    newxmlfile = ElementTree.tostring(fileroot, encoding='UTF-8', method='xml')

    filelocation = '{0}/{1}.twb'.format(outputdir, "Output")
    with open(filelocation, 'w') as text_file:
         text_file.write(newxmlfile)
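The two helper functions are not shown above; here is a minimal sketch of what they could look like. The value list, the `<member value="…"/>` layout and the plain function names are my assumptions – check them against your own .twb file:

```python
from xml.etree import ElementTree

def get_new_parameter_list():
    # Hypothetical source of the new values; in practice this could come
    # from a database query, a config file, etc.
    return ["Value A", "Value B"]

def inject_parameters(members_element, new_values):
    # Clear the existing <member> entries, keeping the special "None" value,
    # then append one <member> per new value.
    for member in list(members_element):
        if member.get("value") != "None":
            members_element.remove(member)
    for value in new_values:
        ElementTree.SubElement(members_element, "member", {"value": value})
```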

 

 

 

Of course, this solution requires a whole messaging system between whatever process triggers the parameter update, the Python code and the tabcmd call that deploys the dashboard, which I won’t be covering here because it is out of this post’s scope.

 

The complete code and a working example can be fully downloaded here.

 

 

OBS:

Due to the nature of ElementTree, this code only works on Python 2.7 (not sure about later versions – I’d imagine it does; pretty sure it doesn’t work on 2.6).

 

OBS2:

I have recently been shown, by one of the product managers at Tableau, this other workaround using JavaScript on the server:

https://www.youtube.com/watch?v=6GlNxEN1Guw

Which I’d honestly have considered had I not spent time writing my own solution.

Posted in Python, Tableau