Tech C**P
Python and Linux instructor and programmer @alirezastack
How to add authentication to MongoDB?

First you need to create an admin user, so bring up a mongo shell by typing mongo in your terminal and hitting Enter. Users are stored in the admin database, so switch to it:

use admin


Now, using the createUser database method, we will create a user called myUserAdmin:

db.createUser(
  {
    user: "myUserAdmin",
    pwd: "1234qwer",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)

Disconnect the mongo shell.

The important note is to start mongod with the --auth flag, otherwise authentication will not be enforced:

mongod --auth --port 27017 --dbpath /data/db1
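
Once auth is enabled, clients authenticate against the admin database. When connecting from Python, the credentials go into the connection URI and must be percent-encoded; here is a minimal sketch using only the standard library (the host, port, and credentials are the example values from above):

```python
from urllib.parse import quote_plus

# Example credentials from the createUser step above
user = "myUserAdmin"
password = "1234qwer"

# quote_plus percent-encodes characters like '@' or ':' that would
# otherwise break the URI
uri = "mongodb://%s:%s@localhost:27017/?authSource=admin" % (
    quote_plus(user), quote_plus(password))

print(uri)
# mongodb://myUserAdmin:1234qwer@localhost:27017/?authSource=admin
```

This URI can then be handed to a driver such as pymongo.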

#mongodb #mongo #auth #authentication #create_user
What are sharding and replication in MongoDB? What are their differences?

MongoDB has 2 concepts that can confuse even intermediate programmers! So let's break them down and explain both in depth.

1- Take a deep breath. :)

2- Replication: to replicate means to reproduce or make an exact copy of something. MongoDB replication mirrors the entire data set onto other servers. This process is used for fault tolerance. If there are 4 mongo servers and your data set is 1 terabyte, each node in the replica set will hold 1 terabyte of data.
In a replica set there is ONE master (primary) node and one or more slaves (secondaries). Read performance can be improved by adding more and more slaves, but not writes! Adding more slaves does not help writes, because all writes go to the master first and are then propagated to the slaves.

3- Sharding: sharding, on the other hand, is a completely different concept. If you have 1 terabyte of data and 4 servers, each node will hold 250 gigabytes of it. As you may have guessed, this is not fault tolerant by itself, because each portion of the data resides on a separate server. Each read and write is routed to the corresponding shard, so adding more shards improves both read and write performance across the cluster. When one shard of the cluster goes down, any data on it becomes inaccessible. For that reason each shard should itself be a replica set, although this is not required.

4- Take another deep breath, and let's get back to work.
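
The routing idea behind sharding can be sketched in a few lines of Python: a shard key is hashed to pick which of the N servers holds a document. This is a toy illustration of the concept, not how mongos actually routes:

```python
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def pick_shard(shard_key: str) -> str:
    """Map a shard key to one of the servers via a stable hash."""
    digest = hashlib.md5(shard_key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every read/write for the same key is routed to the same shard
print(pick_shard("user:42") == pick_shard("user:42"))  # True
```

Because each key lands on exactly one shard, losing that shard loses that slice of the data, which is why each shard should itself be a replica set.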

#mongodb #mongo #shard #replica #replication #sharding #cluster
Migrate a running process into tmux

reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don’t want to interrupt it? Just start a tmux or screen session, use reptyr to grab the process, and then kill the ssh session and head on home.

sudo apt-get install -y reptyr    # For Ubuntu users

Suspend the current foreground job using CTRL-Z; this stops it and sends it to the background.

List all the background jobs using jobs -l. This will get you the PID.

jobs -l
[1] + 16189 suspended vim foobar.rst


Here the PID is 16189.
Start a new tmux or screen session. I will be using tmux:

tmux


Reattach the background process using:

reptyr 16189

If this error appears:

Unable to attach to pid 16189: Operation not permitted
The kernel denied permission while attaching


Then type in the following command as root.

echo 0 > /proc/sys/kernel/yama/ptrace_scope

#reptyr #tmux #screen #pid
1. List all Open Files with lsof Command

> lsof
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd DIR 253,0 4096 2 /
init 1 root rtd DIR 253,0 4096 2 /
init 1 root txt REG 253,0 145180 147164 /sbin/init
init 1 root mem REG 253,0 1889704 190149 /lib/libc-2.12.so

The FD column stands for File Descriptor; its values include:
- cwd: current working directory
- rtd: root directory
- txt: program text (code and data)
- mem: memory-mapped file


To get the count of open files you can pipe lsof into wc -l, as follows:

lsof | wc -l


2. List User Specific Opened Files

lsof -u alireza
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1838 alireza cwd DIR 253,0 4096 2 /
sshd 1838 alireza rtd DIR 253,0 4096 2 /
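
On Linux the same information is exposed under /proc; a quick Python sketch that counts a process's own open file descriptors by listing that directory (assumes a Linux /proc filesystem):

```python
import os

def open_fd_count(pid="self"):
    """Count open file descriptors by listing /proc/<pid>/fd (Linux only)."""
    return len(os.listdir("/proc/%s/fd" % pid))

# Prints the number of descriptors this interpreter currently has open
print(open_fd_count())
```

Passing a numeric PID instead of "self" inspects another process, subject to the usual permission checks.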

#linux #sysadmin #lsof #wc #file_descriptor
Mastering Linux Shell Scripting

#ebook #book #shell #scripting #linux #pub
Today we encountered slowness on MongoDB that affected the entire infrastructure. The problem was that a few specific slow queries caused all the other queries to wait. YES, we use indexes, and YES, we ran explain on those queries and confirmed they were using the indexes. To mitigate the issue we had to kill the very slow find queries until we could fix the root cause.

The function below kills slow queries:

function (sec) {
    db.currentOp()['inprog'].forEach(function (query) {
        if (query.op !== 'query') { return; }
        if (query.secs_running < sec) { return; }
        print(['Killing query:', query.opid,
               'which was running:', query.secs_running, 'sec.'].join(' '));
        db.killOp(query.opid);
    });
}


We need to save this function in mongo itself so we can run it directly. To save it in MongoDB, use db.system.js.save:

db.system.js.save({
    _id: "kill_slow_queries",
    value: function (sec) {
        db.currentOp()['inprog'].forEach(function (query) {
            if (query.op !== 'query') { return; }
            if (query.secs_running < sec) { return; }
            print(['Killing query:', query.opid,
                   'which was running:', query.secs_running, 'sec.'].join(' '));
            db.killOp(query.opid);
        });
    }
})


I will explain the above function parts in a different post. Now you need to load server scripts and then run it:

db.loadServerScripts()
kill_slow_queries(20)

The above call kills queries that have taken longer than 20 seconds.

NOTE: you can create a shell script and run it periodically using crontab until you fix the slowness on your server.

#mongodb #mongo #function #kill_slow_queries #currentOp
MongoDB has a top utility, similar to the Linux top command, that displays how much time was spent on reads, writes, and in total for every namespace (collection).


To run mongotop you just need to run:

mongotop


The output is something like below:

root@hs-1:~# mongotop
2018-01-09T13:42:42.177+0000 connected to: 127.0.0.1

ns total read write 2018-01-09T13:42:43Z
users.profile 28ms 28ms 0ms
authz.tokens 7ms 7ms 0ms
mielin.obx 3ms 3ms 0ms
conduc.contacts 1ms 1ms 0ms
admin.system.roles 0ms 0ms 0ms


The above command reports every second; to change the interval, pass it as an argument, e.g. mongotop 5 for every five seconds.

If you want the result in json use mongotop --json.

If you want the result printed a fixed number of times before exiting, use mongotop --rowcount 1.

#mongodb #mongo #mongotop #read #write
In previous posts we explained query slowness. Here we explain the different parts of that function.

db.currentOp: this command displays in-progress operations in MongoDB. The response is in JSON format, so you can access parts of it, e.g. db.currentOp()['inprog']. The response contains many useful fields such as lock status, numYields, and so on.
The part we are interested in is opid, the id of the query operation. The op field of each operation shows its type; it can be an internal database command, an insert, or a query. secs_running shows how long the operation has been running, in seconds, and is what we check to decide whether a query has taken too long.


db.killOp: killing an operation is as simple as passing the opid number to killOp, as below:

db.killOp(6123213)

That is all we did in the previous posts to kill slow queries in MongoDB.

#mongodb #mongo #currentOp #killOp #opid
See live disk IO status by using iostat:

iostat -dx 1

The output has many columns. The parts I'm interested in for now are r/s, which is reads per second, and w/s, which is writes per second. To see the read and write sizes per second, see the rkB/s and wkB/s columns respectively.

NOTE: if you don't have iostat on your Linux OS, install it on Debian by issuing the apt-get install sysstat command.


#linux #debian #iostat #read_per_second #write_per_second #sysstat
Benchmark disk performance using hdparm & dd.

In order to get a meaningful result run the test a couple of times.


Direct read (without cache):


$ sudo hdparm -t /dev/sda2
/dev/sda2:
Timing buffered disk reads: 302 MB in 3.00 seconds = 100.58 MB/sec


And here's a cached read:


$ sudo hdparm -T /dev/sda2
/dev/sda2:
Timing cached reads: 4636 MB in 2.00 seconds = 2318.89 MB/sec

-t: Perform timings of device reads for benchmark and comparison
purposes. For meaningful results, this operation should be repeated
2-3 times on an otherwise inactive system (no other active processes)
with at least a couple of megabytes of free memory. This displays
the speed of reading through the buffer cache to the disk without
any prior caching of data. This measurement is an indication of how
fast the drive can sustain sequential data reads under Linux, without
any filesystem overhead. To ensure accurate measurements, the
buffer cache is flushed during the processing of -t using the
BLKFLSBUF ioctl.

-T: Perform timings of cache reads for benchmark and comparison purposes.
For meaningful results, this operation should be repeated 2-3
times on an otherwise inactive system (no other active processes)
with at least a couple of megabytes of free memory. This displays
the speed of reading directly from the Linux buffer cache without
disk access. This measurement is essentially an indication of the
throughput of the processor, cache, and memory of the system under
test.


You can use dd command to test your hard disk too:


$ time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"; rm ddfile

rm ddfile removes the test file created by the dd command (of=ddfile; the of parameter stands for output file).
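
The same write test can be sketched in Python: write a file of known size, fsync it, and divide bytes by elapsed time. This is a rough analogue of the dd pipeline above, not a rigorous benchmark (it does not bypass the page cache):

```python
import os
import tempfile
import time

def write_throughput_mb_s(size_mb=64, block_kb=8):
    """Sequential-write test: write size_mb of zeros in block_kb chunks,
    fsync, and time the whole thing."""
    block = b"\0" * (block_kb * 1024)
    blocks = size_mb * 1024 // block_kb
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    try:
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())   # like the trailing `sync` in the dd test
    finally:
        os.remove(path)            # like `rm ddfile`
    elapsed = max(time.perf_counter() - start, 1e-9)
    return size_mb / elapsed

print("%.1f MB/s" % write_throughput_mb_s())
```

As with hdparm and dd, run it a few times and average for a meaningful number.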


These are some useful and simple disk benchmarking tools.

#linux #benchmark #hdd #dd #hard_disk #hdparm
You can see both direct and cache read using command below:

$ sudo hdparm -Tt /dev/sda

/dev/sda:
Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec

#linux #benchmark #hdd #hdparm #sda
Friends who are familiar with the algorithm below and can help with a master's degree project, please contact the user below (fee negotiable):

Metropolis–Hastings algorithm (Markov chain Monte Carlo (MCMC) method)

👤 @shararehadipour
MySQL insert

If, like me, you have bulk scripts for your email/SMS notifications and you are sending to thousands of users, you will definitely get stuck in the middle of the bulk run, or it will be unacceptably slow. First of all, open only ONE MySQL connection. Second, if you are inserting your data one row at a time, you're dead again! Use executemany, which inserts data into MySQL in bulk rather than one by one:

client.executemany(
    """INSERT INTO email (name, spam, email, uid, email_content)
       VALUES (%s, %s, %s, %s, %s)""",
    [
        ("Ali", 0, '[email protected]', 1, 'EMAIL_CONTENT'),
        ("Reza", 1, '[email protected]', 2, 'EMAIL_CONTENT'),
        ("Mohsen", 1, '[email protected]', 3, 'EMAIL_CONTENT')
    ]
)

Another note for bulk insertion: avoid disk IO where possible, and use Redis, memcached, or similar for looking up data like users' phone numbers or emails. It will tremendously improve the performance of your bulk script.
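
The same executemany pattern works with any DB-API driver. Here is a self-contained sketch using the stdlib sqlite3 module as a stand-in for the MySQL client (same interface, ? placeholders instead of %s; the rows and addresses are made-up example data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE email (name TEXT, spam INTEGER, email TEXT, "
    "uid INTEGER, email_content TEXT)")

rows = [
    ("Ali", 0, "ali@example.com", 1, "EMAIL_CONTENT"),
    ("Reza", 1, "reza@example.com", 2, "EMAIL_CONTENT"),
    ("Mohsen", 1, "mohsen@example.com", 3, "EMAIL_CONTENT"),
]
# One bulk call instead of len(rows) separate INSERT statements
conn.executemany(
    "INSERT INTO email (name, spam, email, uid, email_content) "
    "VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM email").fetchone()[0])  # 3
```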

#python #mysql #executemany #redis #bulk #email #sms
It's Dangerous!

Today I had to implement email unsubscription, and for that I had to pass some data along with the link as a token in emails. The best candidate for this scenario is itsdangerous. If I didn't use it, I would have to store tokens in a Redis DB and match them against the tokens received from unsubscribe links in emails. That all adds complexity.

Sometimes you just want to send some data to untrusted environments. But how to do this safely? The trick involves signing. Given a key only you know, you can cryptographically sign your data and hand it over to someone else. When you get the data back you can easily ensure that nobody tampered with it.

Granted, the receiver can decode the contents and look into the package, but they can not modify the contents unless they also have your secret key. So if you keep the key secret and complex, you will be fine.

Internally itsdangerous uses HMAC and SHA1 for signing by default and bases the implementation on the Django signing module. It also however supports JSON Web Signatures (JWS). The library is BSD licensed and written by Armin Ronacher though most of the copyright for the design and implementation goes to Simon Willison and the other amazing Django people that made this library possible.


Example Use Cases:
- You can serialize and sign a user ID for unsubscribing from newsletters into URLs. This way you don’t need to generate one-time tokens and store them in the database. The same goes for any kind of account activation link and similar things.

- Signed objects can be stored in cookies or other untrusted sources which means you don’t need to have sessions stored on the server, which reduces the number of necessary database queries.

- Signed information can safely do a roundtrip between server and client in general which makes them useful for passing server-side state to a client and then back.


To install it using pip:

pip install itsdangerous

Sample code:

>>> from itsdangerous import URLSafeSerializer
>>> s = URLSafeSerializer('secret-key')
>>> s.dumps([1, 2, 3, 4])
'WzEsMiwzLDRd.wSPHqC0gR7VUqivlSukJ0IeTDgo'
>>> s.loads('WzEsMiwzLDRd.wSPHqC0gR7VUqivlSukJ0IeTDgo')
[1, 2, 3, 4]
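
The signing idea itself can be sketched with the standard library: HMAC the payload with a secret key and reject anything whose signature does not match. This is a toy version of what itsdangerous does, without its salt and timestamp handling:

```python
import base64
import hashlib
import hmac

SECRET = b"secret-key"

def sign(data: bytes) -> bytes:
    """Append a url-safe base64 HMAC-SHA1 signature to the payload."""
    sig = hmac.new(SECRET, data, hashlib.sha1).digest()
    return data + b"." + base64.urlsafe_b64encode(sig)

def unsign(token: bytes) -> bytes:
    """Verify the signature and return the payload, or raise on tampering."""
    data, _, sig = token.rpartition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, data, hashlib.sha1).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch: data was tampered with")
    return data

token = sign(b"user-42")
print(unsign(token))  # b'user-42'
```

Note that compare_digest is used for the comparison to avoid timing attacks, which is the same precaution real signing libraries take.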

#python #itsdangerous #URLSafeSerializer
What is cron?
cron is a Unix/Linux utility that allows tasks to be run automatically in the background at regular intervals by the cron daemon.

What is crontab?
Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at specified times.

What is a cron job?
A cron job or cron schedule is a specific set of execution instructions specifying the day, time, and command to execute.

To list current cronjobs, use crontab -l.

Edit crontab file, or create one if it doesn’t already exist by issuing the command below:
crontab -e

It may be opened by nano by default. If you want to change the default editor for crontab, use the command below:
export EDITOR=vim

NOTE: to persist this setting, put the above export command inside ~/.bashrc.


The general form of a cronjob is like below:

* * * * *   command to be executed
In total we have 5 stars. From left to right, the first star is the minute you want your cronjob to run at (0 - 59).

Second star is hour (0 - 23).

Third star is day of month (1 - 31).

Fourth star refers to month (1 - 12).

And the last star refers to the day of the week (0 - 6). Be careful: 0 is Sunday!


Now let's create a sample cronjob that restarts our eshop service at 22:00 every day:

0 22 * * * svc -k /etc/service/eshop


The remaining stars say: run it every day of the month, every month, and every day of the week.
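
The matching logic the cron daemon applies each minute can be sketched in Python. This toy version supports only '*' and plain numbers per field, not ranges or steps:

```python
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a 5-field cron expression ('*' or a number per field)
    against a given time."""
    fields = expr.split()
    actual = [when.minute, when.hour, when.day, when.month,
              (when.weekday() + 1) % 7]  # cron convention: 0 = Sunday
    return all(f == "*" or int(f) == a for f, a in zip(fields, actual))

# The eshop example fires at 22:00 on any day
print(cron_matches("0 22 * * *", datetime(2018, 1, 9, 22, 0)))  # True
print(cron_matches("0 22 * * *", datetime(2018, 1, 9, 21, 0)))  # False
```

Note the weekday adjustment: Python counts Monday as 0, while cron counts Sunday as 0.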

#linux #cron #cronjob #crontab