A new digital assets exchange

Gets funding in an unusual way

An IPO. Nothing new.

Business: it provides digital asset trading services. Such a business has been done before.

Amount: 117 million US dollars. This too has been done before; such an amount is not unachievable.

So what is new?

The SEC (Securities and Exchange Commission)-approved IPO is the first of its kind: investors can pay in the cryptocurrencies Bitcoin (BTC), Ether (ETH), and USD Coin (USDC).

Since launching the IPO, INX (the start-up behind it) had raised $7.5 million by Sept. 10, the firm’s representatives said.

More than 3,000 retail and accredited investors registered for the INX token offering in the first three days.

Yes, that’s right. You read it correctly: a token. This is a new form of investment instrument, instead of shares, bonds, or other forms of equity.

For those unfamiliar, a token is a form of digital asset that lives on a blockchain (a specific type of distributed database network), is transferred back and forth over that blockchain, and is issued electronically using a smart contract (a piece of software that runs on the blockchain).

So, no more need for paper, approvals, stamps, formalities and similar red tape. Also, no more need for brokers: investors can trade tokens on digital asset exchanges.
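For those who like to see things in code, here is a toy sketch in Python (not a real smart contract, just an illustration; the issuer name and amounts are made up) of how a token ledger tracks balances and transfers:

```python
# Toy illustration only: a real token lives on a blockchain and is
# governed by a smart contract; this dict-based ledger merely mimics
# the balance-tracking and transfer behaviour described above.

class TokenLedger:
    def __init__(self, total_supply, issuer):
        # the issuer starts out holding the whole supply
        self.balances = {issuer: total_supply}

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient token balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = TokenLedger(total_supply=1_000_000, issuer="issuer")
ledger.transfer("issuer", "investor_1", 500)  # an investor buys 500 tokens
print(ledger.balances["investor_1"])  # 500
```

On a real blockchain, the smart contract enforces these rules for all participants, with no central party maintaining the ledger.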

More precisely, this is a hybrid of an IPO and a security token offering (STO) that is registered with the SEC (a registration statement relating to the offering of these securities was declared effective by the SEC on Aug. 20), allowing everyday investors to participate legally.

The funds raised from the sale of INX tokens will be used to launch a multiservice digital asset platform. The company intends to create a regulated trading platform for cryptocurrencies, security tokens and their derivatives, as well as to launch a cash reserve fund.

STO (Security Token Offering): why is this a big deal anyway?

Because digital crowdfunding now takes place in a regulated form.

Why was it not regulated until now?

Because innovation has always been a step ahead of regulation, I would say, and blockchain is no exception. The unregulated form was the ICO (Initial Coin Offering).

In order to understand what an STO is, one must first understand the ICO. The latter is a token offering from a company or organization in order to raise capital for a project; buyers are issued digital tokens. Unfortunately, ICOs are largely unregulated, thus putting investors at risk.

Sources:

https://cointelegraph.com/news/inx-exchange-launching-sec-registered-hybrid-token-share-ipo-this-week

https://101blockchains.com/sto-vs-ico-the-difference

Web scraping: get competitors’ product prices from their websites. How to achieve this with a trendy programming language (Python) in 20 lines of code (part 1)

Web scraping is copying the content of a web page (or targeted parts of an entire site) automatically, using a programmed robot (a pre-programmed application). I use Python (and an open-source library) to successfully tackle potentially any web scraping task.

An interesting use of web scraping: automatically (hourly) collecting the prices of products sold by the competition

In this article I presume the reader has a business that sells to consumers (B2C) and, like any normal business, needs to diligently monitor the competition by having data about its competitors’ prices available. I also presume that each competitor has a website (an eCommerce site, for example) that displays the price of each and every product they offer to consumers.

Disclaimer: it is debatable whether web scraping is legal or illegal. A recent case showed that web scraping is not clearly legal, although legal issues such as copyright infringements or contract law breaches were not addressed in that case (as per the author, Mr Eric Goldman, a professor at Santa Clara University School of Law, where he teaches and writes about Internet Law, Intellectual Property and Advertising Law). Therefore, before pursuing web scraping, I strongly recommend the reader consult his/her lawyer.

I decline any responsibility whatsoever for any endeavour readers might pursue upon reading this article. Use web scraping at your own risk!

What I think is still legal: no competitor can forbid you from looking. You open your browser, enter the URL of the competitor’s website, look at the figures, and take a pencil and write down the prices that website displays for a specific product. But this is a manual approach, obviously far from an efficient one.

Remark: the legal case mentioned in the disclaimer above dealt with an automated tool. Apparently, that tool was the root cause of the problem, because it made the target website stop working.

Let’s clarify one thing: I would never recommend anything like that. Blocking websites from functioning properly is not my intention.

Now that we have passed the disclaimer part, a few words about what this article provides. I explain a very convenient, cost-effective and flexible method to get public info from your competitors’ websites (without disrupting any target website) and build a database for further analysis. Such a database helps any business assess the competition and improve internal decisions.

Thus, data gathered from competitors or from the market falls into the “business intelligence” category. It is normal to use it when drafting your price strategy or setting the size of the discounts you are willing to offer consumers (in order to stand out against your main competitors), all aiming at maximising your sales volume and profit.

Advantage: gathering market data into the company’s database, where it is available to your analysts, is what keeps a business adaptable to the market. You might decide to reduce the prices of one category of products and/or increase the prices of others.

A manual approach won’t work for, say, hundreds or thousands of products. If an important volume of market data is to be gathered from competitors, the process becomes inefficient: you need additional staff to do it manually (or you can do it with less staff, but it takes much more time).

A pre-programmed application (or robot) designed to extract the right data at the right time has the following advantages:

  • it extracts data at the desired frequency (even hourly), so you are always up to date with the prices practiced by the competition;
  • no manual errors;
  • the data obtained can be saved in your database, or in whatever format is needed (e.g. CSV), ready to be imported into your company’s database;
  • the body of data obtained allows various models to be created: scenarios of price modifications with corresponding sales volumes and related profit by product, and/or total profit maximisation.
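As a quick illustration of the third bullet, saving gathered prices into a CSV file takes only a few lines of Python (the product names and prices below are made up):

```python
import csv

# Hypothetical scraped data: (product, competitor price) pairs.
scraped_prices = [
    ("Laptop X200", 799.99),
    ("Monitor 24in", 149.50),
]

# Write a CSV file ready to be imported into the company's database.
with open("competitor_prices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["product", "price"])  # header row
    writer.writerows(scraped_prices)
```

The resulting `competitor_prices.csv` can be imported by virtually any database or spreadsheet tool.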

Now, the technical part: what’s behind the scenes at the competition? Web pages and HTML

Any website has a bunch of web pages. When we visit a web page, our web browser makes a request to a web server that sends back some files that tell our browser how to render the page for us. 

The files our browser receives fall into a few main types:

  • HTML – the format/language carrying the main content of the page.
  • CSS (Cascading Style Sheets) files that add styling to make the page look nicer.
  • JS (JavaScript) files that add interactivity to web pages.
  • Images – in formats such as JPG or PNG, which allow web pages to show pictures.

HTML is the main focus for web scraping (because it carries the content we aim to extract).

HyperText Markup Language (HTML) allows you to do things similar to what you do in a word processor like Microsoft Word: make text bold, create paragraphs, and so on. HTML is not as complex as Python.

HTML consists of elements delimited by tags. Wherever you see “<” followed by one or several words and a matching “>” that ends them, you are looking at a “tag”.

Examples:

<p> – marks the beginning of a paragraph

<a> – marks a link; the opening tag is followed by the link text and finally closed by </a>

<body> – marks the start of the web page body

<head> – marks the start of the header section of the web page

and so on.

Why did I go into such detail? Because any web scraping tool typically uses these tags as the main handles for identifying and extracting the content we are interested in.
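To show how a scraper leans on these tags, here is a minimal sketch using only Python’s standard library (the HTML snippet and product data are made up; a dedicated open-source library makes this more convenient, as we will see):

```python
from html.parser import HTMLParser

# Collect the text inside every <p> tag of a (made-up) product page.
class PriceExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.paragraphs.append(data)

page = "<body><p>Product A: 19.99</p><p>Product B: 24.50</p></body>"
extractor = PriceExtractor()
extractor.feed(page)
print(extractor.paragraphs)  # ['Product A: 19.99', 'Product B: 24.50']
```

The parser walks the tags (`<p>`, `</p>`, `<body>`, …) and keeps only the text between the tags we target – exactly the mechanism scraping tools automate for us.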

The technical part of web scraping: Python as the main tool

One of the most liked programming languages as per this link is Python.

In the next article I will continue with Python and a well-known open-source module (or library) written in Python. I will explain how to approach web scraping aimed at extracting the relevant content from your competitor’s website.

Customer Revenue Optimisation platform

Use-case: CenturyLink (this company is a member of the S&P 500 index and the Fortune 500)

An interesting use of AI (Artificial Intelligence) to increase revenue through better use of available data.

Due to a long history of mergers and acquisitions, CenturyLink (a telecom company now offering a range of digital services such as communications, network services, security, cloud solutions, voice, and managed services) ended up with a number of information silos.

This is likely to happen to any company that undergoes such changes, and the more such changes, the more information silos in the resulting company.

The drawback is that such silos prevent sellers from accessing the necessary account information about customers.

CenturyLink turned to a Customer Revenue Optimisation (CRO) platform (the provider’s name is not important, as I am not writing this article to sell anything specific).

“They (i.e. sales people) need AI insights to understand the kinds of triggers telling them which customer they should reach out to, and then look for other buying signals that can be reasons to go back and contact the customer. We like the prospect of leveraging activity as a way to give our frontline managers insight into what to do next, and AI helps drive that.”

Within three months of implementing the CRO platform, CenturyLink developed $250,000 a month in recurring revenue from one opportunity manager implementation, and a 350% funnel increase from top 40 accounts with account planning, according to https://www.computerweekly.com/news/252483148/CenturyLink-digitally-revamps-sales-framework

A major benefit of the implementation was the high level of accuracy it gave CenturyLink to make sales forecasts, which helped sellers to refocus on deals that were more attainable.

For more information about which IT solution your business needs – one that aids sales, is cost-effective and is suited to your specific situation – I can provide advice (see my contact details).

SaaS businesses seem to be doing well on stock exchanges these days

A Software-as-a-Service company offers software from the cloud to its users

I touched on the topic of SaaS (“Software as a Service”) in this post, when I discussed the delivery form of this software, which looks more like a service, and a little bit about the differences compared to the classic licensing model.

Companies that offer SaaS seem to stay afloat in this new economic context.

As with any novelty introduced by a new IT model, SaaS gets traction for various reasons. One of them might be cost avoidance: the value proposition of a SaaS product may be similar to that of a classic software application, but with smaller licensing fees than the classic model. In addition, nothing is delivered physically (no CDs, DVDs, or memory sticks with compiled installation executables – and therefore no headache for the vendor in controlling media that the buyer could copy without a license).

An understandable reason might be the job cuts, which indicate that many businesses are considering reducing their costs; and since cloud computing looks like a cheaper alternative, this would be a way to minimise cash outflows during this virus crisis. Although cloud costs, if not monitored, can also add a significant amount to the bill.

Source: https://techcrunch.com/2020/05/08/saas-stocks-defy-gravity-amid-pandemic-record-job-losses/

Python on a VPS (Virtual Private Server) for those who do not know Linux

A convenient and secure enough way to code in Python on Windows using a Virtual Private Server

A Windows VPS (Virtual Private Server) is nowadays a convenient way to code in Python if you don’t know Linux. Why a VPS? If you do not want to buy a computer, want a speedy internet connection, and want no sudden power interruptions or unsaved work lost, then for $20 a month you can have a VPS with a hosting provider.

Especially if you have processing needs that might take longer than expected (I have some that take more than a day), doing it on a laptop is obviously not what you want. Even a desktop at home that runs for more than 12 hours is exposed to sudden interruptions when children run from one corner of the room to another, or the cleaning might go over the wire… you know what I mean.

I have had a VPS (running Windows Server 2012) for a year already, and it works like a charm with Python.

Note: Windows Server 2008 reaches end of support in 2020, so you might want to choose a later version of Windows Server.

Install and configure VPS

Access to the VPS needs to be done over a VPN (Virtual Private Network) or another encrypted connection. In this short article I will go with a VPN, and I assume your VPS hosting provider offers a VPN within its package.

Because on Windows RDP (Remote Desktop Protocol) is the standard way to access a computer remotely, I have checked the security of RDP, and I am not in favor of using it without additional compensating security, i.e. a VPN.

Always connect through the VPN before connecting to the VPS. This “before” IS A MUST. Otherwise you will be exposed to hackers, because the Remote Desktop Protocol has flaws (see this link, and this link in Romanian).

VPN is still good to go.

Next are the security settings I have put in place for my VPS (after also discussing with one of my friends, Adi Rusu – thank you, Adi!). In my best reasonable judgement they are secure enough, based on my 12 years of experience in the IT field, an engineering background, and a CISA (Certified Information Systems Auditor) certification.

Please feel free to use them, but as with anything in life, do not take my word for it. Check for yourself and make your own judgement.

Therefore, use them at your own risk (DISCLAIMER: these settings do not warrant that you will be free from security incidents).

  • Create a user that is not an admin (with a username that is not JIM, JOE, ADMINPOWER, etc., but something random like “Tdte59&eg0Y7)df6”)
  • Allocate only Remote Desktop rights to this new user. This is the only user that will be used to access the VPS
  • Set a 24h session for this user only (to keep the work going 24/7 and to avoid scripts being stopped when the session ends)
  • Create a second user, with admin rights, also with random characters as a username (not ADMIN or ADMINISTRATOR, but something like “Rgdy6AGDTnr6el&5e”). The longer the better.
  • Passwords of at least 16 characters (special characters, upper case and numbers) for all users. The longer the better.
  • I saved these passwords (and the long usernames) in a text file on a memory stick that I attach to my laptop for a short time (30 seconds or so). I use that file only when connecting remotely to the VPS; once connected, the stick has done its job and I remove it. This makes my life easier and a hacker’s life harder. Easier for me: I can copy/paste from the memory stick when starting the Remote Desktop connection, so I do not need to remember such long and complex passwords. Harder for hackers, because: (i) the passwords are long; (ii) the usernames are random (hard to guess), avoiding names like ADMINISTRATOR against which brute-force attacks can be run once someone guesses your IP; (iii) my laptop is kept free of malware (configured as explained in the last bullet), and in any case no VPS credentials are stored on the laptop, only on a memory stick that stays offline (in my pocket); (iv) last, but not least: even in the worst case where my laptop got malware (highly unlikely, as I do not browse low-quality sites or click sneaky attachments received in my email inbox), there are indicators I am sensitive to, like slow processing or weird behaviour, so I am confident I would notice in time and take appropriate action (stop using that laptop to access the virtual private server, disinfect it, etc.).
  • Pay attention: do not use that memory stick on public computers! Use it only on your personal computer, and only once you are sure that computer is safe (no malware or viruses on it – see the last bullet below on how to achieve that)
  • Last step: deactivate the default ADMINISTRATOR account on Windows Server. Otherwise, if the server is not configured properly, this account will be bombarded with brute-force attacks over Remote Desktop Protocol from untraceable sources, by hackers randomly guessing your IP (I searched the Event Viewer and saw those attempts). Deactivating the default admin account was the solution, together with creating another user account with admin rights and a random username (see the fourth bullet point above)
  • Needless to say, the laptop (or home desktop) you use to access your VPS needs an anti-virus and a firewall. I would strongly recommend a paid solution and avoiding free antivirus software like Avast, because, for example, data from Avast users was sold to advertisers, who can combine it with other data they have on your activities to track you in great detail (see this link). Run regular – at least weekly – full scans (of your laptop, not the VPS). Last but not least, use a normal user account (no admin rights) when browsing the web from the laptop. Even though it is tempting to use a full-power account on Windows (to install things without errors), I log on with a normal user account I created. Thus, on Windows 10 I use a non-admin user all the time; whenever I need to install something, the new application asks for the admin password, so no harm done: I keep control over browsing and can still install whatever I want. This setup prevents malware from a website I browse – which I may not recognise as malware – from installing itself in the background, because anything truly malicious will have to ask for admin credentials. Therefore, if something pops up requiring the admin password and I don’t know what it is, I deny it.
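By the way, generating such random usernames and long passwords by hand is error-prone; Python’s standard `secrets` module can do it for you. A small sketch (the character set and lengths are my own choices – adapt them to your policy):

```python
import secrets
import string

# Character pool: letters, digits and a few specials (my choice).
ALPHABET = string.ascii_letters + string.digits + "!&)%#"

def random_credential(length):
    # secrets (unlike the random module) is designed for
    # security-sensitive values such as passwords and tokens.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

username = random_credential(17)  # random username, hard to guess
password = random_credential(24)  # at least 16 chars, as recommended above
print(username, password)
```

Each run produces fresh values; save them to your offline memory stick as described above.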

That would be all regarding the VPS (no anti-virus or firewall is needed on the server as long as it is invisible to the world wide web). No web server (IIS, Apache, etc.) is needed either; on the contrary, I would advise uninstalling them if they are installed by default. This way your VPS stays invisible to hackers. But take care NEVER to browse the world wide web from the server. Why would you? You already have the laptop to browse from, and since RDP allows clipboard sharing, you can copy information from the laptop’s browser and paste it onto the server over RDP.

Install Python on Windows

Go to the server (the Windows Server on your VPS).

I have been using version 3.6.4 (and Windows Server 2012 on 64 bits) for more than a year.

Use the next link to download Python 3.6.4 (from the official python.org) – always install from official sites:

Now, about “pip”:

I would highly recommend installing this tool, which you will always need for your convenience: it lets you benefit from the open-source Python libraries that are abundant on the internet.

The next option is to be chosen when installing Python:

That would be all.

Now you have all you need to start scripting in python on Windows.

Libraries (free work of others, usually packaged as modules) can, depending on what you need, be downloaded and installed easily using pip (the optional feature above).

Usually, these libraries are installed from the command line (PowerShell is also a possibility, instead of the classic command line inherited from DOS).

Thus, to use the classic command line, type “cmd” in the Start menu and right-click “Run as Administrator”.

To use pip when installing python libraries available on github or elsewhere (where they specify pip as installing tool) type as follows from command line:

C:\WINDOWS\system32> pip install “open-source library name that you want” (or whatever you want*)

You will notice a process unfolding in your command line, such as the download of that library and additional components. It should finish with success: this is the confirmation that you have downloaded the library and may start using it with Python.

This “pip” tool and the command line are the features I use most, and they are convenient when I explore the various works in Python made by others (usually free and found in significant numbers on GitHub).

I hope you will enjoy testing Python on Windows and find using a virtual private server convenient (while being secure enough).
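One more convenience: after a pip install finishes, you can confirm from Python itself that the library is really available. A small sketch (shown with the built-in `csv` module as a stand-in for whatever library you installed):

```python
import importlib.util

def is_installed(module_name):
    # Returns True if Python can find the module on this machine.
    return importlib.util.find_spec(module_name) is not None

# "csv" ships with Python, so it is always found; replace it with
# the name of the library you just pip-installed.
print(is_installed("csv"))                  # True
print(is_installed("no_such_library_xyz"))  # False
```

If the check returns False after a seemingly successful install, scroll back through the pip output for errors.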

____________________________________________

*) Still, be cautious about “whatever you want”, i.e. open-source code you find on the web (not everything is checked by open-source communities thoroughly enough to give us the comfort that it is free from malicious bugs).

ChowNow platform partners with Instagram

The Los Angeles restaurant-ordering platform will add “Order Food” buttons and stickers, aiding local restaurants this way

Food pictures and videos are widely used on Instagram. Users will be able to order easily, because the buttons and stickers will link directly to ChowNow to complete the order flow.

It’s a great marketing tool for restaurants, said Chris Webb (ChowNow’s CEO and co-founder) and an easy way for them to inform their customers that they are open for business — even if they may not have open tables.

Nice idea, especially as restaurants were hit hard by this virus crisis.

Source: https://techcrunch.com/2020/04/15/instagram-partners-with-las-chownow-to-make-food-pics-and-stories-shoppable/

 

Fidelity analysts: Chinese companies to thrive post-COVID-19

China shut down travel; western countries did not shut down their economies during the spread of the virus

My opinion is that different countries will recover on different timeframes after COVID-19. It depends on when the pandemic started to spread in each country and, of course, on the extent of the spread.

Ironically, it seems that the first and most affected country might, in the end, fare better than others.

Sources:

  1. https://www.investmentweek.co.uk/news/4012491/fidelity-analysts-chinese-companies-thrive-post-pandemic
  2. https://www.livewiremarkets.com/wires/will-china-lead-the-recovery

 

VPNs and Work From Home: Security under scrutiny in times of COVID-19

VPNs secure communication between company servers and employees’ devices, but end-user devices are exposed if not secured well enough. Other compensating measures also need to be in place.

Courtesy of The Cyber Security Hub™ (TCSH), I would like to highlight a link TCSH pointed to, noting that Work From Home (WFH) using a VPN is not 100% secure, as other risks exist in WFH. I wrote this article for those who believe a VPN is all you need for secure WFH. I selectively extract from the mentioned article only what I believe is critical, so that non-expert readers can grasp the essence.

Conclusions:

  1. So, BYOD (Bring Your Own Device) is a high risk. At a minimum, it has to be approved by the company the employee works for. The device MUST be secured well enough against penetration, so I would strongly recommend: (i) remove admin rights from the device (to the extent possible) and (ii) anti-virus and firewall software on it is a must (although I have recently learnt that such ammunition is not enough against a skilled hacker).

  2. IT infrastructure generally supports – i.e. in normal times, not these times of pandemic crisis – around 30% of users working remotely. Generally. During COVID-19, if all users go remote, then “Houston, we have a problem!” I mean, no organisation has so far envisaged that such a huge majority of its staff would work from home. Therefore, buying additional equipment to support the increased demand for users working over VPN takes time (months). Configuring and integrating such new equipment into the existing infrastructure also requires time.

  3. I will end with this recommendation (an excerpt from the article) that I fully agree with:

So, stay safe not only from the COVID-19 virus, but also avoid getting viruses (in electronic form) or other malware from hackers.

For this purpose, employees need a secure device (as I mentioned above: remove admin rights and install anti-virus and firewall software) and instructions on how to counter phishing attacks (for which I hope you have already introduced regular simulations, as I recommended previously in this article on LinkedIn), so that when employees work from home, unsupervised and without a quick support line to the IT department, they will hopefully be able to apply that knowledge.

 

Ransomware mitigation with backups. It might work well for small or medium companies that cannot afford huge budgets for securing their systems

Provided that certain conditions are met

In a previous page I wrote about the balance that always has to be struck between costs and risks, including cyberattacks. I expressed the opinion that there is no point in spending too much on some fancy security tool covering a risk that, if it occurred, would cost the company far less than the tool itself.

Of course, this article is for those who are more concerned about restoring the functionality of their servers than about the sensitivity of their data. For example, SMEs acting as intermediaries in the wholesale trade of goods or in transportation are probably more concerned about restoring their data than about a data breach publicly exposing staff data (although data protection authorities have lately been imposing significant fines for data breaches – so a trade-off between penalties and the information security budget, whichever is less, might need to be assessed).

This is the case for small companies, and some medium-sized ones. Ultimately, deleting everything and starting from scratch – taking data from paper and entering it into the company’s system, be it an ERP or another system – might be less costly than investing a disproportionate budget in securing that system.

However, if you have good backups in place, you don’t need to start from scratch – only from the last backup. Provided that several pre-conditions regarding those backups are met.

Courtesy of Boardish – IT and Cyber That Speaks The Board’s Language – the twenty-second video below explains what happens in a ransomware attack (funny to watch – but when it hits in real life, it is not funny anymore).

Backups will mitigate ransomware attacks elegantly, provided that:

  • you have backups defined according to your risk appetite, and your organisation is prepared to delete all current live data from the production environment (in order to also wipe the malware from your systems, since you don’t have the decryption key and might not want to pay the ransom)
    • the backup frequency is defined according to how far back in time you are willing to re-enter the latest data into your systems (while staying within the risk parameters defined in your strategy), so that you end up back up to date with the latest customer transaction data. This procedure assumes taking data from paper (contracts, written customer applications, customer orders in written form). Restoring data from emails is a separate activity; in this article I presume your email server is hosted in the cloud (an on-premises email server without a cloud backup loses the opportunity to recover customer data from emails – with no backup at all, that data will not be recovered if the ransomware spreads to the email server as well)
    • the time needed to re-enter the data accumulated since the last backup is acceptable (at a minimum, you have defined in advance a task force available on request to do such data entry, and procedures for how to do it)
    • the time needed to reset your systems, or reinstall everything from the installation kits, is not too long and fits the pre-defined time frame
    • last, but not least: testing the backups regularly will avoid surprises, namely when everything is recovered from the backups but the systems do not work… or recovering from the backups turns out to be a messy process (so regularly exercising the recovery procedures is essential)
  • a second condition is to have backups decoupled from your live systems (if you use a real-time backup technology, you need a third backup done manually, decoupled from the live environment – on tapes or other independent systems that stay offline all the time, a safe deposit of data if you like). Yes, you got it: keeping the backup systems connected online to your live systems (so backups run automatically at predefined frequencies) has the downside that the backup systems might also be infected with the malware
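For the “test your backups regularly” condition, even a simple integrity check helps: record a checksum when the backup is made and verify it later. A minimal sketch using Python’s standard library (the file name and content are stand-ins):

```python
import hashlib

def file_sha256(path):
    # Hash the file in chunks so even large backup files fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Tiny demonstration with a stand-in "backup" file:
with open("demo_backup.bin", "wb") as f:
    f.write(b"customer transactions since last backup")

recorded = file_sha256("demo_backup.bin")          # stored at backup time
assert file_sha256("demo_backup.bin") == recorded  # verified at test time
```

A matching checksum only proves the file is intact, not that the restore procedure works – so keep rehearsing full restores as well.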

You know something? The above pre-conditions follow good practices and security standards; they are just applied to a greater or lesser extent in the IT departments of various enterprises. Real life differs from standards; I just hope you already have those actions implemented in real life. If not, and you are a small or medium-sized company that significantly supports its business operations with an ERP or other IT system but cannot afford a big budget for securing the company against cyberattacks like ransomware, you might need to consider implementing the recommendations I mentioned above.

In this context, I would say you will probably be free from the consequences of a potential ransomware attack that penetrates your organisation despite all the security controls in place.

In this happy-end scenario, you don’t have to pay the ransom.

However, this does not mean that security controls and protective actions against cyberattacks can be completely ignored: ransomware is only one type of attack, and there are plenty of others out there ready to catch you unprepared (if there is data at stake).

Nevertheless, since the time needed to recover the business is the most critical and vital criterion for any organisation, I would say that mitigation by backups as explained in this article (provided the conditions listed above are met) might work satisfactorily and allow you to get back in business quickly.

Note: another assumption I made in this post is that the decision whether to pay the ransom also takes the sensitivity of the data into consideration. When I mentioned cost and risk, I presumed that all the costs of accepting the risk (for example, fines from authorities such as data protection regulators for sensitive data leaking outside the company because securing the company’s operations was not enough, or reputational damage such as losing one or several important customers) have been quantified, and that on this basis the decision was taken not to pay the ransom.

API (Application Programming Interface), SOA (Service Oriented Architecture) and Microservices

What is meant by these terms?

From a definition of a service perspective:

In software development, a “service-centric” application means writing code that is exposed (typically over a network) via one of many interfaces.

These interfaces are the endpoints to business functionality and, regardless of the architectural pattern (SOA, microservices), services tend to share the following attributes:

  • they are self-contained
  • they are “black boxes” to the users of the service
  • they model a set of activities with specific inputs and outputs
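As a toy illustration of these attributes, here is a hypothetical Python service (the names `PricingService`, `QuoteRequest` and the price data are invented): it is self-contained, exposes only a request/response contract, and models one activity with specific inputs and outputs.

```python
# Hypothetical sketch of a service as a "black box": callers see only
# the request/response contract, never the internals.
from dataclasses import dataclass

@dataclass
class QuoteRequest:      # specific input
    product_id: str
    quantity: int

@dataclass
class QuoteResponse:     # specific output
    total: float

class PricingService:
    """Self-contained: owns its own price data; users of the service
    only know the QuoteRequest -> QuoteResponse contract."""
    _prices = {"widget": 2.5}   # invented sample data

    def quote(self, req: QuoteRequest) -> QuoteResponse:
        return QuoteResponse(total=self._prices[req.product_id] * req.quantity)

print(PricingService().quote(QuoteRequest("widget", 4)).total)  # 10.0
```

Whether this runs in-process or behind a network endpoint, the contract (inputs and outputs) is what the caller depends on.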

Why SOA?

Reduced complexity: when many records are needed to serve a particular business or data requirement, making multiple requests may make processes or use cases more complex than they need to be.

Reduced risk: classical (monolithic) development to serve data requirements might expose too much of the underlying data model.

Bottom line: SOA packages functionality into endpoints, typically accessible at the enterprise level, that are: easy for the business to access, reusable, and usable as building blocks for future applications.

APIs and relation to SOA

SOA sits more at the B2B business-solutions layer: when a business needs to pass data back and forth between different types of systems, APIs are built, and business rules are built around them.

SOA is an architectural methodology. It is a way of separating responsibility, from a business-oriented point of view, into independent services which communicate through a common API (often, but not necessarily, by publishing events to a bus).

In recent years, a culture shift has taken place in businesses and organisations, especially in the public sector. Thus, there have been recent drivers to open up access to data services, often through public APIs which are available online.

API definition: a source code-based specification intended to be used as an interface by software components to communicate with each other.

Differences:

API = any way of communicating exposed by a software component.

SOA = a set of enterprise architectural design principles to solve scalability issues by splitting responsibility into services.
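A small Python sketch may make the distinction concrete (the VAT function and JSON field names are invented for illustration): the same capability is exposed through two APIs, a plain in-process function call and a JSON message contract of the kind a web service endpoint would accept.

```python
# Sketch: one capability, two APIs. Function and field names are
# invented for this example.
import json

def add_vat(net: float, rate: float = 0.19) -> float:
    """In-process API: a plain function signature."""
    return round(net * (1 + rate), 2)

def handle_request(body: str) -> str:
    """Wire API: the same capability behind a JSON request/response
    contract, as a web service endpoint would expose it."""
    req = json.loads(body)
    return json.dumps({"gross": add_vat(req["net"], req.get("rate", 0.19))})

print(handle_request('{"net": 100}'))   # prints {"gross": 119.0}
```

SOA is the set of design principles that decides which capabilities become services like `handle_request`; the API is simply whichever communication surface (function, JSON-over-HTTP, events on a bus) the component exposes.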

Microservices (architecture) – MSA

At a very high level, microservices are an alternative way of architecting applications that offers better decoupling of components within an application boundary.

Maybe if microservices were rebranded as “microcomponents” they would be easier to understand.

In an application that implements microservices, the application boundaries or interfaces are no different from those of a traditional monolithic application; the key difference is what happens behind the application boundary.

Behind the service boundary, collections of independent microservices run in their own processes, each with its own individual set of APIs or web service endpoints that are exposed through the application boundary.

For complete decoupling, isolation, and independence, each microservice can use its own data model, aligned with the domain objects being passed through it, which helps improve stability and maintenance.

Microservice architecture is focused on multiple, independent, self-contained application-level services that are lightweight and have their own unique data model.
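As a hedged illustration, the following Python sketch (service and field names are invented) shows two “microcomponents” in a single process, each owning its own data model and communicating only through plain messages; in a real deployment each would run in its own process behind its own endpoint.

```python
# Invented example: two services, each with its own private data model,
# exchanging only plain message dictionaries.
class OrderService:
    """Owns only order data; nothing else sees its internal storage."""
    def __init__(self):
        self._orders = {}                    # order_id -> product_id
    def place(self, order_id: str, product_id: str) -> dict:
        self._orders[order_id] = product_id
        return {"order_id": order_id, "product_id": product_id}  # message

class ShippingService:
    """Owns only shipment data; consumes messages, not OrderService
    internals, so either side can change its data model freely."""
    def __init__(self):
        self._shipments = []
    def ship(self, order_event: dict) -> int:
        self._shipments.append(order_event["order_id"])
        return len(self._shipments)          # shipments handled so far

orders, shipping = OrderService(), ShippingService()
event = orders.place("o-1", "widget")
print(shipping.ship(event))   # prints 1
```

The point of the sketch is the seam: because `ShippingService` only ever sees the message, `OrderService` could swap its storage (dict, SQL, document store) without touching the other service.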

To help CIOs / CTOs make informed decisions, the sources below compare the main attributes of each approach.

Sources:

https://www.devteam.space/blog/microservices-vs-soa-and-api-comparison/

https://stackoverflow.com/questions/9496271/what-is-the-difference-between-an-api-and-soa

My research above would not have been complete without trying to find some API Management providers in Romania.

A reliable source, I believe, is Gartner (Magic Quadrant). Gartner report: https://www.gartner.com/en/documents/3970166/magic-quadrant-for-full-life-cycle-api-management (excerpt below).

I found 5 leaders (who have both the ability to execute and the completeness of vision) and took all of them (in the order in which they appear in the Gartner document) into my research, to see whether any Romanian customer can get API services in Romania from them. I think it is important to have a local distributor or partner (for reasons relating to easier troubleshooting, local / on-premises customisation, etc.).

Google Apigee: no partner found in Romania for Apigee (see this – no “Romania” among the results of that search – and this – a provider also taken from the search results, but one that has nothing in Romania, although it is present in other countries in the region around Romania)

Software AG: no partner found in Romania (they coordinate from Poland) – see this

Mulesoft: it seems they have a partner, but I did not see a specific focus on API (at least from the main page displaying this partner’s services I could not retrieve details). Thus, see this – pointing to Softvision – but on the Mulesoft site the “Partner’s site” link points to this website, where one can find no link or details about API services.

IBM: yes, they have IBM Romania and the specific page for API Management is https://www.ibm.com/ro-en/cloud/api-connect

Axway: yes, they have a Romanian partner (EasyDO) and the specific page is https://www.easydo247.com/products-axway

Final conclusions: I saw that banks in Romania have already contracted some API Management services (for example, the no. 2 bank in Romania, BCR, has something from Axway, as per the link mentioned in the previous paragraph – see at the bottom, where I also found BNP Paribas as their client).

I could imagine that IBM has also offered its API Connect product to Romanian companies, but I did not find specific Romanian clients using API Connect on their website. Maybe they keep that list confidential.

I know that for banks and fintechs the PSD2 directive (in force since December 2019) requires banks to expose their data through APIs to fintechs, as consumers of that data.
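Purely as an illustration of the idea, the sketch below shows the general shape of an account-information response such a bank API might return; the field names and values are invented for this example, not taken from any PSD2 technical standard.

```python
# Illustrative only: the rough shape of a PSD2-style account-information
# response. Field names and values are invented for this sketch.
import json

account_info = {
    "iban": "RO49AAAA1B31007593840000",   # example Romanian IBAN format
    "currency": "RON",
    "balance": {"amount": "1250.00", "type": "closingBooked"},
}

# A bank would serialise this and serve it over an authenticated HTTPS
# endpoint to a licensed fintech (the data consumer).
body = json.dumps(account_info)
print(body)
```

The substance of PSD2 open banking is exactly this pattern: the bank owns the data, and regulated third parties consume it through a published API contract instead of screen-scraping.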

Anyway, if you run a Romanian company that has a complex IT architecture and you want digitisation in 2020, plus the flexibility to create new products that satisfy customer needs, you might want to consider gradually building new products on a services-based architecture, i.e. SOA or microservices. Therefore, implementing some APIs (if you have not done so already) and managing them is the way to go.

In pursuing the above-mentioned endeavour, if I were required to express an opinion, I would recommend going with one of the leaders from the Gartner Magic Quadrant.