April 24, 2023

Hermix - public sector sales analytics, B2G software

Hermix is the first analytics platform for public sector sales. 

We help companies understand & win public sector projects, with tender monitoring and market intelligence.

How did we start, more than a year ago? We drew on our own direct experience of more than 20 years selling to the public sector; mostly tenders for EU institutions and the European Commission, but also for national authorities. The Business-to-Government (B2G) sector is great, and it worked very well for us: a stable market, lots of money, and lots of information - if you know where to look and how to read it.

We asked ourselves: what worked for us, how, and why? We noticed that data analysis and market intelligence are key success factors, yet they remain completely under-exploited. Everything is done manually: tender monitoring, market research, forms, papers, CVs, technical proposals, prices.

We also made an astounding observation: public sector sales don't use big data analytics. Information is managed manually, in emails and Excel files.

But in B2C/B2B, retail and consumer, data is king! Marketers rely heavily on billions of data points and on hundreds of tools: Google Analytics, Facebook Analytics, LinkedIn Sales Navigator, Plausible, Indicative, and so on.

So we started to automate B2G, providing services such as:

* Tender monitoring, smart market watch, notifications.

* Big data analytics, deep market intelligence, actionable insights: where is the money, who buys & sells, what, where, how.

We've gone a long way since we started out in 2022.

We gathered a great team. We developed the technical platform. We launched Hermix.com. We tested, validated and evaluated the concept and tools.

We had ~350 meetings. We enrolled 217 users.

We signed quite a few contracts with serious, solid customers. We have reliable partners, such as Amazon, Tremend, Zetta, Westpole, Brayton, Hubspot. We received in-kind contributions and support.

We won the EU Datathon award from the European Commission, and a prize of 25k. We received the Deloitte Impact Star. We signed a 250k regional R&D grant.

We listen to our customers and partners, every week. We get their feedback and requirements. We aim to understand their real needs. We focus on ergonomics, usability and on key user scenarios: What are the daily pains of sales managers and commercial directors, of bid managers and presales architects?

And then we design crucial pain-killers for these needs.

We improve our data algorithms continuously. We analyzed millions of historical government contracts, tenders and payments, hundreds of thousands of authorities and contractors. We import and clean new data daily.

We release 2-3 new major features per month. We use the most modern & fancy technology out there, but we remain function-driven.

We make sense of public sector sales.

Contact me for your test drive.

April 07, 2023

Norming and standardization of language and tools

The reason for standardization is efficiency.

We norm everything: forms, tools, habits, customs, religions, law, houses, screws, pipes, measures, science, math.

All groups and societies go through the famous cycle: forming, storming, norming, performing.

Norming reduces innovation, but increases efficiency.

The reason for norming language and speech is efficiency in communication.

All societies norm communication. Strong centralized states enforce strong centralized rules faster, sometimes violently. They enforce uniform taxation, declarations, addresses, names, jobs, qualifications, language, writing. They kill dialects and local freedoms.

Very centralized organizations have huge bureaucratic overheads. Sometimes, the bureaucratic overheads are controlled by political pressure, elections or shareholders. 

Sometimes bureaucracies are destroyed violently, through bankruptcy, revolution or war.

Then, society resets. And the cycle starts over: forming, storming, norming and performing.

March 31, 2023

Post-rationalization of decisions

We often make decisions subconsciously, then try to rationalize them by finding or inventing logical arguments.

This is what Daniel Kahneman called "thinking fast".

The first example comes from marketing, i.e. the post-rationalization of the buying decision. Studies show that people make decisions subconsciously, after which they try to rationalize them with logical arguments such as "it's cheaper", or "it's expensive, but the quality is higher", or "it's worth taking care of myself", or "it's the best quality-price ratio", or the infamous "I know I don't need this, but it's on sale".

A similar process is described by Radu Umbres for ethical norms. Apparently, we don't have moral principles from which we derive ethical norms; instead, we have ethical norms, and then we create principles to support those norms. For example: because of the principle that women should have the right to dispose of their own body, we support abortion, but we don't apply the same principle to support surrogate motherhood or prostitution. Similarly, we consider it morally acceptable that men donate sperm, but not that women donate ova.

The third example comes from business.

Although in this area we try to use objective decision-making tools (decision matrix, decision tree, balanced scorecard, COCOMO, risk analysis, etc.), in practice we adjust these tools to fit our intuition.

20 years ago, when I was a young manager, I built and used a tool for calculating staff salaries, based on a set of variables such as experience, skills, performance, education, foreign languages, etc. At a certain moment, I went to my boss and told him I had a problem: the tool suggested increasing the salary of an incompetent colleague. The manager said: a good instrument should help you make the right decisions. If it doesn't - that is, if the instrument's results don't match your expected results - then you need to recalibrate the instrument. Accordingly, I adapted the tool by introducing a new "override" variable to get the desired result.
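The mechanics of such a tool are simple. Here is a minimal Python sketch - with invented variable names, weights and base salary, since the original is long gone - showing how an "override" factor lets the operator pull the output toward the expected result:

```python
# Hypothetical reconstruction of the salary tool: a weighted sum of
# scored variables, plus the "override" factor added later to force
# the output to match the manager's expectation. All numbers invented.

WEIGHTS = {
    "experience": 0.30,
    "skills": 0.25,
    "performance": 0.25,
    "education": 0.10,
    "languages": 0.10,
}

BASE_SALARY = 1000.0  # arbitrary base unit

def suggested_salary(scores, override=1.0):
    """Scores are on a 0-10 scale; override rescales the final result.

    override=1.0 leaves the tool's output untouched; values below 1.0
    pull the suggestion down when it contradicts the boss's judgment.
    """
    weighted = sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)
    return BASE_SALARY * (1 + weighted / 10) * override

# The "incompetent colleague" scenario: good scores on paper,
# poor performance in reality.
scores = {"experience": 8, "skills": 7, "performance": 3,
          "education": 9, "languages": 8}
print(suggested_salary(scores))                 # the raw suggestion
print(suggested_salary(scores, override=0.8))   # the "recalibrated" one
```

The override defaults to a neutral 1.0, so the instrument behaves "objectively" right up until someone decides the objective answer is wrong.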

The same happened during my PhD, when I discovered that an instrument is "correct" if its results are useful. Utility is the best measure of the scientific validity of a tool. In fact, my research consisted of designing and calibrating a set of useful tools for the management of complexity.

The same subjectivity affects all tools, including those for deciding investments or acquisitions. I recently heard people talk about "gut feeling" in investment decisions. The "gut" factor is in fact embedded in the whole decision process. Business tools are always recalibrated to produce desired results. 

Even when we over-rationalize the decision process, when we design tools that are highly objective, when we try to isolate personal bias from decisions - we will never eliminate subjectivity completely.

Subconscious decisions work, a lot of the time.

Calibrating instruments to match desired results is not a bad thing, if it works. But it is important to be aware of how this process works, even when it works. And the calibration process should be as controlled as possible.

March 30, 2023

The metaverse is dead

I tested Meta's metaverse 2 years ago.

There were no business use cases or applications. No serious videoconferencing solutions, no integration with major business applications or tools.

No serious entertainment applications. No 3D movies. Very few and primitive games.

(Photo: NightCafe)

It's time to admit the metaverse is dead. The Metaverse Is Quickly Turning Into the Meh-taverse. Microsoft and Disney gave up. Facebook, after renaming itself Meta to ride the metaverse wave and making no progress in two years, simply stopped talking about it and started talking about AI instead. This is understandable, since their stock fell from $370 in Oct. 2021, when they announced Meta, to $115 in Dec. 2022.

The metaverse was promoted too early. While it promised to solve very serious problems, the technology isn't there yet, and there is still no content or ecosystem.

I would love to watch 3D movies in a 3D world - but there are no 3D movies available: not on YouTube, not on Netflix, nowhere.

I would love to have a 3D collaboration space with my colleagues, but the experience has to be better than a traditional videoconference, Office, Slack, Teams or Meet. Right now it is worse: the video quality is low, the avatars are a joke compared to webcams, and there are no collaboration or productivity tools. There is also no integration with traditional videoconferencing or collaboration tools: no simple way to access documents, to send emails, to connect to MS Teams, Google Meet or Zoom, or even to use a web browser. You cannot share passwords. Nothing of what works so seamlessly on a normal computer or smartphone is available, and there is no integration with computers or smartphones.

Yes, I would love to have virtual learning spaces and 3D eLearning simulations, but there are none available.

I would love to visit a 3D museum or castle, but there is no good museum available in the metaverse. E.g. the Anne Frank museum consists of black-and-white photos. Why would anyone go to a 3D metaverse museum to see old 2D black-and-white photos?

I would love to play a Call of Duty-style game in 3D, but there are almost no 3D games available. I found only one good FPS - good, not great - and I couldn't even finish it.

Also, the input devices are poor, their sensitivity is poor, and the applications are somewhat buggy. There is no way to use a keyboard or even to swipe. Pointers often malfunction, and the sensors frequently get desynchronized.

The metaverse will need a few more years to get serious, and it will probably come back with something real next time. The same happened with AI: there were quite a few decades between the first AI hype and the current revolution.

Brief introduction to ethical AI and AI ethics

Every few years there is a new scare about new technology. It happens periodically, targeting new developments such as robots, self-driving cars, vaccines, or genetic manipulation.

Today people are puzzled by new AI technology such as ChatGPT. And society reacts with proposals to ban the new technology from certain environments, such as schools, or even to ban it altogether until we understand it better. Of course, some of this is provoked by competition: e.g. Google's DeepMind, the guys that developed AlphaGo, just signed a petition to ban the development of large language models such as GPT-4, while starting new internal research projects to develop large language models themselves.

So we start talking about ethical implications, and legal responsibility related to the new technology. 

But, the good news: ethical AI is not a new concept. Here is a quick introduction.

(photo by Fotor)

Theoretical research

There is already theoretical research in ethical AI, AI ethics, and artificial awareness. We are not talking about the responsibility of machines, of course (there is no machine liability, since there is no AI awareness, intentionality, or free will yet). But we talk about ethical implications, where the liability lies, and how we can categorize and analyze those implications.

In fact, we can categorize the legal and ethical implications of AI even when there is no machine responsibility. E.g., (Moor, 2006) classifies AI ethics into 4 levels of artificial moral agents:

  • AMA-1. Built-in ethical issues. Any technology includes some underlying ethical decisions, related to how it is used or designed.
  • AMA-2. Machines that incorporate pre-programmed ethical decisions. For example, prohibiting children from accessing the internet.
  • AMA-3. Machines that make ethical decisions based on automated algorithms, such as self-driving cars.
  • AMA-4. Intentionality, moral principles, self-consciousness.

There is also theoretical research into artificial awareness. An interesting point, as Padin Fazelian said, is that everything comes down to the basic unanswered question: "what is human consciousness?" Machines could even tell us something new about human awareness itself.

Since ethical decisions are part of any technology, Katleen Gabriels proposes, in "Conscientious AI. Machines Learning Morals", a framework where ethical implications are analyzed even before the design of new technology.

Besides these theoretical implications, there are also practical issues such as explainability and interpretability. AI is a complex technology, and we are not always fully able to explain how it works. While this behavior is sometimes seen as miraculous, it nevertheless poses practical problems, such as difficulty in debugging malfunctioning products.
Legal implications and frameworks

There are UN recommendations already related to the Ethical Use of Artificial Intelligence. 

There was also an international panel and debate. 

According to UNESCO, "In November 2021, the 193 Member States at UNESCO’s General Conference adopted the Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject. It will not only protect but also promote human rights and human dignity, and will be an ethical guiding compass and a global normative bedrock allowing to build strong respect for the rule of law in the digital world."

There are also studies and debates organized by the European Commission, ever since 2019. These produce results and deliverables such as Ethics Guidelines for Trustworthy AI. The proposal for regulation made by EC is currently with the European Parliament: "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts - COM/2021/206 final".

The Commission said in 2021: "The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

  • a European legal framework for AI to address fundamental rights and safety risks specific to the AI systems;

  • a civil liability framework - adapting liability rules to the digital age and AI;

  • a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive).

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard".

In conclusion, we need to have a discussion on ethical AI and the ethical implications of AI. This discussion is not new, and it is not conclusive. But banning technology is often impractical and sometimes impossible; when a technology becomes ubiquitous, banning it might even be considered absurd. Such measures should not be taken based on commercial interests, nor on irrational social scares.
