
Introduction to the Large Knowledge Model (LKM) Process

Each phase of the Large Knowledge Model (LKM) Process posed a significant technological challenge in its time, from the MAchine Readable Cataloging (MARC) format developed by the Library of Congress in the 1960s to the Kurzweil Reading Machine (OCR scanner) developed in the 1970s. Other hardware and software products created from the 1980s through the 2020s have evolved to the point where AI has produced the Large Language Model of today.

While American Data Processing, Inc. had no direct part in developing these technologies, we have reported on them and used them widely and wisely to develop many print, CD-ROM, and web-based products. While it may seem to some that technology moved quickly, it did not! Each achievement required dedicated teams of people working for many years to produce a first product, and many more people worked to reach the next level of significant improvement and wide availability. Most Artificial Intelligence (AI) applications have always seemed to be “5 years away.” AI has endured multiple “AI winters,” only to crawl out of the cold and try again each time with new teams, technologies, and products, to the point where organizations and people are now trying to apply it everywhere.

American Data Processing, Inc. is dedicated to delivering the best of Human Intelligence through the use of AI, LLMs, and other related resources. Currently we will not be generating our own content, only abstracting and synthesizing the written content of over 100,000 authors, scholars, and experts, with clear bibliographic citations. Strict editorial controls will be enforced: no one will be able to add, delete, or change content without submitting their information in the form of an academic or scholarly book that has been vetted through a publisher's academic editorial process and accepted and cataloged by the Library of Congress.*

In the future, the Large Knowledge Model will generate content that will be marked as Predictive Knowledge (PK). The Predictive Knowledge (PK) content will be published in accordance with strict editorial guidelines to be determined by an editorial oversight board and open to peer review prior to inclusion in the Large Knowledge Model.

The LKM Process described here is the result of teams of people working over decades, sometimes without a computer, with only pen and paper, or a typewriter, and their thoughts to guide them. Many of them are no longer living, but their work goes on, and their thoughts and dreams will never die.

Large Knowledge Model Phases Explained:

Large Knowledge Model – Phases 1, 2, and 3 are well established, and we have used them widely and wisely to develop many print, CD-ROM, and web-based products.

Large Knowledge Model – Phases 4, 5, and 6 begin to explore the present challenges of AI and LLMs and how to realize their potential in relation to Human Intelligence.

LKM Process – Phase 1 – Capture Quality Bibliographic Data

The MARC 21 bibliographic data format is available from the Library of Congress Bibliographic Access Division. MARC 21 records ensure consistency and accuracy in the use of controlled vocabulary and headings (authority control), which is essential for effective searching and retrieval of bibliographic data.
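As a rough illustration of how MARC 21 authority control works, the sketch below represents a record as tagged fields using real MARC 21 tag numbers (100 = main-entry personal name, 245 = title statement, 650 = topical subject heading); the record contents and helper function are illustrative assumptions, not part of the actual LKM pipeline.

```python
# Minimal sketch of a MARC 21 bibliographic record. The tag numbers are
# real MARC 21 tags; the record data and helper name are hypothetical.

MARC_TAGS = {
    "100": "author (main entry, personal name)",
    "245": "title statement",
    "650": "subject added entry (topical term)",
}

def heading(record: dict, tag: str) -> str:
    """Return the controlled heading stored under a MARC tag, or ''."""
    return record.get(tag, "")

record = {
    "100": "Twain, Mark, 1835-1910.",
    "245": "Adventures of Huckleberry Finn /",
    "650": "Mississippi River -- Fiction.",
}

# Controlled headings like the 100 and 650 fields are what authority
# control keeps consistent across the whole catalog.
for tag in sorted(record):
    print(f"{tag} ({MARC_TAGS[tag]}): {heading(record, tag)}")
```

Because every record stores the same heading string for the same person or subject, searches retrieve all of an author's works under one form of the name.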

LKM Process – Phase 2 – Select, classify, and prioritize the best 250,000 books

Selecting the best 250,000 books by the most knowledgeable 100,000 authors assures the highest-quality content for readers and researchers. By focusing on books written by knowledgeable authors and selecting books that are well researched, accurate, and influential in their field, a curated selection can be created that meets the criteria for quality. Such a selection would provide readers with a high-quality resource they can rely on for accurate and influential knowledge.
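One way to operationalize the selection criteria named above (research quality, accuracy, and field impact) is a weighted ranking. The weights, score fields, and function below are illustrative assumptions for the sake of a sketch, not the actual selection method.

```python
# Hedged sketch: prioritize candidate books by the criteria named in the
# text. Weights and per-book scores (each 0-1) are illustrative only.

def priority(book: dict, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted score over research quality, accuracy, and impact."""
    w_research, w_accuracy, w_impact = weights
    return (w_research * book["research"]
            + w_accuracy * book["accuracy"]
            + w_impact * book["impact"])

candidates = [
    {"title": "Book A", "research": 0.9, "accuracy": 0.95, "impact": 0.8},
    {"title": "Book B", "research": 0.6, "accuracy": 0.7, "impact": 0.9},
]

# Rank all candidates, then keep the top N (the process targets 250,000).
ranked = sorted(candidates, key=priority, reverse=True)
print([b["title"] for b in ranked])
```

In practice such scores would come from human editorial review rather than a formula, but ranking makes the prioritization step explicit and auditable.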

LKM Process – Phase 3 – Scan and Digitize the Contents of the 250,000 Best Books in the Library of Congress*

Digitizing books is necessary to control the content for Large Language Model processing, analysis, searchability, consistency, annotation, and sharing. These benefits make it easier to analyze and interpret the content of books, and they facilitate both the editorial process of training the Large Language Model and collaboration among the editorial team. The result is a digital library of the Best Books in the Library of Congress* (BBLC) Digital Collections.

LKM Process – Phase 4 – Large Language Model (LLM) to Large Knowledge Model

An LLM program will use the Best Books in the Library of Congress* (BBLC) Digital Collections to build a Large Knowledge Model unmatched in its depth and breadth. By analyzing the text using NLP techniques and building the Large Knowledge Model, the program provides insights, generates innovative ideas, and answers questions on a wide range of topics. The BBLC Digital Collections contain a vast amount of knowledge, and by harnessing the power of a Large Language Model, we can unlock its full potential.
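A common way to prepare a digitized collection for LLM processing while preserving citations is to split each book into chapter-level chunks that carry their bibliographic metadata. The data layout and names below are an illustrative assumption, not the LKM's actual internal format.

```python
# Hedged sketch: each chapter of a digitized book becomes a text chunk
# tagged with bibliographic metadata, so any generated answer can always
# cite its source book and chapter. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    book_title: str
    author: str
    chapter: str

def chunk_book(book: dict) -> list:
    """Split a digitized book into per-chapter, citable chunks."""
    return [
        Chunk(text=body, book_title=book["title"],
              author=book["author"], chapter=name)
        for name, body in book["chapters"].items()
    ]

book = {
    "title": "Example Treatise",
    "author": "Doe, Jane",
    "chapters": {"Ch. 1": "Text of chapter one.",
                 "Ch. 2": "Text of chapter two."},
}

chunks = chunk_book(book)
print(len(chunks))  # one citable chunk per chapter
```

Keeping the citation attached to every chunk is what lets later phases return answers with bibliographic references rather than unattributed text.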


LKM Process – Phase 5 – Basic Search & Chat

Our search, question, and chat features offer a unique hierarchical structure that allows for basic answers with bibliographic citations to books and their specific chapters.
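The basic search described above can be sketched as a keyword match over chapter-level text that returns each hit together with its book-and-chapter citation. The index layout and example entries are assumptions for illustration only.

```python
# Hedged sketch of citation-bearing basic search: match a query against
# chapter text and return snippets paired with bibliographic citations.
# The index structure and sample data are hypothetical.

index = [
    {"book": "Example Treatise", "chapter": "Ch. 3",
     "text": "The Mississippi River shaped regional trade."},
    {"book": "Another Study", "chapter": "Ch. 1",
     "text": "Rail networks displaced river commerce."},
]

def search(query: str, index: list) -> list:
    """Return (snippet, citation) pairs for chapters matching the query."""
    q = query.lower()
    return [
        (entry["text"], f'{entry["book"]}, {entry["chapter"]}')
        for entry in index
        if q in entry["text"].lower()
    ]

for snippet, citation in search("river", index):
    print(f"{snippet}  [{citation}]")
```

A production system would use full-text or semantic retrieval rather than substring matching, but the essential point is the same: every answer carries a citation back to a specific book and chapter.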

LKM Process – Phase 6 – Advanced Search & Learning

Our advanced search & learning option offers a more structured approach to learning, with bibliographic citations and guidance. Queries and answers are cited by book summaries, chapter summaries, chapter Knowledge Abstracts, and Knowledge Facts.

*Disclaimer: The Large Knowledge Model has no affiliation with the Library of Congress; we use only the public-domain Library of Congress Classification system and the MARC (MAchine Readable Cataloging) format available from the Bibliographic Access Division.

 

© American Data Processing, Inc.

Publisher of data resources for knowledge since 1960

