I love asking questions, and follow-up questions. Clarifications, nuances, hypotheticals, and so on. I've met very few people who can address all my questions and link them together into a larger framework of understanding. Now that the age of large language models is upon us, and much of the research and development is happening in open-source tooling, we have some pretty amazing technologies to play with.
The project I have in mind is to build a system that would ingest all the research and information I throw into it. It would be able to manage and organize its memory in a way that lets it make associations similar to how humans can "connect the dots" between distant pieces of information. It would be given "absolute knowns" as starting points for all its logical deductions (such as the fact that Jesus Christ is God incarnate) and would from there generate webs of conclusions and associations. It would be able to figure out things like the falsehood of papism and ecumenism. It would be able to make inferences and "educated guesses" (which it would clearly mark) and then test those theories against known facts; kind of like solving a maze or a sudoku puzzle: you take a guess, see where it takes you, hit a contradiction, and backtrack.
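The guess-and-backtrack idea can be sketched in a few lines. This is a hypothetical toy, not the project's actual design: facts are (subject, predicate, object) triples, rules derive new triples, and a tentative guess is kept only if no contradiction follows from it.

```python
def consistent(facts):
    """A fact set is contradictory if it asserts both 'X is Y' and 'X is_not Y'."""
    return not any((s, "is", o) in facts and (s, "is_not", o) in facts
                   for (s, _, o) in facts)

def infer(facts, rules):
    """Apply every rule repeatedly until no new facts appear (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(facts):
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

def try_guess(facts, rules, guess):
    """Tentatively add a guess; return the enlarged fact set, or None
    if the guess leads to a contradiction (i.e., backtrack)."""
    expanded = infer(facts | {guess}, rules)
    return expanded if consistent(expanded) else None

# One illustrative rule: if X is Y, then X inherits what is said of Y.
def transitive(facts):
    out = set()
    for (x, p1, y) in facts:
        if p1 != "is":
            continue
        for (y2, p2, z) in facts:
            if y2 == y:
                out.add((x, p2, z))
    return out

base = {("A", "is", "B"), ("B", "is_not", "C")}
# Guessing "A is C" derives "A is_not C" via B, a contradiction -> None.
# Guessing "A is D" derives nothing contradictory -> the guess is kept.
```

The same loop scales conceptually to a real knowledge base: the marked "educated guesses" are exactly the tentative facts that survive the consistency check.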
The end result would be a graphical, game-like app with a web-like map of ideas and connections. Similar to how Wikipedia has links all over the place to referenced pages, except it would remain faithful to the truth it has been taught. From there, you'd be able to talk to it, and it would give you references and show you all the primary source documents to answer your question. For example, it would show you all the canons that obligate an Orthodox Christian to break communion with heretics and give you countless examples from the lives of the saints throughout history. It would be a treasure chest of knowledge.
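The map itself could be as simple as an adjacency structure where every node carries both its links and the primary sources behind it, so any answer can always show its references. A minimal sketch (the topic names and source entries here are illustrative placeholders, not the system's real data):

```python
# Each node keeps its outgoing links and the primary sources backing it.
idea_map = {
    "breaking communion": {
        "links": ["canons", "lives of the saints"],
        "sources": ["Apostolic Canon 45",
                    "Canon 15 of the First-Second Council"],
    },
    "canons": {"links": [], "sources": []},
    "lives of the saints": {"links": [], "sources": []},
}

def sources_for(topic, depth=1):
    """Collect sources for a topic and everything it links to, up to a depth."""
    seen, out = set(), []
    stack = [(topic, 0)]
    while stack:
        node, d = stack.pop()
        if node in seen or node not in idea_map:
            continue
        seen.add(node)
        out.extend(idea_map[node]["sources"])
        if d < depth:
            stack.extend((n, d + 1) for n in idea_map[node]["links"])
    return out
```

A graph database or even a flat JSON file could hold this structure; the point is that every edge in the visual map corresponds to a citable connection.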
Perhaps the Lord will bless this undertaking. Right now it's an idea. I already run LLMs locally via LM Studio and know about Mem0, which can be run locally as well. The next steps would be to decide on a good, uncensored base model to fine-tune and put all these backend pieces together.
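For the plumbing, LM Studio exposes an OpenAI-compatible HTTP server (by default at http://localhost:1234/v1), so the backend can talk to the local model with nothing but the standard library. A rough sketch of building such a request; the port and model name are assumptions, and you'd substitute whatever LM Studio actually reports:

```python
import json
import urllib.request

def build_chat_request(question, model="local-model",
                       base_url="http://localhost:1234/v1"):
    """Build a chat-completions request for LM Studio's local server.
    The system prompt pins answers to the ingested sources."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer only from the provided sources; cite them."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # stay close to the ingested material
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With LM Studio's server running, sending it would look like:
# with urllib.request.urlopen(build_chat_request("...")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Mem0 would sit in front of this, retrieving the relevant memories and splicing them into the messages before the request goes out.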