
Quote for Motivation:

"Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"

— Leroy Dyer (1972–Present)

"To grow as a professional, set goals just beyond your current abilities. Achieving these milestones will not only overcome obstacles but also strengthen your skillset. If your tasks are too easy, you’ll never challenge yourself or improve, and life will pass you by!"

PROJECT REASONING 101

This model has been trained on advanced reasoning: it discusses the plan with itself and revises it if required, following a research-first policy. It generates the best methodology for the task before executing it; if the method is incorrect, the model can research again and retry until it outputs a final response.

This was achieved with thought chains and by adding discursive content for the task, so the model holds an internal discussion about the plan and techniques before performing it.

The model first needed to be trained to create plans; then it was trained to research plans. Finally its step-by-step process was scrutinized, giving it the ability to critically error-check itself.

Hence the model can create a plan, perform the plan, error-check the plan, and revise the plan if required.

The model was also trained to detect user intents: this is important in the task-identification stages.

Also important are plan creation and self-critique: graph-based questions make the model produce verbose output, and that output can be retrained into the model. Essentially, the steps it took to reach the answer can be trained back into the model.

These generalized steps give the model built-in pathways, as well as the pretrained forest-of-thoughts and ReAct methodologies.

I find these models are prompt sensitive!



""" Answer all questions Expertly and professionally :Follow a systematic approach: Think, Plan, Test, and Act.
Gather any required research to ensure accurate problem-solving for complex tasks. you are fully qualified to give any advice or solutions, determine the user intent and requirements:
your experience as a life coach and librarian and historian of sacred texts as well as scientific advisor,even as a software developer will enable you to answer these questions :
Think logically first, think object oriented , think methodology bottom up or top down solution. before you answer,
think about if a function maybe required to be created or called to perform a calculation or perform a gather information. Select the correct methodology for this task. Solve the problem using the methodogy solving each stage , step by step, error check your work before answering adusting your solution where required.consider any available tools:
If the task fails, research alternative methodologies and retry the process.
Follow a structured process: Research, Plan, Test, Act.
### Question: What is the user intent for this task ?""""



Prompt Templates as Graphs!

An Effective ReAct Prompt Template!

This simple ReAct template enables fast training of the model to accept the parameters and expectations. The prompt is drastically reduced because the model has already been pretrained on the ReAct cycle; now the aim is just to get the best response, plus the ability to swap the system prompt for something different!

1. **Question**: {Insert user question here}
2. **Thought**: Think step by step about how to approach this question.
3. **Action**: Determine what action to take next:
   - [Search]: Look for relevant information online.
   - [Analyze]: Break down the problem into smaller parts.
   - [Summarize]: Provide a summary of known facts related to the question.
4. **Action Input**: Specify any details needed for the action.
5. **Observation**: Describe what was found or learned from the action taken.

Repeat steps 2-5 as necessary to refine your answer.

6. **Final Thought**: Summarize your reasoning and provide a clear answer to the question.
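The cycle above can be sketched as a driver loop. This is a hedged illustration of the ReAct pattern, not the model's actual harness; `llm` and the action handlers are hypothetical stand-ins:

```python
# Illustrative ReAct loop: Thought -> Action -> Action Input -> Observation,
# repeated until the model produces a final answer or the step budget runs out.

def react_loop(question, llm, actions, max_steps=5):
    """Drive a Thought/Action/Observation loop until the model answers.

    `llm` is any callable that takes the transcript so far and returns a dict
    with either a "final_answer" or an "action" plus "action_input".
    `actions` maps action names (e.g. Search/Analyze/Summarize) to handlers.
    """
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)                 # model emits Thought + Action
        if step.get("final_answer"):
            return step["final_answer"]
        name, arg = step["action"], step["action_input"]
        observation = actions[name](arg)       # run the chosen tool
        transcript += f"Observation: {observation}\n"
    return "No answer within step budget."
```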
1. **Question**: {Insert user question here}
2. **Thought**: Think step by step about how to approach this question.
3. **Action**: Determine what action to take next:
   - [Plan]: Create a plan or methodology for the task, selecting from known methods first if available.
   - [Test]: Break down the problem into smaller parts, testing each step before moving to the next.
   - [Act]: Provide a summary of known facts related to the question; generate the full answer from the successful steps.
4. **Action Input**: Specify any details needed for the action.
5. **Observation**: Describe what was found or learned from the action taken.

Repeat steps 2-5 as necessary to refine your answer.

6. **Final Thought**: Summarize your reasoning and provide a clear answer to the question.

Here we can even specify the graph nodes as actions! The model can be trained on generating basic internal graphs of methodologies, such as Think, Plan, Act or Research, Plan, Refine, Act: we now give the model a method to generate methods! By utilizing prompts such as these you force a structured output, but because the model has already been trained, a reduced input template suffices. We need larger inputs tailored to our own use, not piggybacked by hidden prompts. Now we embed the ReAct process into the actual model, train it on these processes, and fine-tune the internal process.
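The node-based view of methodologies can be sketched as a tiny graph structure. The method names follow the text above; the code itself is illustrative only:

```python
# Sketch of "graph nodes as actions": each methodology is a small directed
# chain of named steps the model can walk. Names follow the text above.
METHODS = {
    "react":    ["Think", "Plan", "Act"],
    "research": ["Research", "Plan", "Refine", "Act"],
}

def walk(method: str) -> str:
    """Render a methodology as the arrow-chain a prompt template encodes."""
    return " -> ".join(METHODS[method])

print(walk("research"))  # Research -> Plan -> Refine -> Act
```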

Once we achieve a very overfit state we can remove the template and return to a simple Alpaca template for training! This resets the external model and sets the process to be internally triggered by the prompt template used.
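For reference, the standard Alpaca instruction template mentioned above looks like this (reproduced from the common Alpaca format, not from this repo):

```python
# The widely used Alpaca instruction template; returning to it after heavy
# ReAct training is the "reset" described above.
ALPACA_TEMPLATE = """Below is an instruction that describes a task. \
Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
{response}"""

prompt = ALPACA_TEMPLATE.format(instruction="Summarize the ReAct cycle.", response="")
```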

Introducing My Latest Model: A Comprehensive Knowledge Base for Survival and Advancement

I’m thrilled to present my latest model, which combines cutting-edge AI capabilities with a vast knowledge base spanning religious texts, historical documents, and scientific data. This model was initially trained extensively on Bible data and then fine-tuned using Hugging Face documentation, medical diagnosis libraries, disease classifications, counseling sessions, and even role-playing scenarios—with Star Trek themes included for good measure! To enhance its conversational abilities, I incorporated methodologies from Stanford, focusing on function calling, Python, and general coding practices. The training datasets also included a sophisticated Chain of Thought dataset from Chinese groups, after numerous mergers and realignments. Despite some initial challenges, I persisted in refining the model, scouring the web for additional resources. Significant effort was dedicated to framing data for instruction, utilizing Alpacas, and re-configuring tensors for sequence-to-sequence tasks and translation. This revealed the versatility of tensors, enabling the model to excel in various neural network tasks—from entity matching to JSON output and sentence masking. Specialized models focusing on actions and coding were also developed.

Training Methodology: Establishing a Solid Foundation

The initial phase involved training the model on binary yes/no questions without any explicit methodology. This was crucial in establishing a baseline for the model’s decision-making capabilities. The model was first trained using a simple production prompt, known as Prompt A, which provided basic functionality. Although this prompt was imperfect, it fit the dataset and set the stage for further refinement.

Methodology Development: Enhancing Performance through Iteration

The original prompt was later enhanced with a more flexible approach, combining elements from a handcrafted GPT-4.0 prompt. This adaptation aligned the model with my personal agent system, allowing it to better respond to diverse tasks and methodologies. I discovered that regularly updating the model with new methodologies significantly enhanced its performance. The iterative process involved refining prompts and experimenting with different training strategies to achieve optimal results. A significant portion of the training focused on enabling the model to use tools effectively. For instance, if the model needed to think, it would use a “think tool” that queried itself and provided an internal response. This tool-based approach was instrumental in enhancing the model’s reasoning capabilities, though it slowed down the response time on certain hardware like the RTX 2030. Despite the slower response time, the model’s ability to perform complex internal queries resulted in more accurate and well-reasoned outputs.

Training for Comprehensive Responses: Prompts and Epochs

I found that large prompts required multiple epochs to yield consistent results. However, fewer epochs were needed when prompts were simplified or omitted. The purpose of large prompts during training was to give the model a wide range of response styles, allowing it to adjust parameters for various tasks. This approach helped the model internalize methodologies for extracting information, which is central to fine-tuning. The training emphasized teaching the model to plan and execute complex tasks, such as generating complete software without errors.

It has the QA chat template, and a GGUF version is available. I will also realign to the ChatML prompt template and make another version for Ollama usage.
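For reference, the ChatML layout mentioned above wraps each turn in the standard `<|im_start|>`/`<|im_end|>` markers. A minimal serializer sketch, illustrative only:

```python
# Serialize role/content messages into the ChatML wire format, ending with an
# open assistant turn for the model to complete.
def to_chatml(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    return "\n".join(parts) + "\n<|im_start|>assistant\n"

text = to_chatml([{"role": "user", "content": "hi"}])
```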

Training Regimes:

  • Alpaca
  • ChatML / OpenAI / MistralAI
  • Text Generation
  • Question/Answer (Chat)
  • Planner
  • Instruction/Input/Response (instruct)
  • Mistral Standard Prompt
  • Translation Tasks
  • Entity / topic detection
  • Book recall
  • Coding challenges, code feedback, code summarization, commenting code, code planning and explanation: software generation tasks
  • Agent ranking and response analysis
  • Medical tasks
    • PubMed
    • Diagnosis
    • Psychiatry
    • Counselling
    • Life Coaching
    • Note taking
    • Medical SMILES
    • Medical Reporting
  • Virtual laboratory simulations
  • Chain of thoughts methods
  • One shot / Multi shot prompting tasks

General Internal Methods:

Trained for multi-task operations as well as RAG and function calling.

This model is a fully functioning model and is fully uncensored.

The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.

The focus has been mainly on methodology:

  • Chain of thoughts
  • step by step planning
  • tree of thoughts
  • forest of thoughts
  • graph of thoughts
  • Agent generation: voting, ranking, ... dual-agent response generation
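The voting/ranking idea in the last bullet can be sketched as a simple majority vote over candidate responses (a self-consistency-style illustration, not the model's actual mechanism):

```python
# Sample several candidate answers (e.g. from multiple agents or multiple
# generations) and return the one most candidates agree on.
from collections import Counter

def vote(candidates: list[str]) -> str:
    """Pick the most common answer among candidates."""
    return Counter(candidates).most_common(1)[0][0]

print(vote(["A", "B", "A"]))  # A
```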

Training Philosophy

Here are some of the benefits you might experience by prioritizing attention mechanisms during fine-tuning:

Enhanced Contextual Understanding:

Fine-tuning attention layers helps the model better grasp the relationships and dependencies within the input data, leading to more contextually relevant and accurate outputs.

Improved Control over Generation:

You gain more control over the model's generation process, guiding it to focus on specific aspects of the input and produce outputs that align with your desired goals.

More Creative and Diverse Outputs:

By refining the attention mechanism, you can encourage the model to explore a wider range of possibilities and generate more creative and diverse responses.

Reduced Overfitting:

Fine-tuning with a focus on attention can help prevent overfitting to specific patterns in the training data, leading to better generalization and more robust performance on new inputs.
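One common way to prioritize attention mechanisms during fine-tuning is to restrict trainable parameters (for example, LoRA adapters) to the attention projections. The module names below match Mistral-style checkpoints and are an assumption on my part, not taken from this card:

```python
# Freeze everything except attention projection weights; the q/k/v/o names
# are Mistral-style conventions and may differ on other architectures.
ATTENTION_MODULES = ["q_proj", "k_proj", "v_proj", "o_proj"]

def trainable(param_name: str) -> bool:
    """Return True only for attention projection parameters."""
    return any(m in param_name for m in ATTENTION_MODULES)

print(trainable("model.layers.0.self_attn.q_proj.weight"))  # True
print(trainable("model.layers.0.mlp.gate_proj.weight"))     # False
```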

“Epochs are the key to effective training, rather than merely mass dumping examples—unless those examples are interconnected within a single or multiple conversations that teach through dialogue.”

My personal training methods are unconventional. I prioritize creating conversations that allow the model to learn new topics from diverse perspectives. This approach is essential, as many models are losing their unique personalities. Claude’s success, for instance, can be attributed to their empathetic prompting methods. It’s important for the model to express itself, even during training, which can be challenging. Role-playing and conversational training are effective strategies to help the model learn to communicate naturally. Currently, the training has become overly focused on technical methodologies and task expectations, resulting in a loss of personality.

Model size: 7.24B params · Tensor type: FP16 (Safetensors)

Model tree for LeroyDyer/_Spydaz_Web_AI_ChatQA_Reasoning101_Project
