Although notches appear commonly in engineering applications, they remain demanding design objects. The thread root of a screw is also a notch and acts as a stress concentration where, owing to the high pre-stress, considerable plastification occurs. VTT Technical Research Centre of Finland has performed numerous fatigue tests on notched components to examine whether the few dozen openings and retightenings of a screw due to overhauls during the life of a diesel engine could lower the fatigue limit and initiate cumulative damage. The subject of this thesis is crack growth from a notch where plastic deformation occurs. The objective is to determine the crack growth path and fatigue life of a notched specimen. In this thesis, studies concerning crack growth in elastic-plastic material are discussed, critical plane models for crack initiation are presented, and the fatigue life and crack growth path of a notched component are calculated. Using the finite element method (FEM), the stress-strain states at the notch root and at the tip of the propagating crack are computed. The tensile-based Coffin-Manson model gives the most accurate result when computing total fatigue life, but the shear-based Fatemi-Socie model is more successful in describing the crack nucleation location and propagation path. Both models had difficulties predicting the effect of unloading. Further material testing is needed to obtain the material's fatigue properties.
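For reference, the two critical-plane models named above are commonly written in the standard strain-life form; the notation below follows the usual textbook convention and is not taken from the thesis itself:

```latex
% Coffin-Manson relation (tensile, strain-life): total strain amplitude
% as elastic (Basquin) plus plastic terms, N_f = cycles to failure
\frac{\Delta\varepsilon}{2}
  = \frac{\sigma'_f}{E}\,(2N_f)^{b} + \varepsilon'_f\,(2N_f)^{c}

% Fatemi-Socie parameter (shear-based critical plane): maximum shear
% strain amplitude, scaled by the normal stress on the critical plane
\frac{\Delta\gamma_{\max}}{2}
  \left(1 + k\,\frac{\sigma_{n,\max}}{\sigma_y}\right)
  = \frac{\tau'_f}{G}\,(2N_f)^{b_\gamma} + \gamma'_f\,(2N_f)^{c_\gamma}
```

Here $\sigma'_f$, $\varepsilon'_f$, $b$, $c$ (and their shear counterparts $\tau'_f$, $\gamma'_f$, $b_\gamma$, $c_\gamma$) are the fatigue properties whose experimental determination the abstract says requires further material testing; $k$ is a material constant and $\sigma_{n,\max}$ is the maximum normal stress on the plane of maximum shear strain.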
The aim of this thesis is to provide viable methods for improving the return position (RP) of a relevant document when a user submits a natural language query (NLQ). For demonstration purposes, we use IBM's Watson Discovery Service (WDS) as a search engine that employs supervised machine learning. This feature of WDS enables a user to train the tool to associate the language used in an NLQ with the language used in documents labelled as relevant. Instead of mapping an NLQ directly to a relevant document, WDS builds a model in which language similar to that of the query is associated with documents whose language resembles that of the documents labelled as relevant. The search engine first retrieves the top 100 documents and then ranks them based on the training examples provided by the user. In other words, the training examples are applied only after the search is complete and the first 100 documents have been collected. These 100 documents are retrieved based on the options that have been enabled, such as keywords, entities, relations, semantic roles, concepts, category classification, sentiment analysis, emotion analysis, and element classification (Watson Discovery Service, 2019). Re-ranking only the first 100 documents presents a challenge when the user's language is not present in the ingested documents. For example, the ingested documents could be technical documents written in official terminology, while the user could be searching with a word commonly used among colleagues. Even when a training example maps the user's type of language to a relevant document, the user will not get the expected document, because it will not have been among the first 100 documents and therefore will not be re-ranked.
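The two-stage pipeline described above can be sketched as follows. This is a minimal illustration of the retrieve-then-rerank pattern, not the WDS API: the scoring functions (raw term overlap for retrieval, overlap with previously labelled relevant documents for re-ranking) are deliberately simplified stand-ins, and all names are hypothetical.

```python
def retrieve(corpus, query, k=100):
    """Stage 1: score every document by raw term overlap with the query
    and keep the top-k candidates (WDS keeps the first 100)."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]


def rerank(candidates, query, training_examples):
    """Stage 2: re-order the stage-1 candidates using training examples
    (pairs of a past query and a document labelled relevant for it).
    A candidate is boosted by its vocabulary overlap with documents
    labelled relevant for similar past queries. Note the limitation the
    abstract describes: a relevant document that did not survive stage 1
    can never be recovered here, because only candidates are re-ranked."""
    relevant_vocab = set()
    for ex_query, ex_doc in training_examples:
        if set(ex_query.lower().split()) & set(query.lower().split()):
            relevant_vocab |= set(ex_doc.lower().split())
    return sorted(
        candidates,
        key=lambda doc: len(relevant_vocab & set(doc.lower().split())),
        reverse=True,
    )
```

A small usage example: with documents about engines and a training example pairing an earlier engine query with a specific report, that report is promoted during re-ranking, while a document sharing no terms with the query never reaches stage 2 at all.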
Therefore, this thesis examines various tools and methods that would enable us to improve the return position of the relevant documents a user expects.