Search results for: distinguish
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 398

98 Characterization of Articular Cartilage Based on the Response of Cartilage Surface to Loading/Unloading

Authors: Z. Arabshahi, I. Afara, A. Oloyede, H. Moody, J. Kashani, T. Klein

Abstract:

Articular cartilage is a fluid-swollen tissue of synovial joints that provides a lubricated surface for articulation and facilitates load transmission. The biomechanical function of this tissue is highly dependent on the integrity of its ultrastructural matrix. Any alteration of the articular cartilage matrix, whether by injury or by degenerative conditions such as osteoarthritis (OA), compromises its functional behaviour. Assessment of articular cartilage in the early stages of the degenerative process is therefore important to prevent or reduce further joint damage and its associated socio-economic impact, and there has been increasing research interest in the functional assessment of articular cartilage. This study developed a characterization parameter for articular cartilage assessment based on the response of the cartilage surface to loading/unloading. The rationale is that the response of articular cartilage to compressive loading is significantly depth-dependent, with the superficial zone and the underlying matrix responding differently to deformation; in addition, the alteration of the cartilage matrix in the early stages of degeneration is often characterized by proteoglycan (PG) loss in the superficial layer. In this study, it is hypothesized that the response of the superficial layer differs between normal and proteoglycan-depleted tissue. To test this hypothesis, samples of visually intact and artificially proteoglycan-depleted bovine cartilage were compressed at a constant rate to 30 percent strain using a ring-shaped indenter with an integrated ultrasound probe and then unloaded. The response of the indirectly loaded articular surface was monitored with ultrasound throughout loading/unloading (deformation/recovery). The rate of the cartilage surface response to loading/unloading was observed to differ between normal and PG-depleted cartilage samples. Principal Component Analysis was performed to assess whether the cartilage surface response to loading/unloading can distinguish between normal and artificially degenerated cartilage samples. The classification analysis of this parameter showed an overlap between normal and degenerated samples during loading, whereas there was a clear distinction between them during unloading. This study showed that the cartilage surface response to loading/unloading has the potential to be used as a parameter for cartilage assessment.
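
The following sketch illustrates the kind of classification analysis described above: Principal Component Analysis applied to surface-response curves, followed by an inspection of group separation. It is an illustrative outline only, not the authors' code; the file names, array shapes, and label encoding are hypothetical assumptions.

```python
# Illustrative sketch (not the authors' code): PCA on surface displacement-vs-time
# curves recorded by the ultrasound probe during loading/unloading.
import numpy as np
from sklearn.decomposition import PCA

# curves: one row per sample, columns = surface displacement sampled over time
# labels: 0 = visually intact, 1 = artificially PG-depleted (assumed encoding)
curves = np.load("unloading_curves.npy")      # hypothetical file, shape (n_samples, n_timepoints)
labels = np.load("labels.npy")                # hypothetical file

pca = PCA(n_components=2)
scores = pca.fit_transform(curves)            # project each curve onto the first two PCs

# Inspect group separation in PC space; a clear split for the unloading curves
# would mirror the distinction reported in the abstract.
for group in (0, 1):
    centroid = scores[labels == group].mean(axis=0)
    print(f"group {group}: centroid in PC space = {centroid}")
```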

Keywords: cartilage integrity parameter, cartilage deformation/recovery, cartilage functional assessment, ultrasound

Procedia PDF Downloads 169
97 Preliminary Study of Gold Nanostars/Enhanced Filter for Keratitis Microorganism Raman Fingerprint Analysis

Authors: Chi-Chang Lin, Jian-Rong Wu, Jiun-Yan Chiu

Abstract:

Myopia is a ubiquitous condition that must be corrected with optical lenses and affects many people in their daily lives. In recent years, younger people have become increasingly interested in contact lenses because of their convenience and aesthetics. Clinically, incorrect contact lens use and unsupervised cleaning increase the risk of corneal infection, known as ocular keratitis. In order to identify the causative microorganisms, new detection and analysis methods that are rapid and more accurate are needed. In our study, we take advantage of the unique Raman fingerprints of different functional groups as a distinct and fast examination tool for microorganisms. Raman scattering signals are normally too weak for detection, especially in biological samples, so we applied special SERS enhancement substrates to generate stronger Raman signals. The SERS filter designed in this work was prepared by depositing silver nanoparticles directly onto a cellulose filter surface, and suspended nanoparticles, gold nanostars (AuNSs), were also introduced to achieve better enhancement for low-concentration analytes (i.e., various bacteria). The research also focuses on the shape effect of the synthesized AuNSs, whose needle-like surface morphology may create more hot-spots and thus higher SERS enhancement. We used the newly designed SERS technology to distinguish bacteria associated with ocular keratitis at the strain level, and specific Raman and SERS fingerprints were grouped through a pattern-recognition process. We report a new method combining different SERS substrates that can be applied to clinical microorganism detection at the strain level with simple, rapid preparation and low cost. The presented SERS technology not only shows great potential for clinical bacterial detection but can also be used for environmental pollution and food safety analysis.

Keywords: bacteria, gold nanostars, Raman spectroscopy, surface-enhanced Raman scattering filter

Procedia PDF Downloads 133
96 Effects of Post-sampling Conditions on Ethanol and Ethyl Glucuronide Formation in the Urine of Diabetes Patients

Authors: Hussam Ashwi, Magbool Oraiby, Ali Muyidi, Hamad Al-Oufi, Mohammed Al-Oufi, Adel Al-Juhani, Salman Al-Zemaa, Saeed Al-Shahrani, Amal Abuallah, Wedad Sherwani, Mohammed Alattas, Ibraheem Attafi

Abstract:

Ethanol must be accurately identified and quantified to establish its use and contribution in criminal cases and forensic medicine. In some situations it may be necessary to reanalyze an old specimen; it is therefore essential to understand the effect of storage conditions and how long the result of a reanalyzed specimen remains reliable and reproducible. Additionally, ethanol can be produced via multiple in vivo and in vitro processes, particularly in diabetic patients, and the results can be affected by storage conditions and time. Various factors should therefore be considered in order to distinguish between in vivo and in vitro alcohol generation in the urine samples of diabetic patients. This study identifies and quantifies ethanol and ethyl glucuronide (EtG) in diabetic patients' urine samples stored under two different conditions over time. Ethanol levels were determined using gas chromatography-headspace (GC-HS), and EtG levels were determined using an immunoassay (RANDOX) technique. Ten urine specimens were collected in standard containers, and each specimen was split between two containers. The specimens were divided into two groups: those kept at room temperature (25 °C) and those kept cold (2-8 °C). Ethanol and EtG levels were determined serially over a two-week period. Initial results showed that none of the specimens tested positive for ethanol or EtG. At room temperature (15-25 °C), 7 and 14 days after sample collection, the average ethanol concentration increased from 1.7 mg/dL to 2 mg/dL, and the average EtG concentration increased from 108 ng/mL to 186 ng/mL. At 2-8 °C, the average ethanol concentration was 0.4 and 0.5 mg/dL, and the average EtG concentration was 138 and 124 ng/mL, seven and fourteen days after sample collection, respectively. Ethanol and EtG levels determined 14 days post-collection in refrigerated specimens were considerably lower than in those stored at room temperature. Room-temperature storage produced a considerable increase in EtG concentrations (14-day range 0-186 ng/mL) despite negative initial results for all specimens. Because EtG can be produced after sample collection, it is not a reliable indicator of recent alcohol consumption, given the possibility of misleading EtG results due to in vitro EtG production in the urine of diabetic patients.

Keywords: ethyl glucuronide, ethanol, forensic toxicology, diabetic

Procedia PDF Downloads 87
95 Surgical Imaging in Ancient Egypt

Authors: Mohamed Ahmed Madkour, Haitham Magdy Hamad

Abstract:

This research aims to study surgical science and imaging in ancient Egypt: how surgical cases, whether caused by injury or by disease requiring surgical intervention, were diagnosed and treated. Ancient Egyptian physicians tried to move away from magical and theological thinking toward a stand-alone experimental science; they were able to distinguish between diseases and divided them into internal and external diseases, a division that still exists in modern medicine. There is no evidence of the extent of prehistoric knowledge of medicine and surgery other than skeletal remains, yet it is likely that people of those times were familiar with some means of treatment. Surgery in the Stone Age was rudimentary: flint, trimmed in a particular way, was used as a lancet to slit and open the skin, and wooden tree branches were used to make splints to treat bone fractures. Surgery developed further when copper was discovered, which contributed to the advancement of Egyptian civilization; more advanced tools, such as the knife and the scalpel, then appeared in the operating theatre. The climate and environmental conditions have preserved medical papyri and human remains that confirm Egyptian knowledge of surgical methods, including sedation. The ancient Egyptians attached great importance to surgery, as evidenced by scenes that depict pathological conditions and surgical procedures, but an image alone is not sufficient to prove a pathology, its presence in ancient Egypt, or its method of treatment. A number of medical papyri, especially the Edwin Smith and Ebers papyri, demonstrate the ancient Egyptian surgeon's knowledge of pathological conditions requiring surgical intervention; otherwise, their diagnosis and methods of treatment would not be described with such accuracy in these texts. Some surgeries are described in the surgical section of the Ebers papyrus. The level of surgery in ancient Egypt was high, and operations such as those for hernias and aneurysms were performed; however, no lengthy explanation of the various surgeries has come down to us, and the surgeon usually noted only "treated surgically". The Ebers papyrus shows that sharp surgical tools and cautery were used in operations where bleeding was expected, such as those for hernias, arterial sacs and tumors.

Keywords: ancient Egypt, archaeology, Egyptian history, ancient surgical imaging, Egyptian civilization, civilization

Procedia PDF Downloads 40
94 Surgical Imaging in Ancient Egypt

Authors: Ahmed Hefny Mohamed El-Badwy

Abstract:

This research aims to study surgical science and imaging in ancient Egypt: how surgical cases, whether caused by injury or by disease requiring surgical intervention, were diagnosed and treated. Ancient Egyptian physicians tried to move away from magical and theological thinking toward a stand-alone experimental science; they were able to distinguish between diseases and divided them into internal and external diseases, a division that still exists in modern medicine. There is no evidence of the extent of prehistoric knowledge of medicine and surgery other than skeletal remains, yet it is likely that people of those times were familiar with some means of treatment. Surgery in the Stone Age was rudimentary: flint, trimmed in a particular way, was used as a lancet to slit and open the skin, and wooden tree branches were used to make splints to treat bone fractures. Surgery developed further when copper was discovered, which contributed to the advancement of Egyptian civilization; more advanced tools, such as the knife and the scalpel, then appeared in the operating theatre, and there is evidence of surgery performed in ancient Egypt during the dynastic period (3200-323 BC). The climate and environmental conditions have preserved medical papyri and human remains that confirm Egyptian knowledge of surgical methods, including sedation. The ancient Egyptians attached great importance to surgery, as evidenced by scenes that depict pathological conditions and surgical procedures, but an image alone is not sufficient to prove a pathology, its presence in ancient Egypt, or its method of treatment. A number of medical papyri, especially the Edwin Smith and Ebers papyri, demonstrate the ancient Egyptian surgeon's knowledge of pathological conditions requiring surgical intervention; otherwise, their diagnosis and methods of treatment would not be described with such accuracy in these texts. Some surgeries are described in the surgical section of the Ebers papyrus (recipes 863 to 877). The level of surgery in ancient Egypt was high, and operations such as those for hernias and aneurysms were performed; however, no lengthy explanation of the various surgeries has come down to us, and the surgeon usually noted only "treated surgically". The Ebers papyrus shows that sharp surgical tools and cautery were used in operations where bleeding was expected, such as those for hernias, arterial sacs and tumors.

Keywords: ancient Egypt, Egypt, archaeology, the ancient Egyptian

Procedia PDF Downloads 37
93 Surgical Imaging in Ancient Egypt

Authors: Haitham Nabil Zaghlol Hasan

Abstract:

This research aims to study surgical science and imaging in ancient Egypt: how surgical cases, whether caused by injury or by disease requiring surgical intervention, were diagnosed and treated. Ancient Egyptian physicians tried to move away from magical and theological thinking toward a stand-alone experimental science; they were able to distinguish between diseases and divided them into internal and external diseases, a division that still exists in modern medicine. There is no evidence of the extent of prehistoric knowledge of medicine and surgery other than skeletal remains, yet it is likely that people of those times were familiar with some means of treatment. Surgery in the Stone Age was rudimentary: flint, trimmed in a particular way, was used as a lancet to slit and open the skin, and wooden tree branches were used to make splints to treat bone fractures. Surgery developed further when copper was discovered, which contributed to the advancement of Egyptian civilization; more advanced tools, such as the knife and the scalpel, then appeared in the operating theatre, and there is evidence of surgery performed in ancient Egypt during the dynastic period (3200-323 BC). The climate and environmental conditions have preserved medical papyri and human remains that confirm Egyptian knowledge of surgical methods, including sedation. The ancient Egyptians attached great importance to surgery, as evidenced by scenes that depict pathological conditions and surgical procedures, but an image alone is not sufficient to prove a pathology, its presence in ancient Egypt, or its method of treatment. A number of medical papyri, especially the Edwin Smith and Ebers papyri, demonstrate the ancient Egyptian surgeon's knowledge of pathological conditions requiring surgical intervention; otherwise, their diagnosis and methods of treatment would not be described with such accuracy in these texts. Some surgeries are described in the surgical section of the Ebers papyrus (recipes 863 to 877). The level of surgery in ancient Egypt was high, and operations such as those for hernias and aneurysms were performed; however, no lengthy explanation of the various surgeries has come down to us, and the surgeon usually noted only "treated surgically". The Ebers papyrus shows that sharp surgical tools and cautery were used in operations where bleeding was expected, such as those for hernias, arterial sacs and tumors.

Keywords: Egypt, ancient Egypt, civilization, archaeology

Procedia PDF Downloads 37
92 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms

Authors: Man-Yun Liu, Emily Chia-Yu Su

Abstract:

Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, and it places a heavy cost on the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, current treatments can only alleviate symptoms rather than cure the disease or stop its progression. There are currently several ways to diagnose AD, including medical imaging, which can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis. Compared with other diagnostic tools, blood (plasma) testing has advantages as an approach to population-based disease screening because it is simpler, less invasive, and more cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), funded by the National Institutes of Health (NIH), for data analysis and to develop a prediction model. We used independent analysis of the datasets to identify plasma protein biomarkers predicting early-onset AD. First, to compare basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Second, we used logistic regression, neural networks, and decision trees in SAS Enterprise Miner to validate the biomarkers. The dataset, generated from ADNI, contained 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) individuals, individuals with mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD). Samples were separated into two comparison groups, healthy versus MCI and healthy versus AD, which were used to compare important biomarkers of AD and MCI. In preprocessing, a t-test was used to filter the features, retaining 41/47 features for the two comparisons (healthy vs. AD and healthy vs. MCI), before applying machine learning algorithms. We then built models with four machine learning methods; the best AUCs for the two comparisons were 0.991/0.709, respectively. We want to stress that a simple, less invasive, and common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF analysis and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
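
The following sketch outlines the pipeline described above (t-test feature filtering followed by a classifier evaluated with AUC). It is an illustration only: the authors worked in SAS Enterprise Guide/Miner, and the file name, column names, and scikit-learn workflow shown here are hypothetical stand-ins.

```python
# Minimal sketch of a t-test filter plus logistic regression with AUC scoring.
import pandas as pd
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("adni_blood_biomarkers.csv")      # hypothetical file
y = (df["diagnosis"] == "AD").astype(int)          # healthy vs. AD comparison (assumed column)
X = df.drop(columns=["diagnosis"])

# Keep biomarkers whose group means differ (p < 0.05), mirroring the
# preprocessing step that retained a subset of the 146 features.
keep = [c for c in X.columns
        if ttest_ind(X.loc[y == 1, c], X.loc[y == 0, c]).pvalue < 0.05]

auc = cross_val_score(LogisticRegression(max_iter=1000),
                      X[keep], y, cv=5, scoring="roc_auc").mean()
print(f"{len(keep)} biomarkers retained, cross-validated AUC = {auc:.3f}")
```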

Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning

Procedia PDF Downloads 292
91 Effects of Mild Heat Treatment on the Physical and Microbial Quality of Salak Apricot Cultivar

Authors: Bengi Hakguder Taze, Sevcan Unluturk

Abstract:

Şalak apricot (Prunus armeniaca L., cv. Şalak) is a specific variety grown in Igdir, Turkey. The fruit has distinctive properties that distinguish it from other cultivars, such as its unique size, color, taste, and higher water content. Drying is the most widely used method for preserving apricots; however, fresh consumption is preferred for Şalak apricot because of its low dry matter content. Its high water content and climacteric nature make the fruit susceptible to rapid quality loss during storage. Hence, alternative processing methods need to be introduced to extend the shelf life of the fresh produce. Mild heat (MH) treatment is of great interest as it can reduce the microbial load and inhibit enzymatic activities. Therefore, the aim of this study was to evaluate the impact of mild heat treatment on the natural microflora found on Şalak apricot surfaces and on some physical quality parameters of the fruit, such as color and firmness. For this purpose, apricot samples were treated at temperatures between 40 and 60 °C for periods ranging from 10 to 60 min using a temperature-controlled water bath. The natural flora on the fruit surfaces was examined using the standard plating technique both before and after the treatment, and any changes in the color and firmness of the fruit samples were also monitored. Control samples initially contained 7.5 ± 0.32 log CFU/g of total aerobic plate count (TAPC), 5.8 ± 0.31 log CFU/g of yeast and mold count (YMC), and 5.17 ± 0.22 log CFU/g of coliforms. The highest log reductions in TAPC and YMC were 3.87-log and 5.8-log after treatment at 60 °C and 50 °C, respectively. Nevertheless, the fruit lost its characteristic aroma at temperatures above 50 °C; furthermore, large color changes (ΔE > 6) were observed and the firmness of the apricot samples was reduced under these conditions. On the other hand, MH treatment at 41 °C for 10 min resulted in 1.6-log and 0.91-log reductions in TAPC and YMC, respectively, with only slightly noticeable changes in color (ΔE < 3). In conclusion, application of temperatures higher than 50 °C caused undesirable changes in the physical quality of Şalak apricots. Although higher microbial reductions were achieved at those temperatures, temperatures between 40 and 50 °C should be further investigated with the fruit quality parameters in mind. Another strategy may be the use of high temperatures for short periods not exceeding 1-5 min. Finally, combining MH treatment with UV-C light irradiation can also be considered as a hurdle strategy for better inactivation results.
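
For reference, the two quality metrics quoted above can be computed as in the short sketch below. This is a worked example with illustrative numbers, not the study's data: ΔE is taken here as the CIE76 colour difference between L*a*b* readings, and log reduction is the log10 ratio of counts before and after treatment.

```python
# Worked example of the colour-difference and log-reduction metrics (illustrative values).
import math

def delta_e(lab_before, lab_after):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(lab_before, lab_after)))

def log_reduction(cfu_before, cfu_after):
    """log10 reduction in microbial counts (e.g. CFU/g)."""
    return math.log10(cfu_before / cfu_after)

print(delta_e((65.0, 10.0, 40.0), (63.5, 9.2, 38.0)))   # ~2.6, i.e. within the 'slightly noticeable' range (dE < 3)
print(log_reduction(10**7.5, 10**3.63))                  # ~3.87-log, matching the reported TAPC reduction at 60 C
```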

Keywords: color, firmness, mild heat, natural flora, physical quality, şalak apricot

Procedia PDF Downloads 107
90 Mitigation of Cascading Power Outage Caused Power Swing Disturbance Using Real-time DLR Applications

Authors: Dejenie Birile Gemeda, Wilhelm Stork

Abstract:

The power system is one of the most important systems in modern society. In the view of several power system operators, existing power systems are approaching their critical operating limits. With increasing load demand, high-capacity, long transmission networks are widely used to meet requirements. With the integration of renewable energy sources such as wind and solar, uncertainty and intermittency bring bigger challenges to the operation of power systems. These dynamic uncertainties lead to power disturbances. Disturbances in a heavily stressed power system cause distance relays to mal-operate or produce false alarms during post-fault power oscillations. Such unintended relay operation may propagate and trigger cascaded tripping, leading to a total power system blackout. This is due to the relays' inability to make an appropriate tripping decision based on the ensuing power swing. According to the N-1 criterion, electric power systems are generally designed to withstand a single failure without violating any operating limit. As a result, some overloaded components, such as overhead transmission lines, can still operate for several hours under overload conditions. However, when a large power swing occurs, the zone-3 settings of the distance relay may trip the transmission line after a short time delay, acting so quickly that the system operator has no time to respond and stop the cascade. Relay misoperation in the absence of a fault, caused by a power swing, can lead to a significant loss in economic performance and thus a loss in revenue for power companies. This paper proposes a method to distinguish stable from unstable power swings using dynamic line rating (DLR) in response to power swings or disturbances. As opposed to static line rating (SLR), dynamic line rating supports effective mitigation actions against propagating cascading outages in a power grid. Effective utilization of existing transmission line capacity through machine-learning-based DLR predictions will improve the operating point of distance relay protection, thus reducing unintended power outages due to power swings.
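
As a purely illustrative sketch of how a real-time dynamic rating could supervise zone-3 tripping during a swing (this is not the paper's implementation; the weather model, coefficients, and thresholds below are invented assumptions), a trip request is only let through when the measured loading actually exceeds the line's current thermal capability rather than its conservative static rating.

```python
# Hypothetical DLR supervision of a zone-3 distance element (assumed model, not the authors' method).
def dynamic_line_rating(static_rating_amps, ambient_c, wind_ms):
    """Toy DLR model: cooler and windier conditions raise the allowable loading."""
    weather_gain = 1.0 + 0.01 * max(0.0, 25.0 - ambient_c) + 0.05 * min(wind_ms, 10.0)
    return static_rating_amps * weather_gain

def allow_zone3_trip(measured_amps, static_rating_amps, ambient_c, wind_ms):
    """Block swing-induced zone-3 trips while the line is still within its dynamic rating."""
    return measured_amps > dynamic_line_rating(static_rating_amps, ambient_c, wind_ms)

# Example: a swing pushes loading to 1.15 x the static rating on a cool, windy day.
print(allow_zone3_trip(1150.0, 1000.0, 10.0, 6.0))   # False: DLR headroom remains, hold the trip
```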

Keywords: blackout, cascading outages, dynamic line rating, power swing, overhead transmission lines

Procedia PDF Downloads 105
89 A Mixed Thought Pattern and the Question of Justification: A Feminist Project

Authors: Angana Chatterjee

Abstract:

Feminist scholars point out various problematic issues in traditional mainstream Western thought and theories. The thought practices behind discriminatory and oppressive social practices are based on concepts that play a pivotal role in theorisation. Therefore, many feminist philosophers take up reformation or reconceptualisation projects. Such projects have bearings on various aspects of philosophical thought, namely ontology, epistemology, logic, ethics, social and political thought, and so on. In tune with this spirit, the present paper suggests a well-established thought pattern which is not Western but which has the potential to deal with the problems of mainstream Western thought culture identified by feminist critics. The Indian thought pattern is theorised in the domain of Indian logic, which is a study of inference patterns. Since, in the Indian context, inference is considered a source of knowledge, certain epistemological questions are linked with the discussion of inference. One of the key epistemological issues concerns justification. The study of the nature of the derivation of knowledge from available evidence, and of the nature of the evidence itself, is an integral part of the discipline called Indian logic. If we contrast the Western tradition of thought with the Indian one, however, we find that Indian logic has some peculiar features which may be shown to deal more plausibly with the problems identified by feminist scholars in Western thought culture. The tradition of Western logic, starting from Aristotle, has maintained a sharp difference between two forms of reasoning, namely deductive and inductive. These two forms of reasoning have been theorised and dealt with separately within the domain of the study called 'logic', and various philosophical problems have been raised around concepts and issues regarding both deductive and inductive reasoning. Indian logic does not distinguish between deduction and induction as thought patterns, whereas this distinction is customary in the Western tradition. Although various interpretations of this peculiarity of the Indian thought pattern can be found, such mixed patterns are actually very close to the cross-cultural pattern in which human beings tend to argue or infer from the available data or evidence. Feminist theories can operate successfully in the domain of lived experience if they make use of such a mixed pattern of reasoning or inference. By offering sound inferential knowledge based on contextual evidence, the Indian thought pattern can serve feminist purposes in a meaningful way.

Keywords: feminist thought, Indian logic, inference, justification, mixed thought pattern

Procedia PDF Downloads 67
88 Roboweeder: A Robotic Weeds Killer Using Electromagnetic Waves

Authors: Yahoel Van Essen, Gordon Ho, Brett Russell, Hans-Georg Worms, Xiao Lin Long, Edward David Cooper, Avner Bachar

Abstract:

Weeds reduce farm and forest productivity, invade crops, smother pastures, and some can harm livestock. Farmers need to spend a significant amount of money to control weeds by biological, chemical, cultural, and physical methods. To address the global agricultural labor shortage and eliminate poisonous chemicals, a fully autonomous, eco-friendly, and sustainable weeding technology has been developed. It takes the form of a weeding robot, 'Roboweeder'. Roboweeder comprises a four-wheel-drive self-driving vehicle, a 4-DOF robotic arm mounted on top of the vehicle, an electromagnetic wave generator (magnetron) mounted on the "wrist" of the robotic arm, 48 V battery packs, and a control/communication system. Cameras are mounted on the front and two sides of the vehicle. Using image processing and recognition, distinct types of weeds are detected before being eliminated. Electromagnetic wave technology is applied to heat individual weeds and clusters dielectrically, causing them to wilt and die. The 4-DOF robotic arm was modeled mathematically based on its structure and mechanics, each joint's load, the brushless DC motor and worm gear characteristics, forward kinematics, and inverse kinematics. A Proportional-Integral-Derivative (PID) control algorithm is used to control the robotic arm's motion and ensure that the waveguide aperture points at the detected weeds. GPS and machine vision are used to traverse the farm and avoid obstacles without the need for supervision. A Roboweeder prototype has been built. Multiple test trials show that Roboweeder is able to detect, point at, and kill the pre-defined weeds successfully, although further improvements are needed, such as reducing the weed-killing time and developing a new waveguide with a smaller aperture to avoid killing surrounding crops. This technology changes the tedious, time-consuming, and expensive weeding process and allows farmers to grow more, go organic, and eliminate operational headaches. A patent on this technology is pending.
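
A minimal discrete PID loop of the kind described for steering each joint so that the waveguide aperture tracks a detected weed is sketched below. The gains, time step, and first-order joint response are illustrative placeholders, not Roboweeder's actual parameters.

```python
# Illustrative discrete PID controller driving one joint toward a target angle.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive one joint toward the angle returned by inverse kinematics (hypothetical value).
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle, target = 0.0, 0.8           # radians
for _ in range(500):
    torque_cmd = pid.update(target, angle)
    angle += 0.01 * torque_cmd     # crude first-order joint response, for illustration only
print(f"final joint angle = {angle:.3f} rad (target {target})")
```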

Keywords: autonomous navigation, machine vision, precision heating, sustainable and eco-friendly

Procedia PDF Downloads 191
87 Implementing Effective Mathematical-Discussion Programme for Mathematical Competences in Primary School Classroom in South Korea

Authors: Saeyoung Lee

Abstract:

Enthusiasm for education in Korea is extremely high, and it is well known that Korean children achieve good scores in mathematics. However, behind this good reputation, children in Korea easily lose self-confidence, tend to complain, and rarely participate in class because of excessive competition, which leads to a lack of competences. In this regard, the main goal of this paper is, by applying a peer-communication-based programme in mathematics education for ten-year-old children, to improve self-managemental competence so that children gain self-confidence, communicative competence so that they can deal with complaints, and communitive competence so that they participate in class. Fourteen ten-year-old children at a primary school in Gangnam, Seoul, Korea, participated in the research from March 2018 to October 2018 and followed the peer-communication-based programme during this period. Every mathematics class was run in the same way. First, a problem was given to the children. Second, the children were asked to find as many ways of solving the problem as they could by themselves. Third, all of the children's solution methods were posted on the board, and groups of three children sorted the methods into valid and invalid ones. Lastly, all the children held a discussion to find the most efficient of the valid methods. A pre-test was carried out with a Likert-scale questionnaire before applying the programme. The pre-test results were 3.89 for self-managemental competence, 3.91 for communicative competence, and 4.19 for communitive competence. A post-test was carried out with the same questionnaire after applying the programme. The post-test results were 3.93 for self-managemental competence, 4.23 for communicative competence, and 4.20 for communitive competence. This means that, by applying the peer-communication-based programme in mathematics education, ten-year-old children in Korea could improve their self-managemental, communicative, and communitive competences. It worked especially well for communicative competence, which increased by 0.32 points. In light of this research, Korean mathematics education based on competition, which leads to a lack of competences, should be changed to a cooperative structure to make students more competent rather than merely high-scoring. In conclusion, innovative teaching methods focused on improving competences, such as the peer-communication-based programme applied in this research, strongly need to be studied further and widely used.

Keywords: competences, mathematics education, peer-communication, primary education

Procedia PDF Downloads 111
86 A Modest Proposal for Deep-Sixing Propositions in the Philosophy of Language

Authors: Patrick Duffley

Abstract:

Hanks (2021) identifies three Frege-inspired commitments concerning propositions that are widely shared across the philosophy of language: (1) propositions are the primary, inherent bearers of representational properties and truth-conditions; (2) propositions are neutral representations possessing a ‘content’ that is devoid of ‘force; (3) propositions can be entertained or expressed without being asserted. Hanks then argues that the postulate of neutral content must be abandoned, and the primary bearers of truth-evaluable representation must be identified as the token acts of assertoric predication that people perform when they are thinking or speaking about the world. Propositions are ‘types of acts of predication, which derive their representational features from their tokens.’ Their role is that of ‘classificatory devices that we use for the purposes of identifying and individuating mental states and speech acts,’ so that ‘to say that Russell believes that Mont Blanc is over 4000 meters high is to classify Russell’s mental state under a certain type, and thereby distinguish that mental state from others that Russell might possess.’ It is argued in this paper that there is no need to classify an utterance of 'Russell believes that Mont Blanc is over 4000 meters high' as a token of some higher-order utterance-type in order to identify what Russell believes; the meanings of the words themselves and the syntactico-semantic relations between them are sufficient. In our view what Hanks has accomplished in effect is to build a convincing argument for dispensing with propositions completely in the philosophy of language. By divesting propositions of the role of being the primary bearers of representational properties and truth-conditions and fittingly transferring this role to the token acts of predication that people perform when they are thinking or speaking about the world, he has situated truth in its proper place and obviated any need for abstractions like propositions to explain how language can express things that are true. This leaves propositions with the extremely modest role of classifying mental states and speech acts for the purposes of identifying and individuating them. It is demonstrated here however that there is no need whatsoever to posit such abstract entities to explain how people identify and individuate such states/acts. We therefore make the modest proposal that the term ‘proposition’ be stricken from the vocabulary of philosophers of language.

Keywords: propositions, truth-conditions, predication, Frege, truth-bearers

Procedia PDF Downloads 26
85 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they can allow faults to be more clearly discriminated from environmental disturbances.
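
The following conceptual sketch illustrates the input-residual idea described above. The interfaces are assumptions rather than the authors' implementation: the inverse simulation is stood in for by a simple numerical placeholder, so that the step change in the input residual caused by a degraded actuator can be seen.

```python
# Conceptual sketch of InvSim-style input residuals (placeholder inverse model).
import numpy as np

def inverse_simulation(measured_trajectory):
    """Placeholder: recover per-step actuator inputs from the measured path.
    In practice this would iteratively invert the rover's mathematical model."""
    return np.gradient(measured_trajectory)   # stand-in for the recovered inputs

commanded_inputs = np.ones(100)               # hypothetical constant wheel command
# A simulated actuator fault halves the effective input from sample 60 onward.
measured_trajectory = np.cumsum(np.concatenate([np.ones(60), 0.5 * np.ones(40)]))
recovered_inputs = inverse_simulation(measured_trajectory)

input_residual = commanded_inputs - recovered_inputs
fault_onset = int(np.argmax(np.abs(input_residual) > 0.25))
print(f"step change in input residual detected at sample {fault_onset}")
```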

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 275
84 Differences in Guilt, Shame, Self-Anger, and Suicide Cognitions Based on Recent Suicide Ideation and Lifetime Suicide Attempt History

Authors: E. H. Szeto, E. Ammendola, J. V. Tabares, A. Starkey, J. Hay, J. G. McClung, C. J. Bryan

Abstract:

Introduction: Suicide is a leading cause of death globally, accounting for more deaths annually than war, acquired immunodeficiency syndrome, homicides, and car accidents, while an estimated 140 million individuals have significant suicide ideation (SI) each year in the United States. Typical risk factors such as hopelessness, depression, and psychiatric disorders can predict suicide ideation but cannot distinguish those who ideate from those who attempt suicide (SA). The Fluid Vulnerability Theory of suicide posits that a person's activation of the suicidal mode is predicated on their predisposition, triggers, baseline/acute risk, and protective factors. The current study compares self-conscious cognitive-affective states (including guilt, shame, anger towards the self, and suicidal beliefs) among patients based on the endorsement of recent SI (i.e., past two weeks; acute risk) and lifetime SA (i.e., baseline risk). Method: A total of 2,722 individuals in an outpatient primary care setting were included in this cross-sectional, observational study; data for 2,584 were valid and retained for analysis. The Differential Emotions Scale, measuring guilt, shame, and self-anger, and the Suicide Cognitions Scale, measuring suicide cognitions, were administered. Results: A total of 2,222 individuals reported no recent SI or lifetime SA (Group 1), 161 reported recent SI only (Group 2), 145 reported lifetime SA only (Group 3), and 56 reported both recent SI and lifetime SA (Group 4). The Kruskal-Wallis test showed that guilt, shame, self-anger, and suicide cognitions were highest for Group 4 (both recent SI and lifetime SA), followed by Group 2 (recent SI only), then Group 3 (lifetime SA only), and lastly Group 1 (no recent SI or lifetime SA). Conclusion: The results on recent SI only versus lifetime SA only contribute to the literature on the Fluid Vulnerability Theory of suicide by capturing SI and SA in two different time periods, which signify the acute risk and the chronic baseline risk of the suicidal mode, respectively. It is also shown that: (a) people with a lifetime SA reported more severe symptoms than those without, (b) people with recent SI reported more severe symptoms than those without, and (c) people with both recent SI and lifetime SA were the most severely distressed. Future studies may replicate the findings here with other pertinent risk factors such as thwarted belongingness, perceived burdensomeness, and acquired capability, the last of which is consistently linked to attempting among ideators.
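
The group comparison reported above can be sketched with SciPy's Kruskal-Wallis test as follows. The score arrays are synthetic placeholders rather than the real Differential Emotions Scale / Suicide Cognitions Scale data; only the group sizes match the abstract.

```python
# Illustrative Kruskal-Wallis comparison across the four SI/SA groups (synthetic scores).
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
no_si_sa  = rng.normal(10, 3, 2222)   # Group 1: neither recent SI nor lifetime SA
si_only   = rng.normal(14, 3, 161)    # Group 2: recent SI only
sa_only   = rng.normal(12, 3, 145)    # Group 3: lifetime SA only
si_and_sa = rng.normal(16, 3, 56)     # Group 4: both

stat, p = kruskal(no_si_sa, si_only, sa_only, si_and_sa)
print(f"Kruskal-Wallis H = {stat:.1f}, p = {p:.2g}")
```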

Keywords: suicide, guilt, shame, self-anger, suicide cognitions, suicide ideation, suicide attempt

Procedia PDF Downloads 130
83 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work presents a study that demonstrates the usability of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of 'not on demand' written samples (texts that differ in degree of formality, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows. (1) The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content would also have to be considered for more accurate results; style is present in both planes. (2) Semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and presents different levels of abstraction; this 'grammar' would be a style marker. (3) Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors? (4) The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance of the author choosing the same thing. If two authors recurrently choose the same options, differently from one another, each one's option has discriminatory power. (5) Size is another issue for various attribution methods. Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail; the analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results showed that the hypothesis was confirmed and, hence, that the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
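
A small sketch of the Jaccard comparison described above: each text is reduced to the set of semiotic tags assigned during annotation (the tag names below are hypothetical), and the coefficient measures the overlap between two tag inventories.

```python
# Jaccard coefficient between tag sets of two texts (hypothetical tags).
def jaccard(tags_a: set, tags_b: set) -> float:
    """|A intersection B| / |A union B|; 1.0 means identical tag inventories."""
    union = tags_a | tags_b
    return len(tags_a & tags_b) / len(union) if union else 1.0

author1_text1 = {"euphoria", "conjunction", "intensity:high", "actor:individual"}
author1_text2 = {"euphoria", "conjunction", "intensity:high", "actor:collective"}
author2_text1 = {"dysphoria", "disjunction", "intensity:low", "actor:individual"}

print(jaccard(author1_text1, author1_text2))   # higher similarity within the same author
print(jaccard(author1_text1, author2_text1))   # lower similarity across authors
```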

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 211
82 Identification of Igneous Intrusions in South Zallah Trough-Sirt Basin

Authors: Mohamed A. Saleem

Abstract:

Using mostly seismic data, this study aims to show some examples of igneous intrusions found in parts of the Sirt Basin and to explore the period of their emplacement as well as the interrelationships between these sills. The study area is located in the south of the Zallah Trough, south-west Sirt Basin, Libya, between longitudes 18.35° E and 19.35° E and latitudes 27.8° N and 28.0° N. Based on a variety of criteria usually used as markers of igneous intrusions, twelve igneous intrusions (sills) have been detected and analysed using 3D seismic data. One or more of the following were used as identification criteria: high-amplitude reflectors paired with abrupt reflector terminations, vertical offsets or what is described as a dike-like connection, violation, saucer shape, and roughness. Because they lie between the host layers, the majority of these intrusions are classified as sills. Another distinguishing feature is the intersection geometry linking some of these sills. Each sill has been given a name simply to distinguish it from the others, such as S-1, S-2, ..., S-12. To avoid repetition in the description, the common characteristics and some statistics of these sills are shown in summary tables, while the specific characteristics noticed for individual sills are described separately. Sills S-1, S-2, and S-3 are approximately parallel to one another, their shape being governed by the syncline structure of their host layers. The faults that dominate the (pre-Upper Cretaceous) strata have a significant impact on the sills and caused their discontinuity, while the upper layers take the shape of anticlines. The dramatic climb of sill S-4 can be seen in N-S profiles. The majority of the interpreted sills are influenced by a large number of normal faults that strike in various directions and propagate vertically from the surface to the top of the basement. This indicates that the sedimentary sequences had already been deposited before the sills' intrusion and that the faults occurred more recently. The pre-Upper Cretaceous unit is the current geological host for sills S-1 to S-9, while sills S-10, S-11, and S-12 are hosted by the Cretaceous unit. Over sills S-1, S-2, and S-3, the deepest sills, the pre-Upper Cretaceous surface shows slight forced folding; this forced folding is also noticed above the right and left tips of sills S-8 and S-6, respectively, while the absence of these marks in the overlying sequences supports the idea that the aforementioned sills were emplaced during the early Upper Cretaceous period.

Keywords: Sirt Basin, Zallah Trough, igneous intrusions, seismic data

Procedia PDF Downloads 82
81 Teaching Kindness as Moral Virtue in Preschool Children: The Effectiveness of Picture-Storybook Reading and Hand-Puppet Storytelling

Authors: Rose Mini Agoes Salim, Shahnaz Safitri

Abstract:

The aim of this study is to test the effectiveness of teaching kindness to preschool children using several techniques. Kindness is a physical act or emotional support aimed at building or maintaining relationships with others. Kindness is known to be essential in the development of moral reasoning that distinguishes between good and bad. In this study, kindness is operationalized as several acts, including helping friends, comforting sad friends, inviting friends to play, protecting others, sharing, saying hello, saying thank you, encouraging others, and apologizing. Kindness is crucial to develop in preschool children because this is the time when children begin to interact with their social environment through play. Furthermore, preschool children's cognitive development means they begin to represent the world with words, which then allows them to interact with others. On the other hand, preschool children's egocentric thinking means they still need to learn to consider another person's perspective. In relation to social interaction, preschool children need to be stimulated and assisted by adults to be able to pay attention to others and act with kindness toward them. In teaching kindness to children, the quality of interaction between children and their significant others is the key factor, and preschool children are known to learn about kindness by imitating adults in their two-way interactions. Specifically, this study examines two teaching techniques that parents can use to teach kindness, namely picture-storybook reading and hand-puppet storytelling. These techniques were examined because both activities are easy to do and both provide the child with a model of behavior based on the character in the story. To examine the effectiveness of these techniques in teaching kindness, two studies were conducted. Study I involved 31 children aged 5-6 years and the picture-storybook reading technique, with the intervention consisting of reading eight picture books over eight days. In Study II, the hand-puppet storytelling technique was examined with 32 children aged 3-5 years. The effectiveness of the treatments was measured using an instrument in the form of nine colored cards describing kind behaviors. Data analysis using the Wilcoxon signed-rank test showed a significant difference in the average kindness score (p < 0.05) before and after the intervention. For daily observation, a 'kindness tree' and observation sheets filled out by the teacher were used. Two weeks after the interventions, the improvement in all measured kindness behaviors remained intact; the same result was also obtained from both the 'kindness tree' and the observation sheets.
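
The pre/post comparison reported above can be sketched with the Wilcoxon signed-rank test as follows; the per-child scores are invented placeholders on the same scale as the nine-card instrument, not the study's data.

```python
# Illustrative Wilcoxon signed-rank test on paired pre/post kindness scores.
from scipy.stats import wilcoxon

pre  = [3.8, 4.0, 3.6, 4.2, 3.9, 3.7, 4.1, 3.5, 4.0, 3.8, 3.9, 4.1, 3.6, 3.7]
post = [4.2, 4.3, 3.9, 4.4, 4.1, 4.0, 4.3, 3.9, 4.2, 4.0, 4.2, 4.3, 3.8, 4.0]

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon W = {stat}, p = {p:.4f}")   # p < 0.05 would mirror the reported improvement
```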

Keywords: kindness, moral teaching, storytelling, hand puppet

Procedia PDF Downloads 220
80 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19

Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello

Abstract:

Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous signalling systems; among them, the system of facial movements (facial mimicry) is worthy of attention. Basic emotion theorists have identified specific and universal patterns of facial expression related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that would distinguish one emotion from another. However, because of the COVID-19 pandemic, we have come up against the problem of the lower half of the face being covered by masks and therefore not observable. Facial-emotional behavior is a good starting point for understanding: (1) affective states (such as emotions), (2) cognitive activity (perplexity, concentration, boredom), (3) temperament and personality traits (hostility, sociability, shyness), (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders), and (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods for measuring facial movements resulting from the action of muscles; see, for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of the electrical "discharges" of contracting muscles (facial electromyography; EMG). However, the measurement system developed by Ekman and Friesen (2002), the Facial Action Coding System (FACS), is the most comprehensive, complete, and versatile. Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and the upper part of the face change depending on whether the subject is wearing a mask. We were able to identify specific alterations in the subjects' hand-movement patterns and their upper-face expressions while wearing masks compared to when not wearing them. We believe that finding correlations in how body language changes when facial expressions are impaired can provide a better understanding of the link between facial and bodily non-verbal language.

Keywords: facial action coding system, COVID-19, masks, facial analysis

Procedia PDF Downloads 40
79 The Incidence of Postoperative Atrial Fibrillation after Coronary Artery Bypass Grafting in Patients with Local and Diffuse Coronary Artery Disease

Authors: Kamil Ganaev, Elina Vlasova, Andrei Shiryaev, Renat Akchurin

Abstract:

De novo atrial fibrillation (AF) after coronary artery bypass grafting (CABG) is a common complication. To date, there are no data on the possible effect of diffuse coronary artery lesions on the incidence of postoperative AF. Methods. Patients operated on-pump under hypothermic conditions during the 2020 calendar year were studied. Inclusion criteria were isolated CABG and achievement of complete myocardial revascularization. Patients with a history of AF, moderate or severe valve dysfunction, thyroid hormonal pathology, or pre-existing congestive heart failure (CHF), as well as patients who developed perioperative complications (myocardial infarction, acute heart failure, massive blood loss) or died, were excluded. Thus, 227 patients were included; mean age 65±9 years; 69% were men. 89% of patients had three-vessel coronary disease; the remainder had two-vessel disease. Mean LV size: 3.9±0.3 cm; indexed LV volume: 29.4±5.3 mL/m². Two groups were considered: D (n=98), patients with diffuse coronary artery disease, and L (n=129), patients with local coronary artery disease. Clinical and demographic characteristics in the groups were comparable. Rhythm assessment: continuous bedside ECG monitoring for up to 5 days; ECG CT at 5-7 days after CABG; daily routine ECG registration. The follow-up period was the postoperative hospital stay. Results. The median follow-up period was 9 (7; 11) days. Postoperative AF (POAF) was detected in 61/227 (27%) patients: 34/98 (35%) in group D versus 27/129 (21%) in group L; p<0.05. Moreover, the revascularization indices in groups D and L (3.9±0.7 and 3.8±0.5, respectively) were equal, while the mean cardiopulmonary bypass (CPB) time (107±27 vs. 80±13 min) and the mean ischemic time (67±17 vs. 55±11 min) were significantly longer in group D (p<0.05). However, a separate analysis of these parameters in patients with and without AF did not reveal any significant differences in group D (CPB time 99±21.2 min, ischemic time 63±12.2 min) or in group L (CPB time 88±13.1 min, ischemic time 58.7±13.2 min). Conclusion. With diffuse coronary lesions, the incidence of AF in the hospital period after isolated CABG clearly increases. To better understand the role of severe coronary atherosclerosis in the development of POAF, it is necessary to distinguish the influence of organic features of the atrial and ventricular myocardium (as a consequence of chronic coronary disease) from the features of surgical correction in diffuse coronary lesions.
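
As a back-of-envelope check of the reported incidence difference (not necessarily the test the authors used), a chi-square test on the 2x2 table of postoperative AF counts reproduces a p-value below 0.05.

```python
# Chi-square test on the reported 2x2 table of POAF counts (groups D and L).
from scipy.stats import chi2_contingency

#               AF      no AF
table = [[34, 98 - 34],      # group D (diffuse lesions)
         [27, 129 - 27]]     # group L (local lesions)

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # p < 0.05, in line with the abstract
```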

Keywords: atrial fibrillation, diffuse coronary artery disease, coronary artery bypass grafting, local coronary artery disease

Procedia PDF Downloads 182
78 Learning Gains and Constraints Resulting from Haptic Sensory Feedback among Preschoolers' Engagement during Science Experimentation

Authors: Marios Papaevripidou, Yvoni Pavlou, Zacharias Zacharia

Abstract:

Embodied cognition and additional (touch) sensory channel theories indicate that physical manipulation is crucial to learning since it provides, among other things, touch sensory input, which is needed for constructing knowledge. Given these theories, the use of Physical Manipulatives (PM) becomes a prerequisite for learning. On the other hand, empirical research on learning with Virtual Manipulatives (VM) (e.g., simulations) has provided evidence showing that the use of PM, and thus haptic sensory input, is not always a prerequisite for learning. In order to investigate which means of experimentation, PM or VM, is required for enhancing student science learning at the kindergarten level, an empirical study was conducted to investigate the impact of haptic feedback on the conceptual understanding of pre-school students (n=44, mean age = 5.7) in three science domains: beam balance (D1), sinking/floating (D2), and springs (D3). The participants were equally divided into two groups according to the type of manipulatives used (PM: presence of haptic feedback; VM: absence of haptic feedback) during a semi-structured interview for each of the domains. All interviews followed the Predict-Observe-Explain (POE) strategy and consisted of three phases: initial evaluation, experimentation, and final evaluation. The data collected through the interviews were analyzed qualitatively (open coding for identifying students' ideas in each domain) and quantitatively (using non-parametric tests). Findings revealed that haptic feedback enabled students to distinguish heavier from lighter objects when holding them during experimentation. In D1, haptic feedback did not differentiate PM and VM students' conceptual understanding of the function of the beam as a means to compare the mass of objects. In D2, haptic feedback appeared to have a negative impact on PM students' learning: feeling the weight of an object strengthened PM students' misconception that heavier objects always sink, whereas the scientifically correct idea that the material of an object determines its sinking/floating behavior in water was significantly more prevalent among the VM students than among the PM students. In D3, the PM students significantly outperformed the VM students with regard to the idea that the heavier an object is, the more the spring will expand, indicating that the haptic input experienced by the PM students served as an advantage to their learning. These findings point to the fact that PM, and thus touch sensory input, might not always be a requirement for science learning and that VM could be considered, under certain circumstances, a viable means for experimentation.

Keywords: haptic feedback, physical and virtual manipulatives, pre-school science learning, science experimentation

Procedia PDF Downloads 104
77 Dinoflagellate Thecal Plates as a Green Cellulose Source

Authors: Alvin Chun Man Kwok, Wai Sun Chan, Wei Yuan, Joseph Tin Yum Wong

Abstract:

Cellulose, the most abundant biopolymer, is the major constituent of plant and dinoflagellate cell walls. Thecate dinoflagellates, in particular, are renowned for their remarkable capacity to synthesize intricate cellulosic thecal plates (CTPs). Unlike the extracellular two-dimensional structure of plant cell walls, these CTPs are three-dimensional and reside within the cellular structure itself. The deposition of CTPs occurs with remarkable precision, and their arrangement serves as a crucial taxonomic marker. It is noteworthy that these plates possess the hardness of wood, despite the absence of lignin. Partial and prolonged hydrolysis of CTPs results in the formation of uniform long bundles and low-dimensional, modular crystalline whiskers. This observation aligns with their consistent nanomechanical properties, suggesting a CTP-board structure. The unique composition and structural characteristics of CTPs distinguish them from other cellulose-based materials in the natural world. Spectroscopic studies using Raman and FTIR methods indicate a clearly low crystallinity index, with the OH shift becoming more distinct following SDS treatment. Birefringence imaging confirms the highly organized structure of CTPs, demonstrating varying degrees of anisotropy in different regions, including both seaward and cytosolic passages. The knockdown of a cellulose synthase enzyme in dinoflagellates resulted in severe malformation of CTPs and hindered the life-cycle transition. Unlike in certain other microalgal groups, these unique circum-spherical depositions of CTPs are not pre-fabricated and transported "to site" but are synthesized within alveolar sacs at the specific site. Our research is particularly focused on unraveling the mechanisms underlying the biodeposition of CTPs and exploring their potential biotechnological applications. Understanding the processes involved in CTP formation can pave the way for harnessing their unique properties for various practical applications. Dinoflagellates play a crucial role as major agents of algal blooms and are also known for producing anti-greenhouse sulfur compounds such as DMS/DMSP, highlighting the significance of CTPs as a carbon-neutral source of cellulose. Grant acknowledgement: Research in the laboratory is supported by GRF16104523 from the Research Grants Council to JTYW.

Keywords: cellulosic thecal plates, dinoflagellates, cellulose, cell wall

Procedia PDF Downloads 40
76 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not correspond to an allele of a potential contributor and is considered an artefact, presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make much greater use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the choice of the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
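A minimal generative sketch of the kind of model described here is given below: cluster labels are drawn from a Chinese restaurant process, and each cluster carries its own simple linear regression for the stutter ratio. The covariate (parent-allele length), the prior values and the concentration parameter alpha are all illustrative assumptions, not the authors' specification, and no inference step is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def crp_assignments(n, alpha):
    """Draw cluster labels for n observations from a Chinese restaurant process."""
    labels = [0]
    for i in range(1, n):
        counts = np.bincount(labels)
        # Join an existing cluster with probability proportional to its size,
        # or open a new cluster with probability proportional to alpha.
        probs = np.append(counts, alpha) / (i + alpha)
        labels.append(int(rng.choice(len(probs), p=probs)))
    return np.array(labels)

# Generative sketch of an infinite mixture of simple linear regressions:
# each CRP cluster gets its own intercept/slope relating stutter ratio to a
# covariate such as parent-allele length (hypothetical covariate and priors).
n, alpha = 200, 1.0
z = crp_assignments(n, alpha)
k = z.max() + 1
intercepts = rng.normal(0.05, 0.02, size=k)    # per-cluster intercept (assumed prior)
slopes = rng.normal(0.002, 0.001, size=k)      # per-cluster slope (assumed prior)
allele_length = rng.uniform(10, 30, size=n)
stutter_ratio = intercepts[z] + slopes[z] * allele_length + rng.normal(0, 0.01, size=n)

print(f"CRP produced {k} clusters for {n} simulated stutter-ratio observations")
```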

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 304
75 Intersubjectivity of Forensic Handwriting Analysis

Authors: Marta Nawrocka

Abstract:

In every legal proceeding in which expert evidence is taken, a major concern is the assessment of the evidential value of the expert reports. Judicial institutions rely heavily on expert reports when making decisions, because they usually do not possess 'special knowledge' of the relevant fields of science, which makes it impossible for them to verify the results presented. In handwriting examination, standards of analysis have been developed that unify the procedures experts use when comparing signs and constructing expert reports. However, the methods used by experts are usually of a qualitative nature. They rely on the expert's knowledge and experience and, in effect, leave a significant margin of discretion in the assessment. Moreover, the standards used by experts are still not very precise, and the process of reaching conclusions is poorly understood. These circumstances indicate that expert opinions in the field of handwriting analysis may, for many reasons, not be sufficiently reliable. It is assumed that this state of affairs has its source in the very low level of intersubjectivity of the measuring scales and analysis procedures that constitute this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of the results that different people obtain using the same method. The higher the level of intersubjectivity, the more reliable and credible the method can be considered. The aim of the research was to determine the degree of intersubjectivity of the methods used by experts in handwriting analysis. 30 experts took part in the study, and each of them received two signatures, with varying degrees of readability, for analysis. Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the characteristics found, and estimate the evidential value of the signature as a whole. The results were compared with each other using Krippendorff's alpha statistic, which numerically expresses the degree of agreement between the results (assessments) that different people obtain under the same conditions using the same method. Estimating the degree of agreement between the experts' results for each of these tasks allowed the degree of intersubjectivity of the studied method to be determined. The study showed that during the analysis the experts identified different signature characteristics and attributed different evidential value to them. In this respect, intersubjectivity turned out to be low. In addition, it turned out that the experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise. Thus, significant differences were noted in the language and nomenclature applied. On the other hand, the experts attributed a similar evidential value to the entire signature (the set of characteristics), which indicates that, in this respect, they were relatively consistent.
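As an illustration of the agreement measure used here, the sketch below computes Krippendorff's alpha for nominal ratings from scratch. The three 'experts' and their category labels are hypothetical, and a real study would normally use a vetted implementation rather than this minimal version.

```python
import numpy as np

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    ratings: 2-D array (raters x units); np.nan marks a missing rating.
    Returns 1.0 for perfect agreement, around 0.0 for chance-level agreement.
    """
    ratings = np.asarray(ratings, dtype=float)
    values = np.unique(ratings[~np.isnan(ratings)])
    # Coincidence matrix o[c, k]: pairable co-occurrences of values within units.
    o = np.zeros((len(values), len(values)))
    for u in range(ratings.shape[1]):
        unit = ratings[:, u]
        unit = unit[~np.isnan(unit)]
        m = len(unit)
        if m < 2:                      # a unit rated by fewer than two raters is unpairable
            continue
        counts = np.array([(unit == v).sum() for v in values])
        pairs = np.outer(counts, counts) - np.diag(counts)
        o += pairs / (m - 1)
    n_c = o.sum(axis=0)
    n = n_c.sum()
    d_obs = o.sum() - np.trace(o)                                    # observed disagreement
    d_exp = (np.outer(n_c, n_c).sum() - (n_c ** 2).sum()) / (n - 1)  # expected disagreement
    return 1.0 - d_obs / d_exp

# Hypothetical example: three experts assign one of three category labels
# to six signature characteristics.
ratings = [[1, 2, 3, 1, 2, 2],
           [1, 2, 3, 2, 2, 2],
           [1, 3, 3, 1, 2, 1]]
print(f"alpha = {krippendorff_alpha_nominal(ratings):.2f}")
```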

Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods

Procedia PDF Downloads 123
74 Decision-Making in the Internationalization Process of Small and Medium Sized Companies: Experience from Managers in a Small Economy

Authors: Gunnar Oskarsson, Gudjon Helgi Egilsson

Abstract:

Due to globalization, small and medium-sized enterprises (SMEs) increasingly offer their products and services in foreign markets. The main reasons are either to compensate for a decreased market share in their home market or to exploit opportunities in foreign markets, which are becoming less distant and more accessible than before. International markets are particularly important for companies located in a small economy and offering specialized products. Although more accessible, entering international markets is both expensive and difficult. In order to select the most appropriate markets, it is therefore important to gain insight into the factors that have an impact on success or potential failure. Although there has been a reasonable volume of research into the theory of internationalization, there is still a need to gain further understanding of the decision-making process of SMEs in small economies and of the most important characteristics that distinguish success from failure. The main objective of this research is to enhance knowledge of the internationalization of SMEs, including the drivers of the decision to internationalize and the most important factors contributing to success in internationalization activities. A qualitative approach was found to be most appropriate for this kind of research, with the objective of gaining a deeper understanding and discovering factors that affect a company’s decision-making and potential success. In-depth interviews were conducted with 14 companies in different industries located in Iceland, a country extensively dependent on export revenues. The interviews revealed several drivers of internationalization; not surprisingly, the most frequently mentioned source of motivation was that the local market is inadequate to sustain the operation. Good networking relationships were seen as a particular priority for potential success. The search for new markets was mainly carried out through the internet, although sales exhibitions and sales trips were also considered important. When it comes to the final decision as to whether a market should be considered for further analysis, the economy, labor costs, the legal environment and cultural barriers were the factors most commonly weighed. The ultimate key to successful internationalization, however, is largely a coordinated and experienced management team. The main contribution of this research is to offer insight into the factors affecting decision-making in the internationalization process of SMEs, based on the opinions and experience of managers of SMEs in a small economy.

Keywords: internationalization, success factors, small and medium-sized enterprises (SMEs), drivers, decision making

Procedia PDF Downloads 209
73 Nondestructive Electrochemical Testing Method for Prestressed Concrete Structures

Authors: Tomoko Fukuyama, Osamu Senbu

Abstract:

Prestressed concrete is widely used in infrastructure such as roads and bridges. However, poor grout filling and PC steel corrosion are currently major issues in prestressed concrete structures. One of the problems with nondestructive corrosion detection of PC steel is the plastic pipe that covers the PC steel. The insulating property of the pipe makes nondestructive diagnosis difficult; a practical technology to detect these defects is therefore necessary for the maintenance of infrastructure. The goal of the research is the development of an electrochemical technique that enables internal defects to be detected nondestructively from the surface of prestressed concrete. Ideally, the measurements should be conducted from the surface of structural members so that the diagnosis is nondestructive. In the present experiment, a prestressed concrete member was simplified as a layered specimen to simulate the current path between an input and an output electrode on the member surface. Specimens layered from mortar and the constituent materials of prestressed concrete (steel, polyethylene, stainless steel, or galvanized steel plates) were subjected to alternating current impedance measurement. The magnitude of the applied voltage was 0.01 V or 1 V, and the frequency range was from 10⁶ Hz to 10⁻² Hz. The frequency spectra of impedance, which reflect charge reactions activated by the electric field, were measured to clarify the effects of the material configurations and properties. In the civil engineering field, the Nyquist diagram is a popular way to analyze impedance, and the shape of the plot gives a good picture of electrical relaxation. However, it is not well suited to showing the influence of the measurement frequency, which is the reciprocal of the reaction time. Hence, the Bode diagram is also used to describe charge reactions in the present paper. The experimental results suggest that the alternating current impedance method is applicable to measurements on insulating materials and, eventually, to prestressed concrete diagnosis. At the same time, the frequency spectra of impedance reflect differences in material configuration. This is because charge mobility reflects the variety of substances, and the measuring frequency of the electric field determines the migration length of the charges under its influence. However, the technique could not distinguish differences in material thickness, which suggests that identifying the size of an air void or a layer of corrosion product in prestressed concrete diagnosis will be difficult with this technique.
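To illustrate the two representations discussed above, the sketch below computes the impedance of a simple equivalent circuit (a series resistance with a parallel RC element standing in for an insulating layer) over the 10⁶ Hz to 10⁻² Hz range and plots it as both a Nyquist and a Bode diagram. The component values are illustrative assumptions, not fitted parameters from the experiment.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simple equivalent circuit: series resistance R_s plus a parallel (R_p, C)
# element standing in for an insulating layer. Values are illustrative only.
R_s, R_p, C = 50.0, 5e3, 1e-7            # ohm, ohm, farad (assumed)
f = np.logspace(6, -2, 200)              # 1e6 Hz down to 1e-2 Hz, as in the measurement range
omega = 2 * np.pi * f
Z = R_s + R_p / (1 + 1j * omega * R_p * C)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(Z.real, -Z.imag)                # Nyquist: -Im(Z) vs Re(Z), no explicit frequency axis
ax1.set_xlabel("Re(Z) / ohm"); ax1.set_ylabel("-Im(Z) / ohm"); ax1.set_title("Nyquist")
ax2.loglog(f, np.abs(Z))                 # Bode magnitude: keeps the frequency (reaction-time) axis explicit
ax2.set_xlabel("frequency / Hz"); ax2.set_ylabel("|Z| / ohm"); ax2.set_title("Bode (magnitude)")
plt.tight_layout()
plt.show()
```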

Keywords: capacitance, conductance, prestressed concrete, susceptance

Procedia PDF Downloads 380
72 Sequential Mixed Methods Study to Examine the Potentiality of Blackboard-Based Collaborative Writing as a Solution Tool for Saudi Undergraduate EFL Students’ Writing Difficulties

Authors: Norah Alosayl

Abstract:

English is considered the most important foreign language in the Kingdom of Saudi Arabia (KSA) because of its usefulness as a global language compared to Arabic. As students’ desire to improve their English language skills has grown, writing has been identified as the most difficult problem for Saudi students in their language learning. Although English is taught in Saudi Arabia beginning in the seventh grade, many students have problems at the university level, especially in writing, due to a gap between what is taught in secondary and high schools and university expectations: pupils generally study English at school from a single book with few vocabulary and grammar exercises, and there are no specific writing lessons. Moreover, from personal teaching experience at King Saud bin Abdulaziz University, students face real problems with their writing. This paper revolves around Blackboard-based collaborative writing to help first-year undergraduate Saudi EFL students, enrolled in two sections of ENGL 101 in the first semester of 2021 at King Saud bin Abdulaziz University, practice in small groups the skill they find most difficult in their writing. A sequential mixed methods design is therefore well suited. The first phase of the study aims to identify the skill students find most difficult, based on an official writing exam evaluated by their teachers using the official rubric of King Saud bin Abdulaziz University. In the second phase, the study will investigate the benefits of social interaction for the process of learning to write. Students will be given five collaborative writing tasks via the discussion feature on Blackboard to practice the skill they found difficult; the tasks will be designed on the basis of social constructivist theory and pedagogic frameworks. Interaction will take place between peers and their teachers. The frequency of students’ participation and the quality of their interaction will be observed through manual counting and screenshots. This will help the researcher understand how actively students work on the tasks, through the amount of their participation, and will also distinguish the type of interaction (on task, about task, or off task). Semi-structured interviews will be conducted with students to understand their perceptions of the Blackboard-based collaborative writing tasks, and questionnaires will be distributed to identify students’ attitudes toward the tasks.

Keywords: writing difficulties, blackboard-based collaborative writing, process of learning writing, interaction, participation

Procedia PDF Downloads 162
71 IFN-γ and IL-2 Assess the Therapeutic Response in Anti-Tuberculosis Patients at Jamot Hospital Yaounde, Cameroon

Authors: Alexandra Emmanuelle Membangbi, Jacky Njiki Bikoï, Esther Del-florence Moni Ndedi, Marie Joseph Nkodo Mindimi, Donatien Serge Mbaga, Elsa Nguiffo Makue, André Chris Mikangue Mbongue, Martha Mesembe, George Ikomey Mondinde, Eric Walter Perfura-yone, Sara Honorine Riwom Essama

Abstract:

Background: Tuberculosis (TB) is one of the most lethal infectious diseases worldwide. In recent years, interferon-γ (IFN-γ) release assays (IGRAs) have been established as routine tests for diagnosing TB infection. However, assessment of the IFN-γ produced fails to distinguish active TB (ATB) from latent TB infection (LTBI), especially in TB-epidemic areas. In addition to IFN-γ, interleukin-2 (IL-2), another cytokine secreted by activated T cells, is also involved in the immune response against Mycobacterium tuberculosis. The aim of the study was to assess the capacity of IFN-γ and IL-2 to evaluate the therapeutic response of patients on anti-tuberculosis treatment. Material and Methods: We conducted a cross-sectional study in the Pneumonology Departments of the Jamot Hospital in Yaoundé between May and August 2021. After informed consent was signed, sociodemographic data were recorded and 5 mL of blood was collected from the crook of each participant's elbow. Sixty-one subjects were selected (n=61) and divided into 4 groups as follows: group 1, resistant tuberculosis (n=13); group 2, active tuberculosis (n=19); group 3, cured tuberculosis (n=16); and group 4, presumed healthy persons (n=13). The cytokines of interest were determined using an indirect enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's recommendations. P-values < 0.05 were interpreted as statistically significant. All statistical calculations were performed using SPSS version 22.0. Results: The results showed that men were more often infected, 14/61 (31.8%), with a high presence in the active and resistant TB groups. The mean age was 41.3±13.1 years with a 95% CI of [38.2-44.7]; the age group with the highest infection rate was 31 to 40 years. The mean IL-2 and IFN-γ levels were, respectively, 327.6±160.6 pg/mL and 26.6±13.0 pg/mL in active tuberculosis patients, 251.1±30.9 pg/mL and 21.4±9.2 pg/mL in patients with resistant tuberculosis, 149.3±93.3 pg/mL and 17.9±9.4 pg/mL in cured patients, and 15.1±8.4 pg/mL and 5.3±2.6 pg/mL in participants presumed healthy (p < 0.0001). Significant differences in IFN-γ and IL-2 levels were observed between the groups. Conclusion: Monitoring the serum levels of IFN-γ and IL-2 would be useful for evaluating the therapeutic response of anti-tuberculosis patients, particularly when the two cytokines are assessed in combination, which could improve the accuracy of routine examinations.
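A minimal sketch of a between-group comparison of cytokine levels is shown below. The values are simulated around the reported group means purely for illustration, and the Kruskal-Wallis test is used here as a generic non-parametric stand-in rather than the exact SPSS procedure applied in the study.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

# Simulated IL-2 concentrations (pg/mL) drawn around the group means and SDs
# reported in the abstract; group sizes match the abstract (19, 13, 16, 13).
active    = rng.normal(327.6, 160.6, 19)
resistant = rng.normal(251.1, 30.9, 13)
cured     = rng.normal(149.3, 93.3, 16)
healthy   = rng.normal(15.1, 8.4, 13)

h, p = kruskal(active, resistant, cured, healthy)
print(f"Kruskal-Wallis H = {h:.1f}, p = {p:.2e}")
```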

Keywords: antibiotic therapy, interferon gamma, interleukin 2, tuberculosis

Procedia PDF Downloads 70
70 Refractory Cardiac Arrest: Do We Go beyond, Do We Increase the Organ Donation Pool or Both?

Authors: Ortega Ivan, De La Plaza Edurne

Abstract:

Background: Spain and other European countries have implemented uncontrolled Donation after Cardiac Death (uDCD) programs. After 15 years of experience in Spain, many things have changed. Recent evidence and technical breakthroughs in resuscitation are relevant for uDCD programs and raise some ethical concerns related to these protocols. Aim: To rethink current uDCD programs in the light of recent evidence on the therapeutic procedures available to victims of out-of-hospital cardiac arrest (OHCA), and to address the following question: what is the current standard of treatment owed to victims of OHCA before including them in a uDCD protocol? Materials and Methods: Review of the scientific and ethical literature related to both uDCD programs and innovative resuscitation techniques. Results: 1) The standard of treatment received and the chances of survival of victims of OHCA depend on whether they are classified as Non-Heart-Beating Patients (NHBP) or Non-Heart-Beating Donors (NHBD). 2) Recent studies suggest that NHBPs are likely to survive, with good quality of life, if one or more of the following interventions are performed while ongoing CPR, guided by the suspected or known cause of OHCA, is maintained: a) direct access to a 24-hour catheterization laboratory (Cath Lab-H24) and/or to extra-corporeal life support (ECLS); b) transfer under induced hypothermia from the Emergency Medical Service (EMS) to the ICU; c) thrombolysis treatment; d) mobile extra-corporeal membrane oxygenation (mini ECMO) instituted as a bridge to ICU ECLS devices. 3) Victims of OHCA who cannot benefit from any of these therapies should be considered as NHBDs. Conclusion: Current uDCD protocols do not take into account recent improvements in resuscitation and need to be adapted. Operational criteria to distinguish NHBDs from NHBPs should seek a balance between the technical imperative (to do whatever is possible), considerations about expected survival with quality of life, and distributive justice (costs/benefits). Uncontrolled DCD protocols can be performed in a way that does not hamper the legitimate interests of patients, potential organ donors, their families, the organ recipients, or the health professionals involved in these processes. Families of NHBDs should receive information that conforms to the ethical principles of respect for autonomy and transparency.

Keywords: uncontrolled donation after cardiac death, resuscitation, refractory cardiac arrest, out-of-hospital cardiac arrest, ethics

Procedia PDF Downloads 206
69 Dividend Policy in Family Controlling Firms from a Governance Perspective: Empirical Evidence in Thailand

Authors: Tanapond S.

Abstract:

Typically, most controlling firms are family firms, which are widespread and important for economic growth, particularly in the Asia-Pacific region. The unique characteristics of controlling families tend to play an important role in determining corporate policies such as dividend policy. Given the complexity of the family business phenomenon, the empirical evidence on how the families behind business groups influence dividend policy has been unclear in Asian markets, where cross-shareholdings and pyramidal structures are prevalent. Dividend policy, as an important determinant of firm value, can also be used to examine the effect of the controlling families behind business groups on strategic decision-making from a governance and agency perspective. The purpose of this paper is to investigate the impact of ownership structure and concentration, which are influential internal corporate governance mechanisms in family firms, on dividend decision-making. Using panel data and constructing a unique dataset of family ownership and control from hand-collected information on the nonfinancial companies listed on the Stock Exchange of Thailand (SET) between 2000 and 2015, the study finds that family firms with large stakes distribute higher dividends than family firms with small stakes. Family ownership can mitigate the agency problems and the expropriation of minority investors in family firms. To provide insight into the distinction between ownership rights and control rights, this study examines specific firm characteristics, including the degree of concentration of controlling shareholders, by classifying family ownership into different categories. The results show that controlling families with a large deviation between voting rights and cash flow rights have more power and are associated with lower dividend payments. These situations become worse when the second blockholders are also families. To the best of the researcher's knowledge, this study is the first to examine the association between family firm characteristics and dividend policy from a corporate governance perspective in Thailand, an environment with weak investor protection and high ownership concentration. This research also underscores the importance of family control, especially in a context in which family business groups and pyramidal structures are prevalent. As a result, academics and policy makers can develop markets and corporate policies that mitigate agency problems.
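A minimal sketch of the kind of panel regression this analysis implies is shown below. The variable names, the tiny example data and the pooled OLS-with-year-dummies specification are all illustrative assumptions, not the study's actual model or dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel: dividend payout regressed on the family's
# cash-flow stake and the voting/cash-flow wedge, with year dummies.
df = pd.DataFrame({
    "payout":       [0.32, 0.41, 0.18, 0.25, 0.38, 0.12, 0.29, 0.22],
    "family_stake": [0.55, 0.60, 0.30, 0.35, 0.58, 0.25, 0.45, 0.33],
    "wedge":        [0.05, 0.04, 0.20, 0.18, 0.03, 0.25, 0.10, 0.15],
    "year":         [2014, 2015, 2014, 2015, 2014, 2015, 2014, 2015],
})

# Pooled OLS with year fixed effects; a larger stake is expected to raise payout,
# a larger wedge (control in excess of ownership) to lower it.
model = smf.ols("payout ~ family_stake + wedge + C(year)", data=df).fit()
print(model.params)
```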

Keywords: agency theory, dividend policy, family control, Thailand

Procedia PDF Downloads 245