Search results for: universal testing machine
631 Jarcho-Levin Syndrome: A Case Report
Authors: Atitallah Sofien, Bouyahia Olfa, Romdhani Meriam, Missaoui Nada, Ben Rabeh Rania, Yahyaoui Salem, Mazigh Sonia, Boukthir Samir
Abstract:
Introduction: Spondylothoracic dysostosis, also known as Jarcho-Levin syndrome, is defined by a shortened neck and thorax, a protruding abdomen, inguinal and umbilical hernias, atypical spinal structure and rib fusion leading to restricted chest movement or difficulty in breathing, along with urinary tract abnormalities and potentially severe scoliosis. Aim: We report the case of a patient diagnosed with Jarcho-Levin syndrome, aiming to detail the range of abnormalities observed in this syndrome, the complications encountered, and the therapeutic approaches employed. Results: A three-month-old male infant, born of a consanguineous marriage and delivered at full term by cesarean section, was admitted to the pediatric department for severe acute bronchiolitis. In his prenatal history, morphological ultrasound revealed macrosomia, a shortened spine, irregular vertebrae with thickened skin, a normal fetal cardiac ultrasound, and absence of the right kidney. His perinatal history included respiratory distress requiring ventilatory support for five days. On physical examination, he had stunted growth, scoliosis, a short neck and trunk, upper limbs longer than the lower limbs, varus equinus of the right foot, a neural tube defect, a low hairline, and low-set ears. Spondylothoracic dysostosis was suspected, leading to further investigations: a normal transfontanellar ultrasound, a spinal cord ultrasound revealing a lipomyelocele-type closed dysraphism with a low-attached cord, an abdominal ultrasound indicating a single left kidney, and a cardiac ultrasound identifying Kommerell syndrome. Due to a lack of resources, genetic testing could not be performed, and the diagnosis was based on clinical criteria. Conclusion: Jarcho-Levin syndrome carries a mortality rate of about 50%, primarily due to respiratory complications associated with thoracic insufficiency syndrome.
Other complications, such as cardiac and neural tube defects, can also lead to premature mortality. Therefore, early diagnosis and comprehensive treatment involving various specialists are essential.
Keywords: Jarcho-Levin syndrome, congenital disorder, scoliosis, spondylothoracic dysostosis, neural tube defect
630 Production of Bio-Composites from Cocoa Pod Husk for Use in Packaging Materials
Authors: L. Kanoksak, N. Sukanya, L. Napatsorn, T. Siriporn
Abstract:
A growing population and rising demand for packaging are driving up the use of natural resources as raw materials in the pulp and paper industry, and the long-term environmental effects are disrupting people's way of life all across the planet. It is therefore necessary to find pulp sources that can replace wood pulp. Various other plants or plant parts can be employed as substitute raw materials; for example, pulp and paper made from agricultural residues can be used in place of wood pulp. In this study, cocoa pod husks, an agricultural residue of the cocoa and chocolate industries, were used to develop composite materials to replace wood pulp in packaging materials, and the resulting paper was coated with polybutylene adipate-co-terephthalate (PBAT). Fresh cocoa pod husks were selected, cleaned, reduced in size, and dried, and their morphology and elemental composition were studied. To evaluate the mechanical and physical properties, the dried husks were pulped using the soda-pulping process. After the best formulation was selected, paper with a PBAT bioplastic coating was produced on a paper-forming machine, and its physical and mechanical properties were studied. Field Emission Scanning Electron Microscopy with Energy Dispersive X-Ray Spectroscopy (FESEM/EDS) revealed the main components of the dried cocoa pod husks; no porous structure was observed, and the fibers were firmly bound, making them suitable as a raw material for pulp manufacturing. Dried cocoa pod husks contain carbon (C) and oxygen (O) as major elements, while magnesium (Mg), potassium (K), and calcium (Ca) are minor elements present at very low levels. Among the soda-pulping formulations, the SAQ5 formula gave the most suitable pulp yield, moisture content, and water drainage.
To achieve the basis weight specified by the TAPPI T205 sp-02 standard, cocoa pod husk pulp was mixed with modified starch. The paper was then coated with PBAT bioplastic film produced from bioplastic resin by the blown-film extrusion technique. Contact-angle measurements, together with the dispersion and polar components of surface energy, showed that the coated paper is an effective hydrophobic material for rigid packaging applications.
Keywords: cocoa pod husks, agricultural residue, composite material, rigid packaging
629 An Analysis of Gamification in the Post-Secondary Classroom
Authors: F. Saccucci
Abstract:
Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought, as well as an extension of a previous study in which in-class testing proved, with the use of a paired t-test, that gamification significantly improved the students' understanding of subject material. Gamification in the classroom can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required little preparation time from the faculty member, and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly assigned to groups of four. Qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with, and get to know other students. 2. Students enjoyed the gamification process, given its sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of the grade calculation; it was therefore an opportunity to participate in a low-risk activity through which students could self-assess their understanding of the subject material. 4. In the view of the students, content knowledge increased after the gamification process.
These qualitative advantages contribute to the argument that gamification should be attempted in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of the respondents believed twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam would have been more useful. A follow-up study hopes to determine whether the scheduling of the gamification had any correlation with the percentage of students not wanting to be engaged in the process. The follow-up study also hopes to determine at what incremental level of time invested in classroom gamification no material incremental benefit to the student is produced, as well as whether any correlation exists between respondents preferring not to have it at the end of the semester and students not believing the gamification process added to their curricular knowledge.
Keywords: gamification, inexpensive, non-quantitative advantages, post-secondary
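The paired t-test referenced in the abstract above can be sketched in a few lines. The quiz scores below are invented for illustration only; they are not data from the study.

```python
import math

def paired_t(pre, post):
    """Paired t statistic for before/after scores on the same students."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of the differences
    return mean / math.sqrt(var / n), n - 1  # (t, degrees of freedom)

# Hypothetical quiz scores out of 10, before and after the game
pre = [4, 5, 6, 5, 7, 4, 6, 5]
post = [6, 7, 7, 6, 8, 6, 7, 7]
t, df = paired_t(pre, post)  # a large positive t suggests a real improvement
```

A t value well above the critical value for the chosen significance level and df would support the claim that understanding improved after the game.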
628 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a strong ability to discriminate between various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present metagenome2vec, an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple-instance-learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
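Step (i) of the pipeline described above, building a k-mer vocabulary from raw reads, can be sketched as follows. The reads and the k value are toy assumptions, not the settings used by metagenome2vec.

```python
def kmers(read, k=4):
    """Decompose a DNA read into overlapping k-mers (the 'words' of step i)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def build_vocab(reads, k=4):
    """Assign each distinct k-mer an integer id, ready for an embedding lookup."""
    vocab = {}
    for read in reads:
        for km in kmers(read, k):
            vocab.setdefault(km, len(vocab))
    return vocab

reads = ["ACGTACGT", "TTACGTAA"]  # toy reads; real fastq files hold millions
vocab = build_vocab(reads)
ids = [vocab[km] for km in kmers(reads[0])]  # integer sequence fed to an embedding layer
```

In the full approach these integer sequences would feed the read-embedding and multiple-instance-learning stages; only the vocabulary step is sketched here.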
627 One Year Follow up of Head and Neck Paragangliomas: A Single Center Experience
Authors: Cecilia Moreira, Rita Paiva, Daniela Macedo, Leonor Ribeiro, Isabel Fernandes, Luis Costa
Abstract:
Background: Head and neck paragangliomas are a rare group of tumors with a large spectrum of clinical manifestations. The approach to evaluating and treating these lesions has evolved over recent years. Surgery was the standard approach for these patients, but new imaging and radiation-therapy techniques have changed that paradigm. Despite advances in treatment, the growth potential and clinical outcome of individual cases remain largely unpredictable. Objectives: To characterize our institutional experience with the clinical management of these tumors. Methods: This was a cross-sectional study of patients with paragangliomas of the head, neck, and cranial base followed at our institution between 01 January and 31 December 2017. Data on tumor location, catecholamine levels, imaging modalities employed in the diagnostic workup, treatment modality, tumor control and recurrence, complications of treatment, and hereditary status were collected and summarized. Results: A total of four female patients were followed between 01 January and 31 December 2017 at our institution. The mean age of our cohort was 53 (± 16.1) years. The primary locations were the tympanojugular region (n=2, 50%) and the carotid body (n=2, 50%); only one of the carotid body tumors presented pulmonary metastasis at the time of diagnosis. None of the lesions was catecholamine-secreting. Two patients underwent genetic testing, with no mutations identified. The initial clinical presentation was variable; decreased visual acuity and headache were symptoms present in all patients. In one case, loss of all teeth of the lower jaw was the presenting symptom. Observation with serial imaging, surgical extirpation, radiation, and stereotactic radiosurgery were employed as treatment approaches according to the anatomical location and resectability of the lesions.
Among post-therapeutic sequelae, persistent tinnitus and disabling pain stand out; one patient presented with glossopharyngeal neuralgia. Currently, all patients are under regular surveillance, with a median follow-up of 10 months. Conclusion: Ultimately, the clinical management of these tumors remains challenging owing to the heterogeneity of clinical presentation, the existence of multiple treatment alternatives, and the potential for serious detriment to critical functions and, consequently, to patients' quality of life.
Keywords: clinical outcomes, head and neck, management, paragangliomas
626 Testing a Motivational Model of Physical Education on Contextual Outcomes and Total Moderate to Vigorous Physical Activity of Middle School Students
Authors: Arto Grasten
Abstract:
Given the rising trend in obesity in children and youth, the age-related decline in moderate-to-vigorous-intensity physical activity (MVPA) in several Western, African, and Asian countries, and the limited evidence on behavioral, affective, and cognitive outcomes in physical education, it is important to clarify the motivational processes behind total MVPA engagement in physical education classes. The present study examined the full sequence of the Hierarchical Model of Motivation in physical education, including motivational climate, basic psychological needs, intrinsic motivation, contextual behavior, affect, cognition, and total MVPA, along with the associated links to body mass index (BMI) and gender differences. The cross-sectional data comprised self-reports and objective assessments of 770 middle school students (Mage = 13.99 ± 0.81 years, 52% girls) in North-East Finland. In order to test the associations between motivational climate, psychological needs, intrinsic motivation, cognition, behavior, affect, and total MVPA, a path model was implemented. Indirect effects between motivational climate and cognition, behavior, affect, and total MVPA were tested by setting basic needs and intrinsic motivation as mediators in the model. The findings showed that the direct and indirect paths for girls and boys were associated with different contextual outcomes, and girls' indirect paths were not related to total MVPA. Specifically, task-involving climate, mediated by physical competence and intrinsic motivation, was related to enjoyment, importance, and graded assessments among girls, whereas among boys task-involving climate was associated with enjoyment and importance via competence and autonomy, and with total MVPA via autonomy, intrinsic motivation, and importance. Physical education assessments appeared to be essential in motivating students to participate in greater total MVPA. BMI was negatively linked with competence and relatedness only among girls.
Although the current and previous empirical findings support task-involving teaching methods in physical education, in some cases an ego-involving climate should not be totally avoided. This may indicate that girls and boys perceive physical education classes in different ways. Therefore, both task- and ego-involving teaching practices can be useful ways of driving behavior in physical education classes.
Keywords: achievement goal theory, assessment, enjoyment, hierarchical model of motivation, physical activity, self-determination theory
625 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data
Authors: M. Kharrat, G. Moreau, Z. Aboura
Abstract:
The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loading. The acoustic emission (AE) technique is a well-established tool for discriminating between damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation set-up was deployed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, digital image correlation, acoustic emission, and micro-tomography. In this study, a multi-variable AE data analysis approach was developed to discriminate between the signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is used to construct a supervised classifier for automatic recognition of AE signals. Several materials with different constituents were tested under various loading conditions in order to feed and enrich the learning database.
The methodology presented in this work was useful for refining the damage threshold for the new-generation materials. The damage mechanisms around this threshold were highlighted, and the obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of loading, the identified classes are reproducible and only slightly perturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition
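As a rough illustration of the unsupervised clustering step described above, here is a minimal two-cluster k-means over invented AE features (amplitude, peak frequency). The study's actual features, cluster count, and clustering algorithm are not specified in the abstract, so everything below is an assumption.

```python
def kmeans_2(points, iters=20):
    """Tiny two-cluster k-means for AE features; initialised deterministically
    from the first and last points so this toy run is reproducible."""
    centers = [points[0], points[-1]]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)  # assign to the nearest center
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical (amplitude in dB, peak frequency in kHz) AE features:
# low-amplitude/low-frequency hits vs. high-amplitude/high-frequency hits
feats = [(45, 120), (47, 118), (80, 300), (44, 122), (82, 305), (79, 298)]
centers, clusters = kmeans_2(feats)
```

In the workflow described above, the resulting clusters would then be labeled with the help of the multi-instrumentation data before training the supervised classifier.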
624 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions
Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri
Abstract:
The increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus are taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus is illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse, and it will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. Special attention is dedicated to the structural mark-up, part-of-speech annotation, and tagging of discursive traits, which are the innovative parts of the project, the result of a thorough study to find the solution best suited to the analytical needs of the data. Several aspects are addressed, with special attention to the tagging of the speakers' identity, the communicative events, and anthropophagic. Prominence is given to the annotation of question/answer exchanges, to investigate the interlocutors' choices and how such choices impact communication.
Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information, as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements is given using the InterDiplo-Covid19 pilot corpus. Our preliminary analysis highlights the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, the tagging system, and the discursive-pragmatic annotation to be included via Oxygen.
Keywords: spoken corpus, diplomats' interviews, tagging system, discursive-pragmatic annotation, English linguistics
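A sketch of the kind of structural mark-up such a corpus could use for one question/answer exchange follows. The tag and attribute names below are hypothetical illustrations, since the InterDiplo tag set itself is not reproduced in the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical mark-up of one question/answer exchange; the element names,
# attributes, and text are invented for illustration only.
exchange = ET.Element("exchange", n="1")
q = ET.SubElement(exchange, "question", speaker="journalist", qtype="wh")
q.text = "What is the current state of the negotiations?"
a = ET.SubElement(exchange, "answer", speaker="diplomat", move="direct")
a.text = "We expect a joint statement by the end of the week."
xml_string = ET.tostring(exchange, encoding="unicode")  # serialised annotation
```

Pairing each `question` with its `answer` inside a single `exchange` element is what makes the automated question/answer analysis described above possible.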
623 Progressive Damage Analysis of Mechanically Connected Composites
Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan
Abstract:
When performing verification analyses under the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies around the world perform these tests in accordance with aviation standards, but the test costs are very high. In addition, because of the necessity of producing coupons, the high cost of coupon materials, and the long test times, it is desirable to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes integrated into the aircraft. Glass-fiber-reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain an accurate bearing strength value. The finite element analysis was carried out with ANSYS. Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which is stated in the literature to give the most realistic approach. There are various theories within this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses.
When the experimental and numerical results are compared, the initial damage and resulting force-drop points, the maximum damage load values, and the bearing strength value are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters on the bearing strength of the composite structure were investigated, such as pre-stress, the use of bushings, the ratio of the distance between the bolt-hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions.
Keywords: Puck, finite element, bolted joint, composite
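The property-reduction scheme mentioned above can be illustrated with a toy stiffness-knockdown loop. One common reading of a 10% reduction coefficient is that a failed property is degraded to 10% of its current value; that reading, the plain max-stress failure check, and all the numbers below are stand-ins, not the Puck criterion or the actual coupon data.

```python
def knock_down(modulus, stresses, strength, factor=0.10):
    """Progressive-failure sketch: each time the ply stress reaches its strength,
    degrade the modulus to `factor` times its current value. A plain max-stress
    check stands in here for the Puck criterion used in the actual analyses."""
    for s in stresses:
        if s >= strength:
            modulus *= factor  # hypothetical degradation step
    return modulus

# Toy numbers (MPa): 50 GPa ply modulus, 500 MPa strength, one load step exceeds it
E = knock_down(50_000.0, [350.0, 480.0, 620.0], 500.0)
```

In a real progressive failure analysis this degradation would be applied element by element inside the finite element solver, with the load reapplied after each knockdown.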
622 Virtual Metrology for Copper Clad Laminate Manufacturing
Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho
Abstract:
In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict properties of a wafer based on machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting process quality, which allow the process to be improved in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of a printed circuit board (PCB), which is used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, applies resin to glass cloth, heats it in a drying oven, and produces the prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring a heavy cost in terms of time and money, which makes the process a good candidate for VM application. We developed prediction models for the three quality factors T/W, M/V, and G/T, respectively, using process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with sufficiently high accuracy. They also provided information on "important" predictor variables, some of which the process engineers had already been aware of and the rest of which they had not. The engineers were excited to find the new insights the models revealed and set out to analyze them further to derive process control implications.
T/W, however, could not be predicted with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; an effort therefore has to be made to find other factors that are not currently monitored, in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models allow one to reduce the cost associated with actual metrology, as well as to reveal insights into the factors affecting the important quality factors and into the limits of our current understanding of the treating process.
Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology
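As a minimal sketch of the virtual-metrology idea, here is an ordinary-least-squares fit predicting M/V from one hypothetical process variable. The real models used many variables, variable selection, and richer learning algorithms; the oven-temperature readings and M/V values below are invented.

```python
def fit_line(x, y):
    """Ordinary least squares y = a*x + b; a stand-in for the variable-selection
    plus learning pipeline described above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical oven-temperature (degC) vs. minimum-viscosity (M/V) history
temp = [160.0, 165.0, 170.0, 175.0, 180.0]
mv = [520.0, 500.0, 485.0, 470.0, 450.0]
a, b = fit_line(temp, mv)
predicted_mv = a * 172.0 + b  # virtual measurement for a new run, no lab test needed
```

The prediction replaces a manual inspection: the model turns already-logged sensor data into an estimated quality value at essentially no cost.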
621 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique
Authors: Pavana Basavakumar, Devadas Bhat
Abstract:
Diabetes is a disease that often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences that could lead to the death of the patient. Conventional invasive tests for the screening of diabetes are mostly painful, time-consuming, and expensive. There is also a risk of infection involved; it is therefore essential to develop non-invasive methods to screen for diabetes and estimate the level of blood glucose. Extensive research is ongoing from this perspective, involving various techniques that explore optical, electrical, chemical, and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for the screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition based on the measured impedance. The developed prototype passes a negligible constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz–4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects with permission from the institutional ethics committee; along with impedance, the subjects' blood glucose values were also noted using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were depicted on Clarke's error grid, only 58% of the predicted values were clinically acceptable.
Since the objective of the project was to screen for diabetes rather than to estimate actual blood glucose, the data were classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL', and 'HIGH', using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype was economical, fast, and pain-free; it can therefore be used for the mass screening of diabetes.
Keywords: Clarke's error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes
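The classification step can be sketched with a binary hinge-loss (linear SVM) trainer fitted by sub-gradient descent. The impedance features, labels, and hyperparameters below are invented, and a two-class model stands in for the study's three-class classifier.

```python
def train_linear_svm(X, y, epochs=500, lam=0.01, lr=0.1):
    """Binary linear SVM fitted by sub-gradient descent on the hinge loss;
    a minimal two-class stand-in for the three-class classifier in the study.
    Labels are +1 / -1."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point inside the margin: hinge-loss sub-gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:  # only the L2 regulariser pulls on w
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

# Hypothetical normalised impedance features: low values ~ normal, high ~ elevated
X = [[0.2, 0.1], [0.3, 0.2], [0.1, 0.3], [0.9, 0.8], [0.8, 0.9], [1.0, 0.7]]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
predict = lambda x: 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1
```

A three-class version, as in the study, is typically built from several such binary classifiers in a one-vs-rest arrangement.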
620 Design of Agricultural Machinery Factory Facility Layout
Authors: Nilda Tri Putri, Muhammad Taufik
Abstract:
Tools and agricultural machinery (Alsintan) are used in agribusiness activities; Alsintan is used to transform traditional farming systems, which generally rely on manual equipment, into modern mechanized agriculture. In 2012, CV Nugraha Chakti Consultant made an action plan for the development of the Alsintan industry in West Sumatra, aiming to develop the medium-scale Alsintan industries into a major industry; one of the efforts made was to increase the production capacity of the Alsintan industry. The production capacities for the flagship products, the hydrotiller and the thresher, were set at 2,000 units per year each. CV Citra Dragon, one of the medium-scale Alsintan industries in West Sumatra, plans to relocate its existing plant to meet consumer demand that grows each year. The increased production capacity and the plant relocation plan have led to a change in the layout; it is therefore necessary to design the facility layout of the CV Citra Dragon plant. The first step is to design the production floor layout, which is done by applying a group technology layout. The initial step is machine grouping and part-family formation using Average Linkage Clustering (ALC) and Rank Order Clustering (ROC). Next, independent workstation design and layout design are carried out using the Modified Spanning Tree (MST). Alternative selection is then performed to choose the best production floor layout between the ALC and ROC cell groupings. Subsequently, the layouts of warehouses, offices, and other production support facilities are designed; the Activity Relationship Chart method is used to organize the placement of the designed factory facilities. After the facility plan is structured, the cost of establishing the manufacturing facility is calculated. The layout type used on the production floor is a group technology layout. The production floor is composed of four machine cells, an assembly area, and a painting area.
The total material travel distance for a single production run amounted to 1,120.16 m, corresponding to 18.7 minutes of transportation time per run. The Alsintan factory was designed with a circular flow pattern comprising 11 facilities: 10 rooms and 1 parking space. The factory building measures 84 m x 52 m. Keywords: Average Linkage Clustering (ALC), Rank Order Clustering (ROC), Modified Spanning Tree (MST), Activity Relationship Chart (ARC)
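The Rank Order Clustering step described in the abstract above can be sketched in a few lines: the rows (machines) and columns (parts) of a binary machine-part incidence matrix are alternately re-sorted by the decimal weight of their binary patterns until the ordering stabilizes, which pushes related machines and parts into diagonal blocks (machine cells). The 4-machine by 5-part matrix below is hypothetical, not the CV Citra Dragon data.

```python
def rank_order_clustering(matrix):
    """Rank Order Clustering: alternately sort machines (rows) and parts
    (columns) by the decimal weight of their binary patterns until stable."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    rows, cols = list(range(n_rows)), list(range(n_cols))
    while True:
        # Weight of a row: read its entries (in the current column order) as a binary number.
        new_rows = sorted(
            rows,
            key=lambda r: sum(matrix[r][c] << (n_cols - 1 - i) for i, c in enumerate(cols)),
            reverse=True,
        )
        # Weight of a column: read its entries (in the new row order) as a binary number.
        new_cols = sorted(
            cols,
            key=lambda c: sum(matrix[r][c] << (n_rows - 1 - i) for i, r in enumerate(new_rows)),
            reverse=True,
        )
        if new_rows == rows and new_cols == cols:
            return rows, cols
        rows, cols = new_rows, new_cols

# Hypothetical incidence matrix: entry [m][p] = 1 if machine m processes part p.
incidence = [
    [1, 0, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0],
]
row_order, col_order = rank_order_clustering(incidence)
# The reordered matrix shows two machine cells as diagonal blocks of 1s.
blocked = [[incidence[r][c] for c in col_order] for r in row_order]
```

In the study, the cell grouping obtained this way is compared against the Average Linkage Clustering alternative before the workstation layout is designed with the Modified Spanning Tree method.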
Procedia PDF Downloads 497619 Chemical Pollution of Water: Waste Water, Sewage Water, and Pollutant Water
Authors: Nabiyeva Jamala
Abstract:
We divide water into drinking, mineral, industrial, technical, and thermal-energetic types according to its use and purpose. Drinking water must comply with sanitary requirements and norms in terms of its organoleptic, physical, and chemical properties. Mineral water must comply with the norms because some of its components have therapeutic properties. Industrial water must meet the normative requirements of the industrial field in which it is used. Technical water should be suitable for use in agriculture, household applications, and irrigation, and must meet the corresponding normative requirements. Thermal-energy water is used in the national economy and consists of thermal and energy water. Water is a filter and accumulator of all types of pollutants entering the environment. This is explained by its ability to dissolve mineral and gaseous compounds and by the regular circulation of water. Environmentally clean, pure, non-toxic water is vital for the normal life activity of humans, animals, and other living beings. Chemical pollutants enter water basins mainly with wastewater from the non-ferrous and ferrous metallurgy, oil, gas, chemical, stone, coal, pulp and paper, and forest materials processing industries and make them unusable. Wastewater from the chemical, electric power, woodworking, and machine-building industries also plays a huge role in the pollution of water sources. Chlorine compounds, phenols, and chloride-containing substances have a strong lethal-toxic effect on organisms when mixed with water. Water contaminated with heavy metals - lead, cadmium, mercury, nickel, copper, selenium, chromium, tin, etc. - causes poisoning in humans, animals, and other living beings. Thus, selenium in water causes liver diseases in people, mercury causes diseases of the nervous system, and cadmium causes kidney diseases.
Pollution of the world's ocean waters and other water basins with oil and oil products is one of the most dangerous environmental problems facing humanity today. Mixing even the smallest amount of oil or its products into drinking water gives it a bad, unpleasant smell. One ton of oil spilled into water creates a surface layer covering an area of about 2.6 km2. As a result, light penetration, photosynthesis, and the oxygen supply of the water are weakened, and the lives of living beings are placed in great danger. Keywords: chemical pollutants, wastewater, SSAM, polyacrylamide
Procedia PDF Downloads 73618 Open Science Philosophy, Research and Innovation
Authors: C.Ardil
Abstract:
Open Science translates the understanding and application of various theories and practices in open science philosophy, systems, paradigms and epistemology. Open Science originates with the premise that universal scientific knowledge is a product of a collective scholarly and social collaboration involving all stakeholders and knowledge belongs to the global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact and benefits of science and to accelerate advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society and responsive to societal challenges, and has the potential to enable growth and innovation through reuse of scientific results by all stakeholders at all levels of society, and ultimately contribute to growth and competitiveness of global society. Open Science is a global movement to improve accessibility to and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. 
Open Science represents a novel systematic approach to the scientific process, shifting from the standard practice of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage of the research process, based on cooperative work and the diffusion of scholarly knowledge without barriers or restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research (publications and research data) publicly accessible in digital format with no limitations. It is about extending the principles of openness to the whole research cycle, fostering sharing and collaboration as early as possible, and thus entails a systemic change to the way science and research are done. Open Science is the ongoing transition in how research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative, and closer to society. It involves various movements aiming to remove the barriers to sharing any kind of output, resource, method, or tool at any stage of the research process, and embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, and research crowdfunding. The recognition and adoption of open science practices, including policies that increase open access to scientific literature and encourage data and code sharing, is increasing. Open science policies are motivated by ethical, moral, or utilitarian arguments, such as the right to access digital research literature, the accumulation of open research data, research indicators, transparency in academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices.
Researchers also use open science practices to their own advantage: to increase citations and media attention, and to attract potential collaborators, career opportunities, donations, and funding. Open data findings provide evidence that open science practices yield significant benefits to researchers in research creation, collaboration, communication, and evaluation relative to more traditional closed science practices. Open science nonetheless raises concerns, such as the rigor of peer review, practical matters such as financing and career development, and the sacrifice of author rights. Researchers are therefore encouraged to implement open science research within the framework of existing academic evaluation and incentives. Accordingly, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions. Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data
Procedia PDF Downloads 133617 Corporate Social Responsibility Practices of Local Large Firms in the Developing Economies: The Case of the East Africa Region
Authors: Lilian Kishimbo
Abstract:
This study aims to examine the Corporate Social Responsibility (CSR) practices of local large firms in the East Africa region. In this study, CSR is defined as all actions that go beyond obeying minimum legal requirements, as espoused by other authors. Despite the growth of the CSR literature, empirical evidence clearly demonstrates an imbalance of CSR studies in developing countries. Moreover, most research on CSR in developing economies comes from large fast-growing economies or BRICS members (i.e., Brazil, India, China, and South Africa), as well as Indonesia and Malaysia, and further research in Africa in particular has been advocated. Taking Africa as an example, research on CSR practices is scant, and the few available studies come mainly from Nigeria and South Africa, leaving other parts of Africa, for example East Africa, underrepresented. Furthermore, in the face of globalization, the literature has focused mostly on multinational companies (MNCs) operating either North-North or North-South, and less on indigenous South-South local firms. The existing literature on Africa thus consists mostly of studies of MNCs, and little is known about the CSR of local indigenous firms operating in the South, particularly in the East Africa region. Accordingly, this paper explores the CSR practices of indigenous local large firms of the East Africa region, particularly Kenya and Tanzania, with the aim of answering the question: do local firms of the East Africa region engage in CSR practices similar to those of firms in other parts of the world? To answer this question, only listed local large firms were considered, on the assumption that they are large enough to engage in CSR. Newspapers were the main source of data, supplemented by business Annual Reports for the period 2010-2012. The research findings revealed that local firms of East Africa do engage in CSR practices.
However, there are some differences in the set of activities these firms prefer to engage in compared to findings from previous studies. Some CSR activities that were given priority by firms in East Africa were less prioritized in other parts of the world, including Indonesia. This paper adds to the body of CSR knowledge and to the experience of CSR practices of South-South indigenous firms, where there is a relative dearth of literature. Finally, the paper concludes that local firms of the East Africa region engage in activities similar to those of other firms globally, but give more priority to certain activities, such as education- and health-related activities. The study also intends to assist policymakers at the firm level in planning long-lasting CSR projects for their stakeholders. Keywords: Africa, corporate social responsibility, developing countries, indigenous firms, Kenya, Tanzania
Procedia PDF Downloads 420616 Psychological Variables Predicting Academic Achievement in Argentinian Students: Scales Development and Recent Findings
Authors: Fernandez Liporace Mercedes, Uriel Fabiana
Abstract:
Academic achievement in high school and college students is currently a matter of concern. National and international assessments show high schoolers to be low achievers, and local statistics indicate alarming dropout percentages at this educational level. Even so, 80% of those students intend to attend higher education. On the other hand, admission to Public National Universities is free and non-selective, with no examination procedures. Though initial registrations are massive (307,894 students), only 50% of freshmen pass their first-year classes, and 23% achieve a degree. Low performance is a common problem. Hence, freshman adaptation and adjustment, dropout, and low academic achievement arise as topics on the agenda. Besides, the hinge between high school and college must be examined in depth, in order to achieve an integrated and successful path from one educational stratum to the other. Psychology research addresses the situation along two main lines. One concerns psychometric scales: designing and/or adapting tests and examining their technical properties and theoretical validity (e.g., academic motivation, learning strategies, learning styles, coping, perceived social support, parenting styles and parental consistency, paradoxical personality as correlated with creative skills, psychopathological symptomatology). The second research line emphasizes relationships among the variables measured by these scales, facing the formulation and testing of predictive models of academic achievement and establishing differences by sex, age, educational level (high school vs. college), and career. Pursuing these goals, several studies were carried out in recent years, reporting findings and producing assessment technology useful for detecting students academically at risk as well as good achievers.
Multiple samples were analysed, totaling more than 3,500 participants (2,500 from college and 1,000 from high school), including descriptive, correlational, group-differences, and explicative designs. A brief summary of the most relevant results is presented. Providing information to design specific interventions according to each learner's features and his/her educational environment comes up as a mid-term goal. Furthermore, that information might be helpful for adapting curricula by career, as well as for implementing special didactic strategies differentiated by sex and personal characteristics. Keywords: academic achievement, higher education, high school, psychological assessment
Procedia PDF Downloads 370615 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles, and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method that allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection.
In this investigation, accurate tracking of the reference signal is considered particularly important for a position control system because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected to ensure reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages that the state-space representation provides for modelling MIMO systems, such controllers are expected to be easy to tune for disturbance rejection, assuming that their designer is experienced. An in-depth multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications. Keywords: control, DC motor, discrete PID, discrete state feedback
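As a rough illustration of the two PID architectures compared above, the sketch below implements a positional-form and an incremental (velocity/speed-form) discrete PID and runs both against the same first-order plant. The gains and the plant model y[k+1] = 0.9*y[k] + 0.1*u[k] are illustrative assumptions, not the motor parameters or Simulink models used in the article.

```python
class PositionalPID:
    """Positional form: u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k] - e[k-1])/Ts."""
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.e1 = 0.0  # previous error

    def update(self, e):
        self.integral += e * self.ts
        u = self.kp * e + self.ki * self.integral + self.kd * (e - self.e1) / self.ts
        self.e1 = e
        return u

class IncrementalPID:
    """Velocity (incremental) form: accumulates control increments
    du[k] = Kp*(e[k]-e[k-1]) + Ki*Ts*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2])/Ts."""
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.e1 = self.e2 = 0.0  # two previous errors
        self.u = 0.0

    def update(self, e):
        self.u += (self.kp * (e - self.e1)
                   + self.ki * self.ts * e
                   + self.kd * (e - 2 * self.e1 + self.e2) / self.ts)
        self.e2, self.e1 = self.e1, e
        return self.u

def simulate(controller, steps=5000, setpoint=1.0):
    """Step response of a hypothetical first-order plant y[k+1] = 0.9*y[k] + 0.1*u[k]."""
    y = 0.0
    for _ in range(steps):
        u = controller.update(setpoint - y)
        y = 0.9 * y + 0.1 * u
    return y

y_pos = simulate(PositionalPID(kp=1.0, ki=0.5, kd=0.005, ts=0.01))
y_inc = simulate(IncrementalPID(kp=1.0, ki=0.5, kd=0.005, ts=0.01))
```

With zero initial conditions the two forms are algebraically identical; in practice they differ in how easily output saturation and anti-windup are handled, which is one reason the incremental form is common in industrial loops.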
Procedia PDF Downloads 268614 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking
Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel
Abstract:
Divergent thinking, a cognitive component of creativity, and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles: positive schizotypal traits with reactive cognitive control and attentional flexibility, and autistic traits with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated with an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating greater context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained/anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42).
Generalized linear models revealed a 3-way interaction of autistic traits, positive schizotypal traits, and proactive cognitive control associated with increased context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who are able to tap both systematic and flexible processing modes within and across contexts. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton. Keywords: autism, schizotypy, creativity, cognitive control
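The context adaptability measure described above, the variance of the unique-to-total response proportion across the four contexts, is simple enough to sketch directly; the response counts below are invented for illustration, not the study's data.

```python
def context_adaptability(unique_counts, total_counts):
    """Population variance of the unique/total response proportion across
    contexts; lower variance = more stable originality across contexts."""
    props = [u / t for u, t in zip(unique_counts, total_counts)]
    mean = sum(props) / len(props)
    return sum((p - mean) ** 2 for p in props) / len(props)

# Hypothetical counts of unique and total responses in four contexts.
stable = context_adaptability([4, 5, 4, 5], [10, 10, 10, 10])    # consistent across contexts
erratic = context_adaptability([9, 1, 8, 2], [10, 10, 10, 10])   # strongly context-dependent
```

The first participant would count as more context-adaptable (lower variance) even though both produce the same total number of unique responses.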
Procedia PDF Downloads 137613 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong
Abstract:
This paper presents the evaluation of soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive; thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this non-destructive and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model, coupled with the apparent soil electrical resistivity (ρ), can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, the following three items were targeted for addition to the computer vision scheme: the apparent electrical resistivity of the soil (ρ), measured using a set of four probes arranged in Wenner's array; the soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes.
A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that the ρ, w, and CPT measurements can be analyzed collectively to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary methods to the computer vision system. Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
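The GLCM textural parameters at the core of the computer vision step can be illustrated with a minimal, dependency-free sketch (production code would typically use a library such as scikit-image). The two 4-level "soil" patches below are synthetic, not images from the study; the sketch builds a co-occurrence matrix for a horizontal pixel offset and derives two common GLCM parameters, contrast and energy.

```python
def glcm_features(image, levels, dx=1, dy=0):
    """Grey Level Co-occurrence Matrix for one pixel offset, plus two
    textural parameters: contrast (local variation) and energy (uniformity)."""
    rows, cols = len(image), len(image[0])
    glcm = [[0] * levels for _ in range(levels)]
    pairs = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[image[r][c]][image[r2][c2]] += 1  # count grey-level pair (i, j)
                pairs += 1
    # Normalize counts to joint probabilities before computing the features.
    p = [[glcm[i][j] / pairs for j in range(levels)] for i in range(levels)]
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(levels) for j in range(levels))
    energy = sum(p[i][j] ** 2 for i in range(levels) for j in range(levels))
    return contrast, energy

# Synthetic 4x4 patches quantized to 4 grey levels.
smooth = [[0, 0, 1, 1]] * 4                   # gently varying texture
speckled = [[0, 3, 0, 3], [3, 0, 3, 0]] * 2   # rapidly alternating texture
smooth_contrast, smooth_energy = glcm_features(smooth, levels=4)
speckled_contrast, speckled_energy = glcm_features(speckled, levels=4)
```

In the scheme described above, parameters of this kind, computed per soil image, are the inputs to the ANN classifier; higher contrast indicates coarser local variation, while higher energy indicates a more uniform texture.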
Procedia PDF Downloads 240612 Dual Carriage of Hepatitis B Surface and Envelope Antigen in Adults in the Poorest Region of Nigeria: 2000-2015
Authors: E. Isaac, I. Jalo, Y. Alkali, A. Ajani, A. Rasaki, Y. Jibrin, K. Mustapha, A. Ayuba, S. Charanchi, H. Danlami
Abstract:
Introduction: Hepatitis B infection continues to be a serious global health problem, with about 2 billion people infected worldwide, many of them in sub-Saharan Africa. Nigeria is one of the countries with the highest incidence, with a prevalence of 10-15%. Methods: Records of hepatitis B surface and envelope antigen test results in adults at the Federal Teaching Hospital, Gombe between May 2000 and May 2015 were retrieved and analyzed. Findings: Adult out-patient consultations and in-patient admissions were 343,083 and 67,761 respectively, accounting for 87% of the total. Hepatitis B surface antigenaemia was tested for in 23,888 adults and children, of whom 88.9% (21,240) were adults. Males constituted 56% (11,902/21,240) and females 44% (9,211/21,240). Of the tested individuals, 5,104 (24.0%) were aged 19-25 years; 12,039 (56.7%) 26-45 years; 2,111 (9.9%) 46-55 years; 590 (2.8%) 56-65 years; and 766 (3.6%) >65 years. Among adult males, 17% (2,133/11,902) were aged 19-25, while 58% (7,017/11,902), 11.9% (1,421/11,902), 6.4% (765/11,902), and 4.7% (563/11,902) were 26-45, 46-55, 56-65, and >65 years old, respectively. Adults aged 19-25, 26-45, 46-55, 56-65, and >65 years constituted 32% (2,966/9,211), 54.4% (5,009/9,211), 7.4% (684/9,211), 3.8% (350/9,211), and 2.2% (201/9,211) of females, respectively. Overall, 16.2% (3,431/21,240) demonstrated hepatitis B surface antigenaemia. The sero-positivity rate was 16.9% (865/5,104) at 19-25 years and 21.2% (2,559/12,039) among 26-45-year-olds, while 17.9% (377/2,111), 14.1% (83/590), and 7.3% (56/766) of the 46-55-, 56-65-, and >65-year-old individuals screened were seropositive. The highest sero-positivity rate was found in young adult males aged 19-25 years, at 27.9% (398/1,426), and the lowest in elderly males, at 7.4% (28/377). The HBeAg testing rate among HBsAg-seropositive individuals was 97.3% (3,338/3,431); males constituted 59.7% (1,992/3,338) and females 40.3% (1,345/3,338).
Of those tested for HBeAg, 25.3% (844/3,338) were aged 19-25 years; 61.1% (2,039/3,338) 26-45 years; 10.2% (340/3,338) 46-55 years; 2.7% (90/3,338) 56-65 years; and 0.7% >65 years. HBe antigenaemia was positive in 8.2% (275/3,338) of those tested; 41% (113/275), 50.2% (138/275), 5.4% (15/275), 1.8% (5/275), and 1.1% (3/275) of HBe sero-positivity occurred in the 19-25, 26-45, 46-55, 56-65, and >65-year age groups, respectively. The dual sero-positivity rate was highest in young adults aged 19-25 years, at 13% (113/844), and lowest at 46-55 years, at 4.4% (15/340). Among males, 4.2% (15/360), 13.5% (69/512), 6.7% (90/1,348), 4.6% (10/214), 5% (2/40), and 6.7% (1/15) of those aged 19-25, 26-45, 46-55, 56-65, and >65 years had HBe antigenaemia, respectively. Among females, 27/293 (9.2%) aged 19-25, 26/500 (5.2%) aged 26-45, 2/84 (2.4%) aged 46-55, 1/12 (8.3%) aged 56-65, and 1/9 (11.1%) aged >65 years had dual antigenaemia. Among women of childbearing age, 6.9% (53/793) had dual carriage. Conclusion: Dual hepatitis B surface and envelope antigenaemia is highest in young adult males. This has significant implications for the development of chronic liver disease and hepatocellular carcinoma. Keywords: adult, Hepatitis B, Nigeria, dual carriage
Procedia PDF Downloads 261611 Evaluation of Microbial Accumulation of Household Wastewater Purified by Advanced Oxidation Process
Authors: Nazlı Çetindağ, Pelin Yılmaz Çetiner, Metin Mert İlgün, Emine Birci, Gizemnur Yıldız Uysal, Özcan Hatipoğlu, Ehsan Tuzcuoğlu, Gökhan Sır
Abstract:
Water scarcity is an unavoidable issue impacting an increasing number of individuals daily, representing a global crisis stemming from swift population growth, urbanization, and excessive resource exploitation. Consequently, solutions that involve the reclamation of wastewater are considered essential. In this context, household wastewater, categorized as greywater, plays a significant role in freshwater used for residential purposes and is attributed to washing. This type of wastewater comprises diverse elements, including organic substances, soaps, detergents, solvents, biological components, and inorganic elements such as certain metal ions and particles. The physical characteristics of wastewater vary depending on its source, whether commercial, domestic, or from a hospital setting. Consequently, the treatment strategy for this wastewater type necessitates comprehensive investigation and appropriate handling. The advanced oxidation process (AOP) emerges as a promising technique associated with the generation of reactive hydroxyl radicals highly effective in oxidizing organic pollutants. This method takes precedence over others like coagulation, flocculation, sedimentation, and filtration due to its avoidance of undesirable by-products. In the current study, the focus was on exploring the feasibility of the AOP for treating actual household wastewater. To achieve this, a laboratory-scale device was designed to effectively target the formed radicals toward organic pollutants, resulting in lower organic compounds in wastewater. Then, the number of microorganisms present in treated wastewater, in addition to the chemical content of the water, was analyzed to determine whether the lab-scale device eliminates microbial accumulation with AOP. This was also an important parameter since microbes can indirectly affect human health and machine hygiene. 
To do this, water samples were taken under treated and untreated conditions and then inoculated on general-purpose agar to determine the total plate count. The analysis showed that AOP may be an option for treating household wastewater and reducing microbial growth. Keywords: usage of household water, advanced oxidation process, water reuse, modelling
Procedia PDF Downloads 50610 Monitoring Memories by Using Brain Imaging
Authors: Deniz Erçelen, Özlem Selcuk Bozkurt
Abstract:
The course of daily human life calls for memories and for remembering the time and place of certain events. Recalling memories takes up a substantial amount of time for an individual. Unfortunately, scientists lack the proper technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together, a process called “neural synchronization.” Sadly, the hippocampus often deteriorates with age. Proteins and hormones, which repair and protect cells in the brain, typically decline as an individual's age increases. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off as mild but may evolve into serious medical conditions such as dementia and Alzheimer's disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology to examine the brain and its neural pathways. For instance, Magnetic Resonance Imaging (MRI) is used to collect detailed images of an individual's brain anatomy. To monitor and analyze brain functions, a different version of this machine, Functional Magnetic Resonance Imaging (fMRI), is used. The fMRI is a neuroimaging procedure conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity: neurons need more oxygen when they are active, and the fMRI measures the change in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography (EEG) is also a significant way to monitor the human brain.
The EEG is more versatile and cost-efficient than fMRI. An EEG measures the electrical activity generated by the cortical layers of the brain, allowing scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, which has resulted in increasingly intense and detailed inspection of neurons and neural pathways. Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons
Procedia PDF Downloads 88609 Internet of Things in Higher Education: Implications for Students with Disabilities
Authors: Scott Hollier, Ruchi Permvattana
Abstract:
The purpose of this abstract is to share the findings of a recently completed disability-related Internet of Things (IoT) project undertaken at Curtin University in Australia. The project focused on identifying how IoT could support people with disabilities with their educational outcomes. To achieve this, the research consisted of an analysis of current literature and interviews conducted with students with vision, hearing, mobility and print disabilities. While the research acknowledged the ability to collect data with IoT is now a fairly common occurrence, its benefits and applicability still need to be grounded back into real-world applications. Furthermore, it is important to consider if there are sections of our society that may benefit from these developments and if those benefits are being fully realised in a rush by large companies to achieve IoT dominance for their particular product or digital ecosystem. In this context, it is important to consider a group which, to our knowledge, has had little specific mainstream focus in the IoT area –people with disabilities. For people with disabilities, the ability for every device to interact with us and with each other has the potential to yield significant benefits. In terms of engagement, the arrival of smart appliances is already offering benefits such as the ability for a person in a wheelchair to give verbal commands to an IoT-enabled washing machine if the buttons are out of reach, or for a blind person to receive a notification on a smartphone when dinner has finished cooking in an IoT-enabled microwave. With clear benefits of IoT being identified for people with disabilities, it is important to also identify what implications there are for education. 
With higher education being a critical pathway for many people with disabilities in finding employment, the question as to whether such technologies can support the educational outcomes of people with disabilities was what ultimately led to this research project. This research will discuss several significant findings that have emerged from the research in relation to how consumer-based IoT can be used in the classroom to support the learning needs of students with disabilities, how industrial-based IoT sensors and actuators can be used to monitor and improve the real-time learning outcomes for the delivery of lectures and student engagement, and a proposed method for students to gain more control over their learning environment. The findings shared in this presentation are likely to have significant implications for the use of IoT in the classroom through the implementation of affordable and accessible IoT solutions and will provide guidance as to how policies can be developed as the implications of both benefits and risks continue to be considered by educators.
Keywords: disability, higher education, internet of things, students
Procedia PDF Downloads 119
608 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School
Authors: Martín Pratto Burgos
Abstract:
The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to help students prepare for the math courses that are essential for Engineering degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict a student's activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created in the R programming language for data that follow a binomial distribution. Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. After applying Principal Component Analysis, the main component represented on the y-axis is approval of the Introductory Mathematical Course, and the x-axis represents approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered students' activity, performed the best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, students' engagement in school activities will continue for three years after approval of the Introductory Mathematical Course, provided they have successfully completed the Math1 and Math2 courses. Passing the Math3 course does not have any effect on student activity. Concerning academic progress, the best fit is Model 1. 
It has an AUC of 0.56 and an accuracy rate of 91%. The model indicates that if students pass the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect students' activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. The model shows that if students pass the Introductory Mathematical Course, it will help them pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue following the recommended engineering curriculum. Additionally, the Introductory Mathematical Course helps students pass Math1 and Math2 when they start Engineering School. The models obtained in the research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
Keywords: machine-learning, engineering, university, education, computational models
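As a minimal illustration of the AUC and accuracy metrics reported for these models, the following Python sketch computes ROC AUC via the Mann-Whitney formulation and accuracy at a fixed threshold; the labels and scores are invented for illustration and are not data from the study:

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive example scores higher than a randomly
    chosen negative one (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Fraction of examples classified correctly at a fixed threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Invented scores for four students (1 = remained active, 0 = inactive)
labels = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.7, 0.2]
print(auc(labels, scores))       # 0.75
print(accuracy(labels, scores))  # 0.75
```

Read this way, Model 2's reported AUC of 0.81 means that a randomly chosen active student receives a higher predicted score than a randomly chosen inactive one about 81% of the time.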
Procedia PDF Downloads 99
607 Variables, Annotation, and Metadata Schemas for Early Modern Greek
Authors: Eleni Karantzola, Athanasios Karasimos, Vasiliki Makri, Ioanna Skouvara
Abstract:
Historical linguistics unveils the historical depth of languages and traces variation and change by analyzing linguistic variables over time. This field of linguistics usually deals with a closed data set that can only be expanded by the (re)discovery of previously unknown manuscripts or editions. In some cases, it is possible to use (almost) the entire closed corpus of a language for research, as is the case with the Thesaurus Linguae Graecae digital library for Ancient Greek, which contains most of the extant ancient Greek literature. However, concerning ‘dynamic’ periods when the production and circulation of texts in printed as well as manuscript form have not been fully mapped, representative samples and corpora of texts are needed. Such material and tools are utterly lacking for Early Modern Greek (16th-18th c.). In this study, the principles of the creation of EMoGReC, a pilot representative corpus of Early Modern Greek (16th-18th c.), are presented. Its design follows the fundamental principles of historical corpora. The selection of texts aims to create a representative and balanced corpus that gives insight into diachronic, diatopic and diaphasic variation. The pilot sample includes data derived from fully machine-readable vernacular texts, which belong to 4-5 different textual genres and come from different geographical areas. We develop a hierarchical linguistic annotation scheme, further customized to fit the characteristics of our text corpus. Regarding variables and their variants, we use as a point of departure the bundle of twenty-four features (or categories of features) for prose demotic texts of the 16th c. Tags are introduced bearing the variants [+old/archaic] or [+novel/vernacular]. On the other hand, further phenomena of change that are underway (cf. The Cambridge Grammar of Medieval and Early Modern Greek) are selected for tagging. 
The annotated texts are enriched with metalinguistic and sociolinguistic metadata to provide a testbed for the development of the first comprehensive set of tools for the Greek language of that period. Based on a relational management system with interconnection of data, annotations, and their metadata, the EMoGReC database aspires to join a state-of-the-art technological ecosystem for the research of observed language variation and change using advanced computational approaches.
Keywords: early modern Greek, variation and change, representative corpus, diachronic variables
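A minimal sketch of how the [+old/archaic] vs [+novel/vernacular] tagging scheme described above might be applied programmatically; the variable name and variant forms below are illustrative assumptions, not the project's actual annotation inventory:

```python
# Hypothetical variant lexicon: each linguistic variable maps attested
# variant forms to a tag, following the [+old/archaic] vs
# [+novel/vernacular] scheme. The forms are illustrative placeholders.
VARIANT_TAGS = {
    "negation": {
        "ouk": "[+old/archaic]",
        "den": "[+novel/vernacular]",
    },
}

def tag_token(variable, form):
    """Return the annotation tag for a variant form, or None if the form
    is not listed under that variable."""
    return VARIANT_TAGS.get(variable, {}).get(form)

print(tag_token("negation", "den"))  # [+novel/vernacular]
```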
Procedia PDF Downloads 68
606 Design, Prototyping and Testing of Manually Operated Teff Seed Cum Fertilizer Drill for Ethiopian Farmers
Authors: Fentahun Ayu Muche, Yonas Mitiku Degu
Abstract:
Ethiopian farmers traditionally sow Teff seeds using the broadcasting method. However, row sowing offers higher grain yields compared to broadcasting. Despite being introduced to row sowing techniques, many farmers prefer broadcasting due to its simplicity; without proper technology, row sowing is time-consuming, labor-intensive, and physically demanding. The use of suitable row Teff seeder technologies can save time, reduce labor requirements, facilitate weed control, and increase productivity. Unfortunately, previously promoted technologies have not gained significant acceptance due to various limitations. The Agricultural Bureau of the Amhara Region, Ethiopia, has confirmed that row sowing technology significantly improves productivity, yielding results up to twice as high as traditional sowing methods. This innovative approach offers a feasible solution for enhancing Teff production in Ethiopia, contributing to greater precision and efficiency in farming practices. This research aims to design, fabricate, and test a Teff seed-cum-fertilizer drill while addressing the shortcomings of earlier technologies. During the conceptual design phase, eight alternatives were proposed, with the rail-type row Teff seed-cum-fertilizer drill selected for its technical and economic feasibility. The chosen design features five rows with adjustable spacing between 15 cm and 25 cm. It also includes an interchangeable metering mechanism for seeding rates of 5 kg/hectare and 10 kg/hectare. A key focus was placed on the metering mechanism to eliminate power transmission via ground traction, thereby mitigating performance issues caused by wheel skidding. The new design uses pinions that roll over two parallel racks suspended by four posts to transmit motion to the metering unit. Detailed analysis of the selected concept and working mechanism was conducted, and the prototype was manufactured according to specifications from the detailed design. 
Laboratory and field tests of the fabricated prototype demonstrated good metering mechanism efficiency, with no significant differences between rows. However, the performance of the Teff seed-cum-fertilizer drill is highly sensitive to the seed level in the hopper. Therefore, maintaining the recommended seed level is crucial for ensuring uniform seed distribution during farm operations.
Keywords: row teff planter, disc metering, scoop metering, rack and pinion, fertilizer applicator, seed drill
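As a rough back-of-the-envelope check on the stated seeding rates, this Python sketch converts a field rate and row spacing into seeds dropped per metre of row; the thousand-seed weight of 0.3 g is an assumed typical value for Teff, not a figure from the study:

```python
def seeds_per_metre(rate_kg_per_ha, row_spacing_m, thousand_seed_weight_g):
    """Seeds dropped per metre of row for a given field seeding rate."""
    g_per_m2 = rate_kg_per_ha * 1000.0 / 10000.0  # 1 ha = 10,000 m^2
    # one metre of row serves a strip one row-spacing wide
    g_per_row_metre = g_per_m2 * row_spacing_m
    return g_per_row_metre / (thousand_seed_weight_g / 1000.0)

# 5 kg/ha at 20 cm row spacing, assuming a 0.3 g thousand-seed weight
print(round(seeds_per_metre(5, 0.20, 0.3)))  # 333
```

Doubling the rate to the drill's 10 kg/ha setting roughly doubles the count per metre, which is why the interchangeable metering mechanism matters.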
Procedia PDF Downloads 14
605 Comprehensive Longitudinal Multi-omic Profiling in Weight Gain and Insulin Resistance
Authors: Christine Y. Yeh, Brian D. Piening, Sarah M. Totten, Kimberly Kukurba, Wenyu Zhou, Kevin P. F. Contrepois, Gucci J. Gu, Sharon Pitteri, Michael Snyder
Abstract:
Three million deaths worldwide are attributed to obesity. However, the biomolecular mechanisms that link adiposity to subsequent disease states are poorly understood. Insulin resistance characterizes approximately half of obese individuals and is a major cause of obesity-mediated diseases such as Type II diabetes, hypertension, and other cardiovascular diseases. This study makes use of longitudinal, quantitative, high-throughput multi-omics methodologies (genomics, epigenomics, transcriptomics, glycoproteomics, etc.) on blood samples to develop multigenic and multi-analyte signatures associated with weight gain and insulin resistance. Participants of this study underwent a 30-day period of weight gain via excessive caloric intake followed by a 60-day period of restricted dieting and return to baseline weight. Blood samples were taken at three different time points per patient: baseline, peak weight, and post weight loss. Patients were characterized as either insulin resistant (IR) or insulin sensitive (IS) before having their samples processed via longitudinal multi-omic technologies. Using methods from machine learning, clustering, and network analysis, this comparative study revealed a wealth of biomolecular changes associated with weight gain. Pathways of interest included those involved in lipid remodeling, acute inflammatory response, and glucose metabolism. Some of these biomolecules returned to baseline levels as the patients returned to normal weight, while some remained elevated. IR patients exhibited key differences in inflammatory response regulation in comparison to IS patients at all time points. These signatures suggest differential metabolism and inflammatory pathways between IR and IS patients. Biomolecular differences associated with weight gain and insulin resistance were identified on various levels: in gene expression, epigenetic change, transcriptional regulation, and glycosylation. 
This study not only contributed new biology that could be of use in preventing or predicting obesity-mediated diseases but also matured novel biomedical informatics technologies for producing and processing data across many comprehensive omics levels.
Keywords: insulin resistance, multi-omics, next generation sequencing, proteogenomics, type ii diabetes
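One elementary building block of the longitudinal comparison described above is the per-analyte fold change between time points (e.g. baseline vs. peak weight); a minimal sketch, with hypothetical analyte names and values rather than data from the study:

```python
import math

def log2_fold_changes(baseline, peak):
    """Per-analyte log2 fold change between two longitudinal time points
    (paired samples from the same patient)."""
    return {k: math.log2(peak[k] / baseline[k]) for k in baseline}

# Hypothetical analyte levels at baseline vs. peak weight
baseline = {"CRP": 1.0, "IL6": 2.0}
peak = {"CRP": 4.0, "IL6": 2.0}
print(log2_fold_changes(baseline, peak))  # {'CRP': 2.0, 'IL6': 0.0}
```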
Procedia PDF Downloads 429
604 Design Evaluation Tool for Small Wind Turbine Systems Based on the Simple Load Model
Authors: Jihane Bouabid
Abstract:
The transition towards sustainable energy sources has become imperative. In the 21st century, society demands continual technological advancement and expects rapid results as an integral component of its pursuit of an elevated standard of living. As part of empowering human development, driving economic growth, and meeting social needs, access to energy services has become a necessity. As part of these improvements, we introduce the project "Mywindturbine" - an interactive web user interface for design and analysis in the field of wind energy, with particular adherence to the IEC (International Electrotechnical Commission) standard 61400-2 "Wind turbines – Part 2: Design requirements for small wind turbines". Wind turbines play a pivotal role in Morocco's renewable energy strategy, leveraging the nation's abundant wind resources. The IEC 61400-2 standard ensures the safety and design integrity of small wind turbines deployed in Morocco, providing guidelines for performance and safety protocols. Conformity with this standard ensures turbine reliability, facilitates standards alignment, and accelerates the integration of wind energy into Morocco's energy landscape. The GUI (Graphical User Interface) is aimed at engineers and professionals in the field of wind energy systems who would like to design a small wind turbine system following the safety requirements of the international standard IEC 61400-2. The interface provides an easy way to analyze the structure of the turbine machine under normal and extreme load conditions based on the specific inputs provided by the user. The platform introduces an overview of sustainability and renewable energy, with a focus on wind turbines. 
It features a cross-examination of the input parameters provided by the user for the SLM (Simple Load Model) of small wind turbines and produces an analysis according to the IEC 61400-2 standard. The analysis of the Simple Load Model encompasses calculations of fatigue loads on the blades and rotor shaft, yaw error loads on the blades, etc., for small wind turbine performance. Through its structured framework and adherence to the IEC standard, "Mywindturbine" aims to empower professionals, engineers, and intellectuals with the knowledge and tools necessary to contribute towards a sustainable energy future.
Keywords: small wind turbine, IEC 61400-2 standard, user interface, simple load model
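For a flavor of the kind of load calculation the Simple Load Model automates, the following sketch computes a steady centrifugal blade-root load from basic mechanics; this is an illustrative simplification with invented input values, not the exact equation set of IEC 61400-2:

```python
import math

def blade_centrifugal_force(blade_mass_kg, rotor_speed_rpm, cog_radius_m):
    """Steady centrifugal load F = m * omega^2 * r acting at the blade
    root, with the blade mass concentrated at its centre of gravity."""
    omega = rotor_speed_rpm * 2.0 * math.pi / 60.0  # rad/s
    return blade_mass_kg * omega ** 2 * cog_radius_m

# Hypothetical small turbine: 5 kg blade, centre of gravity 1 m from
# the axis, 300 rpm design speed
print(round(blade_centrifugal_force(5.0, 300.0, 1.0)))  # 4935 (newtons)
```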
Procedia PDF Downloads 63
603 The Antagonistic/Synergistic Effect of Probiotic Yeast Saccharomyces boulardii on Candida glabrata Adhesion
Authors: Zorica Tomičić, Ružica Tomičić, Peter Raspor
Abstract:
Growing resistance of the pathogenic yeast Candida glabrata to many classes of antifungal drugs has stimulated efforts to discover new agents to combat a rising number of invasive C. glabrata infections, which deserve a great deal of concern due to the high mortality rate in immunocompromised populations. One promising strategy is the use of probiotic microorganisms, which, when administered in adequate amounts, confer a health benefit. A selected number of probiotic organisms, Saccharomyces boulardii among them, have been tested as potential biotherapeutic agents. The aim of this study was to investigate the effect of the probiotic yeast S. boulardii on the adhesion of clinical isolates of C. glabrata at different temperatures, pH values, and in the presence of three clinically important antifungal drugs: fluconazole, itraconazole, and amphotericin B. The method used to assess adhesion was crystal violet staining. The selection of antimycotic concentrations used in the adhesion assay was based on minimum inhibitory concentrations (MICs) obtained by a preliminarily performed microdilution modification of the reference method for broth dilution antifungal susceptibility testing of yeasts (Clinical and Laboratory Standards Institute (CLSI) standard M27-A2). The results showed that despite the nonadhesiveness of S. boulardii cells, the probiotic yeast significantly suppressed the adhesion of C. glabrata strains. Moreover, at specific strain ratios, a slight stimulatory effect was observed in some C. glabrata strains, which highlights the importance of strain specificity and opens up further research interests. When environmental conditions are considered, temperature and pH significantly influenced co-culture adhesion of C. glabrata and S. boulardii. The adhesion of C. glabrata strains was reduced relatively equally across all tested temperatures (28°C, 37°C, 39°C, and 42°C) in the presence of S. boulardii cells, while the adhesion of a few C. glabrata strains was significantly stimulated at 28°C and suppressed at 42°C. Further, adhesion was highly dependent on pH, with the highest adherence at pH 4 and the lowest at pH 8.5. It was observed that S. boulardii did not manage to suppress the adhesion of C. glabrata strains at high pH. Antimycotics, on the other hand, had a greater impact, since S. boulardii failed to affect co-culture adhesion at higher antimycotic concentrations. As expected, exposure to various concentrations of amphotericin B significantly reduced the adherence ability of C. glabrata strains both in single culture and in co-culture with S. boulardii. Therefore, it can be speculated that S. boulardii could substitute for antimycotics at a range of concentrations and with specific strains. This would certainly change the view on the treatment of yeast infections in the future.
Keywords: adhesion, antimycotics, candida glabrata, saccharomyces boulardii
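Crystal violet adhesion assays of this kind are typically quantified as a percent reduction in absorbance relative to the pathogen cultured alone; a minimal sketch with hypothetical OD readings, not values from the study:

```python
def adhesion_inhibition_pct(od_pathogen_alone, od_coculture, od_blank=0.0):
    """Percent reduction in crystal-violet absorbance (a proxy for
    adhered biomass) in co-culture relative to the pathogen alone,
    after subtracting the blank well."""
    alone = od_pathogen_alone - od_blank
    co = od_coculture - od_blank
    return 100.0 * (alone - co) / alone

# Hypothetical OD570 readings: C. glabrata alone vs. with S. boulardii
print(adhesion_inhibition_pct(1.0, 0.5))  # 50.0
```

A negative value from this formula would correspond to the slight stimulatory effect observed for some strain ratios.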
Procedia PDF Downloads 68
602 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe
Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou
Abstract:
In recent years, the diversification and industrialization of drug-related crimes have posed significant threats to public health and safety globally. The widespread and increasingly younger demographics of drug users and the persistence of drug-impaired driving incidents underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes. It relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses. They require extensive, complex laboratory-based preprocessing, expensive equipment, and specialized personnel, and are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and Aggregation-Induced Emission (AIE) technology. Nucleic acid aptamers, selected artificially for their specific binding to target molecules and stable spatial structures, represent a new generation of biosensors following antibodies. Rapid advancements in AIE technology, particularly in tetraphenylethene-based luminogens, offer simplicity in synthesis and versatility in modification, making them ideal for fluorescence analysis. This work successfully synthesized, isolated, and purified an AIE molecule and constructed a probe comprising the AIE molecule, a nucleic acid aptamer, and an exonuclease for cocaine detection. The probe demonstrated significant changes in relative fluorescence intensity and selectivity towards cocaine over other drugs. Using 4-Butoxytriethylammonium Bromide Tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit, and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 minutes in aqueous solution and urine, with detection limits of 1.0 and 5.0 µmol/L, respectively. 
The probe maintained stability and interference resistance in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, provides a basis for rapid on-site cocaine detection, and shows potential for miniaturized testing setups.
Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine
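Detection limits such as the 1.0 and 5.0 µmol/L reported here are commonly estimated from a calibration curve using the ICH-style 3.3·σ/S convention; whether the authors used this exact method is not stated, so treat the following as an illustrative sketch with hypothetical inputs:

```python
def detection_limit(blank_sd, calibration_slope, factor=3.3):
    """ICH-style estimate: LOD ~= 3.3 * sigma(blank) / calibration slope.
    Use factor=10 for the limit of quantitation (LOQ)."""
    return factor * blank_sd / calibration_slope

# Hypothetical blank noise (fluorescence units) and calibration slope
# (fluorescence units per umol/L)
print(round(detection_limit(0.3, 1.0), 2))  # 0.99 (umol/L)
```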
Procedia PDF Downloads 64