Search results for: awake testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3065


305 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots

Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha

Abstract:

Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, do not allow immediate results or automation of measurements. Chemical sensors have displaced conventional analytical methods: sensors combine precision, sensitivity, fast response and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a converter, also possesses a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor device consists mainly of two elements: a sensitive element, where receptor-analyte recognition takes place, and a transducer element, which receives the signal and converts it into a measurable signal. Based on these two elements, biosensors can be classified in two ways: by the recognition element (e.g., immunosensor) or by the transducer (e.g., optical sensor). An optical sensor works by measuring quantitative changes in the parameters characterizing light radiation. The most often analyzed parameters include amplitude (intensity), frequency and polarization. In the direct method, changes in the optical properties of a compound that reacts with the biological material coated on the sensor are analyzed; in the indirect method, indicators are used whose optical properties change upon transformation of the tested species. The dyes most commonly used in this method are small molecules with an aromatic ring, such as rhodamine, fluorescent proteins, for example green fluorescent protein (GFP), or nanoparticles such as quantum dots (QDs). Quantum dots have, in comparison with organic dyes, much better photoluminescent properties, better bioavailability and chemical inertness. They are semiconductor nanocrystals 2-10 nm in size; this very limited number of atoms and the 'nano' size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter, occurring mainly in the brain and central nervous system of mammals. It is responsible for transmitting information about movement through the nervous system and plays an important role in learning and memory. Detection of dopamine is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for dopamine detection uses graphene quantum dots (GQDs). In such a sensor, dopamine molecules coat the GQD surface, and the fluorescence is quenched as a result of Förster Resonance Energy Transfer (FRET). The changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the dopamine concentration in the sample.
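
The concentration read-out described above is, in essence, a fluorescence-quenching calibration. A minimal sketch of how such a calibration could be evaluated, assuming a Stern-Volmer-type relation (F0/F = 1 + Ksv·C, an assumption not stated in the abstract) and purely hypothetical calibration data:

```python
import numpy as np

# Hypothetical calibration data: dopamine concentration (uM) vs. GQD fluorescence intensity
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])                     # known standards
intensity = np.array([1000.0, 910.0, 835.0, 715.0, 555.0, 385.0])   # measured F

f0 = intensity[0]                         # fluorescence without quencher
quench = f0 / intensity[1:] - 1.0         # Stern-Volmer transform: F0/F - 1 = Ksv * C
ksv = np.polyfit(conc[1:], quench, 1)[0]  # slope = Stern-Volmer constant (1/uM)

def dopamine_concentration(f_sample: float) -> float:
    """Estimate dopamine concentration of an unknown sample from its fluorescence."""
    return (f0 / f_sample - 1.0) / ksv

print(f"Ksv = {ksv:.3f} 1/uM, unknown sample ~ {dopamine_concentration(640.0):.2f} uM")
```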

Keywords: biosensor, dopamine, fluorescence, quantum dots

Procedia PDF Downloads 365
304 Study on Health Status and Health Promotion Models for Prevention of Cardiovascular Disease in Asylum Seekers at Asylum Seekers Center, Kupang-Indonesia

Authors: Era Dorihi Kale, Sabina Gero, Uly Agustine

Abstract:

Asylum seekers are people who come to other countries to seek asylum. They also carry the culture and health behaviors of their home country, which may be very different from those of the new country in which they currently live. This situation raises problems, including in the health sector. The approach taken must be culturally sensitive, one in which the culture and habits of the refugees' home area are valued, so that the health services provided can be well targeted. Some risk factors already present in this group are lack of activity, consumption of fast food, smoking, and quite high stress levels. Overall, these conditions increase the risk of cardiovascular disease. This research is a descriptive and experimental study. The purpose of this study is to identify health status and to develop a culturally sensitive health promotion model, particularly related to the risk of cardiovascular disease, for asylum seekers in the detention home in the city of Kupang. The research was carried out in three stages: stage 1 surveyed the health problems and cardiovascular disease risk of the asylum seekers, stage 2 developed a health promotion model, and stage 3 tested the health promotion model. There were 81 respondents involved in this study. The variables measured were health status, risk of cardiovascular disease, and the health promotion model. Data collection method: questionnaires were distributed to respondents to record health status anamnesis; then cardiovascular risk measurements were taken. After that, information needs were identified and a booklet on the prevention of cardiovascular disease was compiled. The compiled booklet was then translated into Farsi, after which it was tested. Respondent characteristics: respondents had lived in Indonesia for 4.38 years on average, the majority were male (90.1%), and most were aged 15-34 years (90.1%). Several diseases were often suffered by the asylum seekers, namely gastritis, headaches, diarrhea, acute respiratory infections, skin allergies, sore throat, cough, and depression. Regarding the risk of cardiovascular problems, 4 asylum seekers were at high risk, 6 at moderate risk, and 71 at low risk. This condition needs special attention because the number of people at risk is quite high for this age group of refugees, and it is closely related to the level of stress experienced by the refugees. The health promotion model that can be used is the transactional stress and coping model, using Farsi for oral information and English for written information. It is recommended that health practitioners who care for refugees always pay attention to cultural aspects (especially language) as well as the psychological condition of asylum seekers, to make health care and promotion easier to conduct. For further research, it is recommended to investigate the effect of psychological stress on the risk of cardiovascular disease in asylum seekers.

Keywords: asylum seekers, health status, cardiovascular disease, health promotion

Procedia PDF Downloads 105
303 Shale Gas Accumulation of Over-Mature Cambrian Niutitang Formation Shale in Structure-Complicated Area, Southeastern Margin of Upper Yangtze, China

Authors: Chao Yang, Jinchuan Zhang, Yongqiang Xiong

Abstract:

The Lower Cambrian Niutitang Formation shale (NFS), deposited in a marine deep-shelf environment in the Southeast Upper Yangtze (SUY), possesses an excellent source-rock basis for shale gas generation. However, it is over-mature and has undergone strong tectonic deformation, leading to much uncertainty about its gas-bearing potential. With emphasis on the shale gas enrichment of the NFS, analyses were made of the regional gas-bearing differences obtained from field gas-desorption testing of 18 geological survey wells across the study area. Results show that the NFS bears a low gas content of 0.2-2.5 m³/t, and the eastern region of the SUY is higher in gas content than the western region. Moreover, the methane fraction presents a similar regional differentiation, with the western region below 10 vol.% and the eastern region generally above 70 vol.%. From the analysis of geological theory, the following conclusions are drawn. Depositional environment determines the gas-enriching zones: in the western region, the Dengying Formation underlying the NFS in unconformity contact is mainly platform-facies dolomite with caves and thereby has poor gas-sealing ability, whereas the Laobao Formation underlying the NFS in the eastern region is a set of siliceous rocks of shelf-slope facies, which can effectively prevent the shale gas from escaping from the NFS. Tectonic conditions control the gas-enriching bands in the SUY, which is located in the fold zones formed by the thrust of the South China plate towards the Sichuan Basin. Compared with the western region, located in the trough-like folds, the eastern region at the fold-thrust belts was uplifted earlier and deformed weakly, resulting in a relatively lower maturity level and relatively slight tectonic deformation of the NFS. Faults determine whether shale gas can accumulate on a large scale: four deep and large normal faults in the study area cut through the Niutitang Formation down to the Sinian strata, directly causing a large spillover of natural gas in the adjacent areas. For the secondary faults developed within the shale formation, reverse faults generally have a positive influence on shale gas accumulation, while normal faults have the opposite influence. Overall, the shale gas enrichment targets of the NFS are the areas with a certain thickness of siliceous rocks at the base of the Niutitang Formation, near the margin of the paleo-uplift, and with less developed faults. These findings provide direction for shale gas exploration in South China, and also provide references for areas with similar geological conditions all over the world.

Keywords: over-mature marine shale, shale gas accumulation, structure-complicated area, Southeast Upper Yangtze

Procedia PDF Downloads 148
302 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches

Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys

Abstract:

Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performances, faster time to market and longer lifetime, newer materials and mixed build-ups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials precisely (laminates, electrolytic copper, …), in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated in this way due to the small thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. The methodology has been applied to one laminate used in hyperfrequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB were performed. Results show the major importance of the out-of-plane properties, and of their temperature dependency, for the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space and Cimulec, is acknowledged.
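
The inverse identification of the unknown resin properties can be illustrated with a toy sketch: a scalar homogenization model (here a simple inverse rule-of-mixtures placeholder, not the authors' combined analytical/numerical scheme) is inverted so that the predicted in-plane modulus matches the measured one. All names and values below are illustrative assumptions only.

```python
from scipy.optimize import brentq

E_fiber = 73.0e9      # glass fibre modulus [Pa], assumed
v_fiber = 0.45        # fibre volume fraction, assumed
E_measured = 24.0e9   # measured in-plane laminate modulus [Pa], assumed

def homogenized_modulus(e_resin: float) -> float:
    """Placeholder homogenization: inverse rule of mixtures for a resin-dominated response."""
    return 1.0 / (v_fiber / E_fiber + (1.0 - v_fiber) / e_resin)

# Inverse step: find the resin modulus that reproduces the measured laminate modulus
e_resin_identified = brentq(lambda e: homogenized_modulus(e) - E_measured, 1e8, 2e10)
print(f"Identified resin modulus ~ {e_resin_identified / 1e9:.2f} GPa")
```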

Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites

Procedia PDF Downloads 205
301 Simplified Modeling of Post-Soil Interaction for Roadside Safety Barriers

Authors: Charly Julien Nyobe, Eric Jacquelin, Denis Brizard, Alexy Mercier

Abstract:

The performance of roadside safety barriers depends largely on the dynamic interaction between post and soil. This interaction plays a key role in the response of barriers to crash testing. In the literature, soil-post interaction is modeled in crash test simulations using three approaches. Many researchers have initially used the finite element approach, in which the post is embedded in a continuum soil modelled by solid finite elements. This is the most comprehensive and detailed approach, employing a mesh-based continuum to model the soil's behavior and its interaction with the post. Although it takes all soil properties into account, it is very costly in terms of simulation time. In the second approach, all the points of the post located at a predefined depth are fixed. Although this approach reduces CPU time, it overestimates the soil-post stiffness. The third approach involves modeling the post as a beam supported by a set of nonlinear springs in the horizontal directions; for support in the vertical direction, the post is constrained at a node at ground level. This approach is less costly, but the literature does not provide a simple procedure to determine the constitutive law of the springs. The aim of this study is to propose a simple and low-cost procedure to obtain the constitutive law of the nonlinear springs that model the soil-post interaction. To achieve this objective, we first present a procedure to obtain the constitutive law of the nonlinear springs from the simulation of a soil compression test. The test consists in compressing the soil contained in a tank with a rigid solid, up to a vertical displacement of 200 mm. The resultant force exerted by the ground on the rigid solid and its vertical displacement are extracted, and a force-displacement curve is determined. The proposed procedure for replacing the soil with springs is then tested against a reference model. The reference model consists of a wooden post embedded in the ground and struck by an impactor. Two simplified models with springs are studied. In the first model, called the Kh-Kv model, the springs are attached to the post in both the horizontal and vertical directions. The second, Kh, model is the one described in the literature. The two simplified models are compared with the reference model according to several criteria: the displacement of a node located at the top of the post in the vertical and horizontal directions, the displacement of the post's center of rotation, and the impactor velocity. The results given by both simplified models are very close to the reference model results. The Kh-Kv model is slightly better than the Kh model; further, the former is more interesting than the latter as it involves fewer arbitrary conditions. The simplified models also reduce the simulation time by a factor of 4. The Kh-Kv model can therefore be used as a reliable tool to represent the soil-post interaction in future research and development of road safety barriers.
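
The spring identification described above boils down to turning the simulated force-displacement curve into a nonlinear spring law. A minimal sketch, with hypothetical curve data, of how such a curve could be tabulated and evaluated as a piecewise-linear constitutive law:

```python
import numpy as np

# Hypothetical force-displacement curve extracted from the soil compression simulation
displacement_mm = np.array([0.0, 25.0, 50.0, 100.0, 150.0, 200.0])
force_kN        = np.array([0.0,  8.0, 14.0,  22.0,  27.0,  30.0])

def spring_force(u_mm: float) -> float:
    """Piecewise-linear nonlinear spring law F(u) interpolated from the curve."""
    return float(np.interp(u_mm, displacement_mm, force_kN))

def tangent_stiffness(u_mm: float, du: float = 1.0) -> float:
    """Local tangent stiffness dF/du, useful when defining the spring elements."""
    return (spring_force(u_mm + du) - spring_force(u_mm - du)) / (2.0 * du)

print(spring_force(60.0), tangent_stiffness(60.0))  # force [kN] and stiffness [kN/mm] at 60 mm
```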

Keywords: crash tests, nonlinear springs, soil-post interaction modeling, constitutive law

Procedia PDF Downloads 32
300 Robotics Education Continuity from Diaper Age to Doctorate

Authors: Vesa Salminen, Esa Santakallio, Heikki Ruohomaa

Abstract:

Introduction: The city of Riihimäki has chosen robotics in well-being, services and industry as the main focus area of its ecosystem strategy. Robotics will be an important part of the everyday life of citizens and present in the working day of the average citizen and employee in the future. For that reason, the education system and education programs at all levels, from diaper age to doctorate, have been directed to support this ecosystem strategy. Goal: The objective of this activity has been to develop education continuity from diaper age to doctorate. The main target of the development activity is to create a unique robotics study entity that enables ongoing robotics studies from pre-primary education to university. The aim is also to attract students internationally and to supply a skilled workforce to the private sector, capable of meeting the challenges of the future. Methodology: Education institutions (high schools, secondary education, universities at all levels) across the wider Tavastia Province have gradually directed their education programs to support this goal. In parallel, applied research projects have been created to run proof-of-concept phases in regional real-environment field labs, testing technology opportunities and the digitalization of business processes through robotic solutions. Customer-oriented applied research projects offer students in robotics education learning environments in which to acquire new knowledge and content; they are also environments through which the education programs themselves adapt and co-evolve. New content and problem-based learning are used in future education modules. Major findings: A joint robotics education entity is being developed in cooperation with the city of Riihimäki (primary education), Syria Education (secondary education) and HAMK (bachelor and master education). The education modules have been developed to enable smooth transitioning from one institute to another. This article introduces a case study of the change in well-being education brought about by digitalization and robotics. Riihimäki's elderly citizens' service house, Riihikoti, has been working as a field lab for proof-of-concept phases testing technology opportunities. Following successful case studies, education programs at various levels have also been changing. Riihikoti has been developed as a physical learning environment for home care and robotics, investigating and developing a variety of digital devices and service opportunities and experimenting with and learning the use of the equipment. The environment enables the co-development of digital service capabilities in an authentic environment for all interested groups in transdisciplinary cooperation.

Keywords: ecosystem strategy, digitalization and robotics, education continuity, learning environment, transdisciplinary co-operation

Procedia PDF Downloads 176
299 Female Autism Spectrum Disorder and Understanding Rigid Repetitive Behaviors

Authors: Erin Micali, Katerina Tolstikova, Cheryl Maykel, Elizabeth Harwood

Abstract:

Female ASD is seldom studied separately from male ASD. Further, females with ASD are disproportionately underrepresented in the research, at a rate of 3:1 (male to female). As such, much of the current understanding of female rigid repetitive behaviors (RRBs) stems from research on male RRBs. This can be detrimental to understanding female ASD, because it largely discounts female camouflaging and the possibility that females present their autistic symptoms differently. Current literature suggests that females with ASD engage in fewer RRBs than males with ASD, and that when females do engage in RRBs, they are likely to engage in more subtle, less overt obsessions and repetitive behaviors than males. Method: The current study utilized a mixed-methods, cross-sectional research design to identify the type and frequency of RRBs that females with ASD engaged in. The researcher recruited only females for the present study, with the criteria that they be at least six years old and not have a co-occurring cognitive impairment. Results: The researcher collected previous testing data (Autism Diagnostic Interview-Revised (ADI-R), Child or Adolescent/Adult Sensory Profile-2, Autism/Empathy Quotient, Yale-Brown Obsessive Compulsive Checklist, Rigid Repetitive Behavior Checklist (evaluator-created list), and a demographic questionnaire) from 25 total participants. The participants' ages ranged from six to 52. The participants were 96% Caucasian and 4% Latin American. Qualitative analysis found that the current participant pool engaged in six RRB themes: repetitive behaviors, socially restrictive behaviors, repetitive speech, difficulty with transition, obsessive behaviors, and restricted interests. The current dataset engaged in socially restrictive behaviors and restricted interests most frequently. Within the main themes, 40 subthemes were isolated, defined, and analyzed. Further, preliminary quantitative analysis was run to determine whether age impacted camouflaging behaviors and the overall presentation of RRBs; within this dataset, this was not found. Further qualitative analysis will be run to determine whether this dataset engaged in more overt or subtle RRBs, to confirm or rebut previous research. The researcher intends to run SPSS analyses to determine whether there is a statistical difference between each RRB theme and the overall presentation. Secondly, each participant will be analyzed for presentation of RRBs, age, and previous diagnoses. Conclusion: The present study aimed to assist in diagnostic clarity. This was achieved by collecting data from a female-only participant pool across the lifespan. The current data aided in clarifying the types of RRBs engaged in. A limited sample size was a barrier in this study.

Keywords: autism spectrum disorder, camouflaging, rigid repetitive behaviors, gender disparity

Procedia PDF Downloads 145
298 Development and Validation of a Turbidimetric Bioassay to Determine the Potency of Ertapenem Sodium

Authors: Tahisa M. Pedroso, Hérida R. N. Salgado

Abstract:

The microbiological turbidimetric assay allows the determination of drug potency by measuring the turbidity (absorbance) caused by the inhibition of microorganisms by ertapenem sodium. Ertapenem sodium (ERTM), a synthetic antimicrobial agent of the carbapenem class, acts against Gram-negative, Gram-positive, aerobic and anaerobic microorganisms. Turbidimetric assays are described in the literature for some antibiotics, but this method is not described for ertapenem. The objective of the present study was to develop and validate a simple, sensitive, precise and accurate microbiological turbidimetric assay to quantify ertapenem sodium injectable as an alternative to the physicochemical methods described in the literature. Several preliminary tests were performed to choose the following parameters: Staphylococcus aureus ATCC 25923, IAL 1851, 8% inoculum, BHI culture medium, and aqueous ertapenem sodium solution. 10.0 mL of sterile BHI culture medium was distributed into 20 tubes. 0.2 mL of the standard and test solutions was added to tubes S1, S2 and S3, and T1, T2 and T3, respectively; 0.8 mL of inoculated culture medium was then transferred to each tube, according to the 3 x 3 parallel-lines assay design. The tubes were incubated in a Marconi MA 420 shaker at a temperature of 35.0 °C ± 2.0 °C for 4 hours. After this period, the growth of the microorganisms was inhibited by adding 0.5 mL of 12% formaldehyde solution to each tube. The absorbance was determined in a Quimis Q-798DRM spectrophotometer at a wavelength of 530 nm. An analytical curve was constructed to obtain the equation of the line by the least-squares method, and linearity and parallelism were assessed by ANOVA. The specificity of the method was proven by comparing the responses obtained for the standard and the finished product. The precision was checked by determining ertapenem sodium on three days. The accuracy was determined by a recovery test. The robustness was determined by comparing the results obtained when varying the wavelength, the brand of culture medium and the volume of culture medium in the tubes. Statistical analysis showed that there is no deviation from linearity in the analytical curves of the standard and test samples. The correlation coefficients were 0.9996 and 0.9998 for the standard and test samples, respectively. The specificity was confirmed by comparing the absorbance of the reference substance and the test samples. The values obtained for intra-day, inter-day and between-analyst precision were 1.25%, 0.26% and 0.15%, respectively. The amount of ertapenem sodium present in the samples analyzed, 99.87%, is consistent. The accuracy was proven by the recovery test, with a value of 98.20%. The parameters varied did not affect the analysis of ertapenem sodium, confirming the robustness of the method. The turbidimetric assay is more versatile, faster and easier to apply than the agar diffusion assay. The method is simple, rapid and accurate and can be used in the routine quality control analysis of formulations containing ertapenem sodium.
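
The analytical-curve step (absorbance versus standard concentration fitted by least squares, then interpolation of the test sample) can be sketched as follows; the dose levels and absorbances below are hypothetical placeholders, not the study's data:

```python
import numpy as np

# Hypothetical standard curve: ertapenem sodium concentration (ug/mL) vs. turbidity (absorbance at 530 nm)
conc = np.array([2.0, 4.0, 8.0])        # S1, S2, S3 dose levels
abs_std = np.array([0.62, 0.48, 0.31])  # growth (turbidity) decreases as dose increases

slope, intercept = np.polyfit(np.log10(conc), abs_std, 1)  # least-squares line on a log-dose scale

def estimated_concentration(abs_test: float) -> float:
    """Interpolate the test-sample concentration from its absorbance."""
    return 10 ** ((abs_test - intercept) / slope)

abs_test = 0.47                                               # mean absorbance of the test solution
potency_pct = 100 * estimated_concentration(abs_test) / 4.0   # vs. a nominal 4 ug/mL test dose
print(f"Estimated potency ~ {potency_pct:.1f} %")
```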

Keywords: ertapenem sodium, turbidimetric assay, quality control, validation

Procedia PDF Downloads 394
297 Towards an Environmental Knowledge System in Water Management

Authors: Mareike Dornhoefer, Madjid Fathi

Abstract:

Water supply and water quality are key problems of mankind at the moment and, due to the increasing population, in the future. Management disciplines like water, environment and quality management therefore need to interact closely to establish a high level of water quality and to guarantee water supply in all parts of the world. Groundwater remediation is one aspect of this process. From a knowledge management perspective, it is only possible to solve complex ecological or environmental problems if different factors, the expert knowledge of various stakeholders, and formal regulations regarding water, waste or chemical management are interconnected in the form of a knowledge base. In general, knowledge management focuses on the processes of gathering and representing existing and new knowledge in a way which allows for inference or deduction of knowledge, e.g., in a situation where a problem solution or decision support is required. A knowledge base is not a mere data repository, but a key element in a knowledge-based system, providing or allowing for inference mechanisms to deduce further knowledge from existing facts. In consequence, this knowledge provides decision support. This paper introduces an environmental knowledge system in water management. The proposed environmental knowledge system is part of a research concept called Green Knowledge Management. It applies semantic technologies or concepts such as ontologies or linked open data to interconnect different data and information sources about environmental aspects, in this case water quality, as well as background material enriching an established knowledge base. Examples of the aforementioned ecological or environmental factors threatening water quality are, among others, industrial pollution (e.g., leakage of chemicals), environmental changes (e.g., rise in temperature) or floods, where all kinds of waste are merged and transferred into natural water environments. Water quality is usually determined by measuring different indicators (e.g., chemical or biological), which are gathered through laboratory testing, continuous monitoring equipment or other measuring processes. During all of these processes, data are gathered and stored in different databases, while the knowledge base needs to be established by interconnecting the data of these different sources and enriching their semantics. Experts may add their knowledge or experience of previous incidents or influencing factors. In consequence, querying or inference mechanisms are applied for the deduction of coherence between indicators, predictive developments or environmental threats. Relevant processes or steps of action may be modeled in the form of a rule-based approach. Overall, the environmental knowledge system supports the interconnection of information and the addition of semantics to create environmental knowledge about the water environment, supply chain and quality. The proposed concept itself is a holistic approach, which links to associated disciplines like environmental and quality management. Quality indicators and quality management steps need to be considered, e.g., for the process and inference layers of the environmental knowledge system, thus integrating the aforementioned management disciplines in one water management application.
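
A minimal sketch (with invented indicator names and thresholds, standing in for the rule-based layer described) of how rules over the interconnected indicator data could deduce a water-quality threat and a suggested action:

```python
# Hypothetical fused facts from laboratory tests, monitoring equipment and expert input
facts = {"nitrate_mg_l": 62.0, "temperature_c": 24.5, "upstream_flood_alert": True}

# Rules: (condition over the facts, deduced knowledge) -- thresholds are illustrative only
rules = [
    (lambda f: f["nitrate_mg_l"] > 50.0, "threat: nitrate above drinking-water limit"),
    (lambda f: f["temperature_c"] > 22.0, "threat: elevated temperature, algal bloom risk"),
    (lambda f: f["upstream_flood_alert"], "action: increase downstream monitoring frequency"),
]

deduced = [conclusion for condition, conclusion in rules if condition(facts)]
print(deduced)  # decision support deduced from the knowledge base
```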

Keywords: water quality, environmental knowledge system, green knowledge management, semantic technologies, quality management

Procedia PDF Downloads 221
296 Analysis of Shrinkage Effect during Mercerization on Himalayan Nettle, Cotton and Cotton/Nettle Yarn Blends

Authors: Reena Aggarwal, Neha Kestwal

Abstract:

The Himalayan nettle (Girardinia diversifolia) has been used for centuries as a fibre and food source by Himalayan communities. Himalayan nettle is a natural cellulosic fibre that can be handled in the same way as other cellulosic fibres. The Uttarakhand Bamboo and Fibre Development Board, based in Uttarakhand, India, is working extensively with the nettle fibre to explore its potential for textile production in the region. The fibre is a potential resource for rural enterprise development in some high-altitude pockets of the state, and traditionally the plant fibre is used for making domestic products like ropes and sacks. Himalayan nettle is an unconventional natural fibre with functional characteristics of shrink resistance and a degree of pathogen and fire resistance, and it can blend well with other fibres. Most importantly, it generates mainly organic waste and leaves residues that are 100% biodegradable. The fabrics may potentially be reused or re-manufactured and can also be used as a source of cellulose feedstock for regenerated cellulosic products. Being naturally biodegradable, the fibre can be composted if required. Though many research activities and training efforts in different craft clusters of Uttarkashi, Chamoli and Bageshwar in Uttarakhand are directed towards fibre extraction and processing techniques, such as retting and degumming, very little has been done to analyse crucial properties of nettle fibre like shrinkage and wash fastness. These properties are crucial for obtaining the desired fibre quality for further yarn making and weaving, and for developing these fibres into fine saleable products. This research is therefore focused on field experiments on the shrinkage properties of cotton, nettle and cotton/nettle blended yarn samples. The objective of the study was to analyse the scope of the blended fibre for development into wearable fabrics. For the study, after conducting initial fibre length and fineness testing, cotton and nettle fibres were mixed in a 60:40 ratio and five varieties of yarn were spun in an open-end spinning mill, with yarn counts of 3s, 5s, 6s, 7s and 8s. Samples of 100% nettle and 100% cotton fibres in 8s count were also developed for the study. All six yarn varieties were subjected to shrinkage testing and the results were critically analysed as per ASTM method D2259. It was observed that 100% nettle has the least shrinkage, 3.36%, while pure cotton has a shrinkage of approximately 13.6%; yarns made of 100% cotton exhibit four times more shrinkage than 100% nettle. The results also show that the cotton/nettle blended yarns exhibit lower shrinkage than the 100% cotton yarn. It was thus concluded that as the proportion of nettle in the samples increases, the shrinkage decreases. These results are very important for the people of Uttarakhand who want to commercially exploit the abundant nettle fibre to generate sustainable employment.
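
The shrinkage figures quoted above follow from the usual before/after length comparison. A minimal sketch of that calculation (the skein lengths are invented for illustration; ASTM D2259 prescribes the full conditioning and measurement procedure):

```python
def shrinkage_percent(length_before_mm: float, length_after_mm: float) -> float:
    """Yarn shrinkage as a percentage of the original skein length."""
    return 100.0 * (length_before_mm - length_after_mm) / length_before_mm

samples = {  # hypothetical skein lengths before/after treatment
    "100% nettle (8s)": (500.0, 483.2),        # ~3.4 % shrinkage
    "100% cotton (8s)": (500.0, 432.0),        # ~13.6 % shrinkage
    "60:40 cotton/nettle (8s)": (500.0, 455.0),
}
for name, (before, after) in samples.items():
    print(f"{name}: {shrinkage_percent(before, after):.1f} % shrinkage")
```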

Keywords: Himalayan nettle, sustainable, shrinkage, blending

Procedia PDF Downloads 242
295 Prevalence of Positive Serology for Celiac Disease in Children With Autism Spectrum Disorder

Authors: A. Venkatakrishnan, M. Juneja, S. Kapoor

Abstract:

Background: Gastrointestinal dysfunction is an emerging comorbidity seen in autism and may further strengthen the association between autism and celiac disease. This is supported by increased rates (22-70%) of gastrointestinal symptoms like diarrhea, constipation, abdominal discomfort/pain, and gastrointestinal inflammation in children with ASD. The etiology of autism is still elusive. In addition to genetic factors, environmental factors such as toxin exposure and intrauterine exposure to certain teratogenic drugs have been proposed as possible contributing factors in the etiology of Autism Spectrum Disorder (ASD). In cognizance of reports of increased gut permeability and high rates of gastrointestinal symptoms noted in children with ASD, celiac disease has also been proposed as a possible etiological factor. Despite insufficient evidence regarding the benefit of restricted diets in autism, a gluten-free diet (GFD) has been promoted as an alternative treatment for ASD. This study attempts to discern any correlation between ASD and celiac disease. Objective: This cross-sectional study aims to determine the proportion of celiac disease in children with ASD. Methods: The study included 155 participants aged 2-12 years, diagnosed with ASD as per DSM-5, attending the child development center at a tertiary care hospital in Northern India. Those on a gluten-free diet or having other autoimmune conditions were excluded. A detailed proforma was filled in, which included sociodemographic details, history of gastrointestinal symptoms, anthropometry and systemic examination, and pertinent psychological testing was done using the Developmental Profile-3 (DP-3) for developmental quotient, the Childhood Autism Rating Scale-2 (CARS-2) for severity of ASD, the Vineland Adaptive Behavior Scales (VABS) for adaptive behavior, the Child Behavior Checklist (CBCL) for behavioral problems, and the Brief Autism Mealtime Behavior Inventory (BAMBI) for feeding problems. Screening for celiac disease was done by TTG-IgA levels, and total serum IgA levels were measured to exclude IgA deficiency. Those with a positive screen were further planned for HLA typing and endoscopic biopsy. Results: A total of 155 cases were included, out of which 5 had low IgA levels and were hence excluded from the study. The remaining 150 children had TTG levels below the upper limit of normal and normal total serum IgA levels. A history of gastrointestinal symptoms was present in 51 (34%) cases; abdominal pain was the most frequent complaint (16.6%), followed by constipation (12.6%), and diarrhea was seen in 8%. Gastrointestinal symptoms were significantly more common in children with ASD above 5 years of age (p = 0.006) and in those who were verbal (p < 0.001). There was no significant association between socio-demographic factors, anthropometric data, or severity of autism and gastrointestinal symptoms. Conclusion: None of the 150 patients with ASD had raised TTG levels; hence no association was found between ASD and celiac disease. There is no justification for routine screening for celiac disease in children with ASD. Further studies are warranted to evaluate the association of non-celiac gluten sensitivity with ASD and any role of a gluten-free diet in such patients.

Keywords: autism, celiac, gastrointestinal, gluten

Procedia PDF Downloads 122
294 Alternative Fuel Production from Sewage Sludge

Authors: Jaroslav Knapek, Kamila Vavrova, Tomas Kralik, Tereza Humesova

Abstract:

The treatment and disposal of sewage sludge is one of the most important and critical problems of wastewater treatment plants. Currently, 180 thousand tonnes of sludge dry matter are produced in the Czech Republic, which corresponds to approximately 17.8 kg of stabilized sludge dry matter per year per inhabitant of the Czech Republic. Because sewage sludge contains a large amount of substances that are not beneficial for human health, the conditions for sludge management will be significantly tightened in the Czech Republic from 2023. One of the tested methods of sludge disposal is the production of alternative fuel from sludge from sewage treatment plants and paper production. The paper presents an analysis of the economic efficiency of alternative fuel production from sludge and its use in a fluidized bed boiler with a nominal consumption of 5 t of fuel per hour. The evaluation methodology includes the entire logistics chain, from sludge extraction, through mechanical moisture reduction to about 40% and transport to the pelletizing line, to drying for pelleting and the pelleting itself. For the economic analysis of sludge pellet production, a time horizon of 10 years is chosen, corresponding to the expected lifetime of the critical components of the pelletizing line. The economic analysis of pelleting projects is based on a detailed analysis of reference pelleting technologies suitable for sludge pelleting. The analysis of the economic efficiency of the pellets is based on the simulation of the cash flows associated with the implementation of the project over its lifetime. For a given required return on invested capital, the price of the resulting product (in EUR/GJ or EUR/t) is sought such that the net present value of the project over its lifetime is zero. The investor then realizes a return on the investment equal to the discount rate used to calculate the net present value. The calculations take place in a real business environment (taxes, tax depreciation, inflation, etc.), and the inputs work with market prices. At the same time, the opportunity cost principle is respected: the disposal of waste as alternative fuel includes the saved costs of waste disposal. The methodology also accounts for the emission allowances saved due to the displacement of coal by the alternative (bio)fuel. Preliminary results of testing pellet production from sludge show that, after suitable modifications of the pelletizer, it is possible to produce pellets of sufficiently high quality from sludge. A mixture of sludge and paper waste has proved to be a more suitable material for pelleting. At the same time, preliminary results of the economic efficiency analysis show that, despite the relatively low calorific value of the fuel produced (about 10-11 MJ/kg), this sludge disposal method is economically competitive. This work has been supported by the Czech Technology Agency within the project TN01000048 Biorefining as circulation technology.
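
The pricing step described (find the product price at which the project's net present value is zero for a required return) can be sketched as a simple root-finding exercise; all cash-flow figures below are invented placeholders, not the study's inputs:

```python
from scipy.optimize import brentq

years = 10                  # project lifetime
discount_rate = 0.08        # required return on invested capital
investment = 1_500_000      # pelletizing line capex [EUR], assumed
annual_output_t = 8_000     # pellet production [t/year], assumed
annual_opex = 450_000       # operating cost [EUR/year], assumed
avoided_disposal = 200_000  # saved sludge-disposal cost [EUR/year], assumed

def npv(price_eur_per_t: float) -> float:
    """Net present value of the project for a given pellet price."""
    annual_cash = annual_output_t * price_eur_per_t + avoided_disposal - annual_opex
    return -investment + sum(annual_cash / (1 + discount_rate) ** y for y in range(1, years + 1))

break_even_price = brentq(npv, 0.0, 500.0)  # price [EUR/t] at which NPV = 0
print(f"Break-even pellet price ~ {break_even_price:.1f} EUR/t")
```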

Keywords: alternative fuel, economic analysis, pelleting, sewage sludge

Procedia PDF Downloads 136
293 Applying Miniaturized near Infrared Technology for Commingled and Microplastic Waste Analysis

Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero

Abstract:

Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension in the range of 1 µm to 1000 µm (= 1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized in industry are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a legitimate and authentic analytical technique to sample, analyze, and quantify MPs is still in the development and testing stages. Among characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and the ongoing miniaturization of these methods is on the way to revolutionizing the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab were investigated. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, the MicroNIR spectra were analysed through the application of multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA analysis, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built. For the microplastic analysis, the three most abundant polymers in the plastic litter, PE, PP and PS, were mechanically fragmented in the laboratory to micron size. Distinct blends of these three microplastics were prepared in line with a designed ternary composition plot. After the exploratory PCA analysis, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of microplastics in the mixtures. With a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether the sample is macro or micro, contaminated or not, coloured or not, with no sample pre-treatment. The technique can be utilized with larger sample volumes and even allows on-site evaluation, thereby satisfying the need for a high-throughput strategy.
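
A minimal sketch of the chemometric workflow described (exploratory PCA followed by PLS regression calibrated on 42 mixtures and tested on 21), using scikit-learn and synthetic spectra in place of the real MicroNIR data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for 63 MicroNIR spectra (125 wavelength channels) of PE/PP/PS mixtures
Y = rng.dirichlet(np.ones(3), size=63)             # ternary compositions (fractions of PE, PP, PS)
pure = rng.normal(size=(3, 125))                   # stand-in "pure polymer" spectra
X = Y @ pure + 0.01 * rng.normal(size=(63, 125))   # mixture spectra = weighted sum + noise

scores = PCA(n_components=2).fit_transform(X)      # exploratory view of trends in the spectra

X_cal, X_test, Y_cal, Y_test = train_test_split(X, Y, train_size=42, random_state=0)
pls = PLSRegression(n_components=3).fit(X_cal, Y_cal)         # calibration on 42 mixtures
rmse = np.sqrt(np.mean((pls.predict(X_test) - Y_test) ** 2))  # prediction of the 21 "unknowns"
print(f"PLSR RMSE on test compositions: {rmse:.3f}")
```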

Keywords: chemometrics, microNIR, microplastics, urban plastic waste

Procedia PDF Downloads 165
292 Use of Progressive Feedback for Improving Team Skills and Fair Marking of Group Tasks

Authors: Shaleeza Sohail

Abstract:

Self- and peer evaluations are among the main components of almost all group assignments and projects in higher education institutes. These evaluations give students an opportunity to better understand the learning outcomes of the assignment and/or project. A number of online systems have been developed for this purpose that provide automated assessment and feedback on students' contributions in a group environment based on self and peer evaluations. All these systems lack a progressive aspect to the assessment and feedback, which is the most crucial factor for ongoing improvement and lifelong learning. In addition, many assignments and projects are designed so that smaller or initial assessment components lead to a final assignment or project. In such cases, the evaluation and feedback may provide students insight into their performance as a group member for a particular component after submission; ideally, it should also create an opportunity to improve for the next assessment component. The Self and Peer Progressive Assessment and Feedback System encourages students to perform better in the next assessment by providing a comparative analysis of the individual's contribution score on an ongoing basis. Hence, students see the change in their own contribution scores over the complete project based on the smaller assessment components. The Self-Assessment Factor is calculated as an indicator of how close the student's self-perception of their own contribution is to the contribution perceived by the other members of the group. The Peer-Assessment Factor is calculated to compare the perceived contribution of one student with the average value for the group. Our system also provides a Group Coherence Factor, which shows collectively how group members contribute to the final submission. This feedback is provided to students and teachers to visualize the consistency of members' contributions as perceived by the group members. Teachers can use these factors to judge the individual contributions of the group members to the combined tasks and allocate marks/grades accordingly. This factor is shown to students for all groups undertaking the same assessment, so the group members can comparatively analyze the efficiency of their group against other groups. Our system gives instructors the flexibility to generate their own customized criteria for self and peer evaluations based on the requirements of the assignment. Students evaluate their own and other group members' contributions on a scale from significantly higher to significantly lower. Preliminary testing of the prototype system has been done with a set of predefined cases to explicitly show the relation of the system feedback factors to the case studies. The results show that such progressive feedback to students can be used to motivate self-improvement and enhance team skills. The comparative group coherence can promote a better understanding of group dynamics in order to improve team unity and the fair division of team tasks.
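
A minimal sketch of how the three feedback factors described could be computed from a matrix of contribution ratings; the exact formulas are not given in the abstract, so the deviation-based definitions below (on a 1-5 rating scale) are illustrative assumptions:

```python
import numpy as np

# ratings[i][j] = how member i rates member j's contribution (1 = significantly lower ... 5 = significantly higher)
ratings = np.array([
    [4, 3, 5],   # member 0 rates members 0, 1, 2
    [4, 4, 5],
    [3, 3, 4],
])
n = ratings.shape[0]
peer_mask = ~np.eye(n, dtype=bool)  # True wherever rater != ratee

for j in range(n):
    peers_view = ratings[:, j][peer_mask[:, j]].mean()    # how the others see member j
    self_view = ratings[j, j]
    saf = 1.0 - abs(self_view - peers_view) / 4.0          # Self-Assessment Factor: 1 = perfectly aligned
    paf = peers_view / ratings[peer_mask].mean()           # Peer-Assessment Factor: vs. group average
    print(f"member {j}: SAF={saf:.2f}, PAF={paf:.2f}")

group_coherence = 1.0 - ratings[peer_mask].std() / 4.0     # Group Coherence Factor: rating consistency
print(f"group coherence: {group_coherence:.2f}")
```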

Keywords: effective group work, improvement of team skills, progressive feedback, self and peer assessment system

Procedia PDF Downloads 191
291 Model Tests on Geogrid-Reinforced Sand-Filled Embankments with a Cover Layer under Cyclic Loading

Authors: Ma Yuan, Zhang Mengxi, Akbar Javadi, Chen Longqing

Abstract:

In the sand-filled embankment structure with a cover layer, the outside of the packing is treated with tipping clay modified with lime, and a geotextile is placed between the packing and the clay. The packing is usually river sand, and the improved clay protects the sand core against rainwater erosion. The sand-filled embankment with a cover layer presents practical problems such as high fill heights, construction restrictions, and steep slopes. Reinforcement can be applied to the sand-filled embankment with a cover layer to solve complicated problems such as irregular settlement caused by poor stability of the embankment. At present, research on the sand-filled embankment with a cover layer mainly focuses on sand properties, construction technology, and slope stability; there are few experimental studies, and the deformation characteristics and stability of reinforced sand-filled embankments need further study. In addition, experimental research considering cyclic load is relatively rare. A subgrade structure of geogrid-reinforced sand-filled embankment with a cover layer is proposed here. The mechanical characteristics, deformation properties, reinforcement behavior and ultimate bearing capacity of the embankment structure under cyclic loading were studied. In this structure, the geogrids in the sand and in the tipping soil pass through the geotextile, which is arranged continuously in sections so that the geogrids can cross horizontally. The Unsaturated/Saturated Soil Triaxial Test System of Geotechnical Consulting and Testing Systems (GCTS), USA, was modified to form the loading device for this test, and a strain collector was used to measure the deformation and earth pressure of the embankment. A series of cyclic loading model tests was conducted on the geogrid-reinforced sand-filled embankment with a cover layer for different numbers of reinforcement layers, reinforcement lengths and cover layer thicknesses. The settlement of the embankment, the normal cumulative deformation of the slope and the earth pressure were studied under the different conditions. Besides the cyclic loading model tests, model experiments on the embankment subjected to cyclic-static loading were carried out to analyze the ultimate bearing capacity under different loading. The experimental results showed that the vertical cumulative settlement under long-term cyclic loading increases as the number of reinforcement layers, the length of the reinforcement arrangement and the thickness of the tipping soil decrease. These three factors also influence the reduction of the normal deformation of the embankment slope. The earth pressure around the loading point is significantly affected by placing geogrid in the model embankment. After cyclic loading, the decline in the ultimate bearing capacity of the reinforced embankment is effectively reduced, in contrast to the unreinforced embankment.

Keywords: cyclic load, geogrid, reinforcement behavior, cumulative deformation, earth pressure

Procedia PDF Downloads 122
290 Mechanical Testing of Composite Materials for Monocoque Design in Formula Student Car

Authors: Erik Vassøy Olsen, Hirpa G. Lemu

Abstract:

Inspired by the Formula 1 competition, IMechE (Institution of Mechanical Engineers) and Formula SAE (Society of Automotive Engineers) organize annual competitions for university and college students worldwide to compete with a single-seat race car they have designed and built. The design of the chassis or frame is a key component of the competition because the weight and stiffness properties are directly related to the performance of the car and the safety of the driver. In addition, a reduced chassis weight has a direct influence on the design of other components in the car; among other things, it improves the power-to-weight ratio and the aerodynamic performance. As the power output of the engine or the battery installed in the car is limited to 80 kW, increasing the power-to-weight ratio demands reducing the weight of the chassis, which represents the major part of the weight of the car. In order to reduce the weight of the car, the ION Racing team from the University of Stavanger, Norway, opted for a monocoque design. To fulfil the above-mentioned chassis requirements, the monocoque design should provide sufficient torsional stiffness and absorb the impact energy in case of a possible collision. The study reported in this article is based on the requirements of the Formula Student competition. As part of this study, diverse mechanical tests were conducted to determine the mechanical properties and performance of the monocoque design. Following a comprehensive theoretical study of the mechanical properties of sandwich composite materials and the monocoque design requirements in the competition rules, several tests were conducted, including a 3-point bending test, a perimeter shear test and a test for absorbed energy. The test panels were made in-house with a size equivalent to the side impact zone of the monocoque, i.e. 275 mm x 500 mm, so that the results obtained from the tests would be representative. Different lay-ups of the test panels with identical core material and the same number of carbon fibre layers were tested and compared, and the influence of the core material thickness was also studied. Furthermore, analytical calculations and numerical analysis were conducted to check compliance with the stated rules for structural equivalency with steel grade SAE/AISI 1010. The test results were also compared with calculated results with respect to bending and torsional stiffness, energy absorption, buckling, etc. The obtained results demonstrate that the material composition and strength of the composite material selected for the monocoque design have structural properties equivalent to a welded frame and thus comply with the competition requirements. The developed analytical calculation algorithms and relations will be useful for future monocoque designs with different lay-ups and compositions.
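
For context, the stiffness comparison mentioned relies on standard sandwich-beam relations; below is a minimal sketch (classical thin-face sandwich theory with invented panel properties, not the team's actual layup data) of the mid-span deflection in a 3-point bending test:

```python
# Classical sandwich beam under 3-point bending: bending compliance + core shear compliance
P = 1000.0     # applied central load [N], assumed
L = 0.400      # support span [m], assumed
b = 0.275      # panel width [m] (side-impact-zone panel dimension)
t_f = 0.0005   # single carbon-fibre face thickness [m], assumed
c = 0.020      # core thickness [m], assumed
E_f = 60e9     # face laminate modulus [Pa], assumed
G_c = 50e6     # core shear modulus [Pa], assumed

d = c + t_f                       # distance between face centroids
D = E_f * t_f * d**2 * b / 2.0    # flexural rigidity (thin faces, weak core)
S = b * d * G_c                   # shear stiffness of the core

deflection = P * L**3 / (48.0 * D) + P * L / (4.0 * S)
print(f"Mid-span deflection ~ {deflection * 1000:.2f} mm")
```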

Keywords: composite material, Formula student, ION racing, monocoque design, structural equivalence

Procedia PDF Downloads 504
289 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis

Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan

Abstract:

The immunogenic responses of the lung towards the fungus Aspergillus fumigatus may range from invasive aspergillosis in the immunocompromised, to a fungal ball or infection within a lung cavity in those with structural lung lesions, to allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. Consensus guidelines have established criteria for the diagnosis of ABPA, but uncertainty remains about the serologic cut-off values that would increase the diagnostic specificity of ABPA. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia ( > 500) over the last 3 years who underwent all serologic tests to exclude ABPA. Total IgE, specific IgE and specific IgG levels against Aspergillus fumigatus were measured using the ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L and positive specific IgE to Aspergillus fumigatus or skin test positivity; with ≥ 2 of peripheral eosinophilia, positive specific IgG to Aspergillus fumigatus and consistent radiographic opacities) were used in the clinical workup for the final diagnosis of ABPA. Patients were divided into 3 groups: definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign patients to any of the groups. Of the 80 patients (48 males, 32 females; mean age 53.9 years ± SD 15.8) selected for the analysis, 30 had positive specific IgE against Aspergillus fumigatus (37.5%). 13 patients fulfilled the modified ISHAM working group 2013 criteria for ABPA ('definite'), while 15 patients were 'possible' ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis. Median total IgE levels of patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L, respectively (not significant), while median specific IgE to Aspergillus fumigatus, at 4.35 kUA/L and 1.47 kUA/L respectively, differed significantly (comparison of standard deviations, F-statistic 3.2267, significance level p = 0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). The mean level of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L ( ± SD 54.84, 95% CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L, respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001); this was confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma are sensitized to Aspergillus fumigatus in this part of India. A higher cut-off value of Gm3 ≥ 80 mgA/L provides higher serologic specificity for definite ABPA. Long-term studies would provide more information on whether those patients with 'possible' ABPA and positive Gm3 later develop clear ABPA and differ from the Gm3-negative group in this respect. Serologic testing with clearly defined cut-offs is a valuable adjunct in the diagnosis of ABPA.

Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level

Procedia PDF Downloads 211
288 ChatGPT Performs at the Level of a Third-Year Orthopaedic Surgery Resident on the Orthopaedic In-training Examination

Authors: Diane Ghanem, Oscar Covarrubias, Michael Raad, Dawn LaPorte, Babar Shafiq

Abstract:

Introduction: Standardized exams have long been considered a cornerstone in measuring cognitive competency and academic achievement. Their fixed nature and predetermined scoring methods offer a consistent yardstick for gauging intellectual acumen across diverse demographics. Consequently, the performance of artificial intelligence (AI) in this context presents a rich yet unexplored terrain for quantifying AI's understanding of complex cognitive tasks and simulating human-like problem-solving skills. Publicly available AI language models such as ChatGPT have demonstrated utility in text generation and even problem-solving when provided with clear instructions. Amidst this transformative shift, the aim of this study is to assess ChatGPT's performance on the Orthopaedic In-Training Examination (OITE). Methods: All 213 OITE 2021 web-based questions were retrieved from the AAOS-ResStudy website. Two independent reviewers copied and pasted the questions and response options into ChatGPT Plus (version 4.0) and recorded the generated answers. All media-containing questions were flagged and carefully examined. Twelve OITE media-containing questions that relied purely on images (clinical pictures, radiographs, MRIs, CT scans) and could not be rationalized from the clinical presentation were excluded. Cohen's kappa coefficient was used to examine the agreement of the ChatGPT-generated responses between reviewers. Descriptive statistics were used to summarize the performance (% correct) of ChatGPT Plus. The 2021 norm table was used to compare ChatGPT Plus' performance on the OITE to that of national orthopaedic surgery residents in the same year. Results: A total of 201 questions were evaluated by ChatGPT Plus. Excellent agreement was observed between raters for the 201 ChatGPT-generated responses, with a Cohen's kappa coefficient of 0.947. 45.8% (92/201) were media-containing questions. ChatGPT had an overall score of 61.2% (123/201); its score on non-media questions was 64.2% (70/109). When compared to the performance of all national orthopaedic surgery residents in 2021, ChatGPT Plus performed at the level of an average PGY3. Discussion: ChatGPT Plus is able to pass the OITE with a satisfactory overall score of 61.2%, ranking at the level of third-year orthopaedic surgery residents. More importantly, it provided logical reasoning and justifications that may help residents grasp evidence-based information and improve their understanding of OITE cases and general orthopaedic principles. With further improvements, AI language models such as ChatGPT may become valuable interactive learning tools in resident education, although further studies are needed to examine their efficacy and impact on long-term learning and OITE/ABOS performance.
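
Inter-rater agreement between the two reviewers' recorded ChatGPT answers was summarized with Cohen's kappa; a minimal sketch of that computation (with made-up answer lists, not the study data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical answer choices recorded by the two independent reviewers for the same questions
reviewer_1 = ["A", "C", "B", "D", "A", "B", "C", "C", "D", "A"]
reviewer_2 = ["A", "C", "B", "D", "A", "B", "C", "B", "D", "A"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa = {kappa:.3f}")  # values near 1 indicate excellent agreement
```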

Keywords: artificial intelligence, ChatGPT, orthopaedic in-training examination, OITE, orthopedic surgery, standardized testing

Procedia PDF Downloads 92
287 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around the aggregates. Adopting these complex structures and material properties in numerical simulation leads to a better understanding and design of concrete. In this work, mesoscale models of concrete have been prepared from X-ray computerized tomography (CT) images. These images are converted into computer models and numerically simulated using commercially available finite element software. The mesoscale models are simulated under the influence of compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of the concrete cube consists of a series of two-dimensional slices. A total of 49 slices was obtained from a 150 mm cube, giving an interval of approximately 3 mm between slices. Because CT scanning is non-destructive, the same cube can be scanned and later tested in compression in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. Each digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified from pixel values between 0 and 255. A pixel matrix is created for modeling of mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1 and 3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated depending on the option given. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacement, stresses, and damage are evaluated by ABAQUS after importing the input file. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ thickness on the load-carrying capacity, stress-strain response and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
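
A minimal sketch of the pixel-labelling step described above is given below, assuming the slices are available as image files and that the darkest pixels correspond to voids; the thresholds are illustrative, not the study's calibrated values.

```python
# Minimal sketch of the pixel labelling described above (assumed thresholds):
# an RGB CT slice is converted to grayscale (0-255) and normalized to a 0-9 scale,
# with 0 = voids, 1-3 = aggregate/mortar boundary, 4-6 = mortar, 7-9 = aggregates.
import numpy as np
from PIL import Image

def label_slice(path, void_max=20):
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)  # grayscale 0-255
    labels = np.floor(gray / 256.0 * 10).astype(int)               # map to 0-9 scale
    labels[gray <= void_max] = 0                                    # darkest pixels -> voids
    return labels

# Example: count the constituents in one slice (file name is a placeholder)
labels = label_slice("slice_01.png")
print("voids:", np.sum(labels == 0),
      "mortar:", np.sum((labels >= 4) & (labels <= 6)),
      "aggregate:", np.sum(labels >= 7))
```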

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 241
286 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology

Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov

Abstract:

The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on a nondestructive testing method that analyzes the parameters of wear particles (tribodiagnostics). The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials and lubricants based on wear-process data, in order to increase the service life and ensure the safe operation of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. When the engine is running, oil samples are taken and the state of the friction surfaces is evaluated from the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, size ratio and number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin. Such morphological analysis of wear particles has been introduced into the condition-monitoring and diagnostics routine of various aircraft engines, including the gas turbine engine, because the wear type characteristic of the central drive and the drive box is surface fatigue wear: the onset of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size and, subsequently, of flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-aided classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the engine drive box during operation, the wear kinetics were studied. From the results of the study, and by comparing the tribodiagnostic criteria, wear-state ratings and the statistics of the morphological analysis, norms for the normal operating regime were developed. The study made it possible to define wear-state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of an increased wear mode and, accordingly, for preventing engine failures in flight.
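
As a purely hypothetical illustration of how such a particle-based rating might be computed (the size thresholds and scoring rule below are assumptions, not the authors' scheme), counted particles can be classified by size and shape and the share of large flocculent particles mapped onto a risk score.

```python
# Hypothetical sketch, not the authors' actual rating scheme: classify counted wear
# particles by size and shape and map the share of large flocculent particles onto
# a 0-10 score for the likelihood of an increased wear mode.
def classify_particle(diameter_um, shape):
    if shape == "spherical" and diameter_um <= 10:
        return "fatigue onset"
    if shape == "flocculent" and 20 <= diameter_um <= 200:
        return "advanced fatigue wear"
    return "normal rubbing wear"

def wear_score(particles):
    # particles: list of (diameter_um, shape) tuples from the ferrogram count
    advanced = sum(classify_particle(d, s) == "advanced fatigue wear" for d, s in particles)
    return min(10, round(10 * advanced / max(len(particles), 1)))

sample = [(5, "spherical"), (8, "spherical"), (60, "flocculent"), (3, "normal")]
print("wear risk score:", wear_score(sample))
```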

Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle

Procedia PDF Downloads 261
285 The Photovoltaic Panel at End of Life: Experimental Study of Metals Release

Authors: M. Tammaro, S. Manzo, J. Rimauro, A. Salluzzo, S. Schiavo

Abstract:

Solar photovoltaic (PV) modules are considered to have a negligible environmental impact compared to fossil energy. Nevertheless, their waste management and the corresponding potential environmental hazard also need to be considered. The case of the photovoltaic panel is unique because the time lag from manufacturing to decommissioning as waste is usually 25-30 years. The environmental hazard associated with the end of life of PV panels has been largely related to their metal content. The principal concern regards the presence of heavy metals such as Cd in thin film (TF) modules or Pb and Cr in crystalline silicon (c-Si) panels. At the end of life of PV panels, these dangerous substances could be released into the environment if special requirements for their disposal are not adopted. Nevertheless, only a few experimental studies on metal emissions from crystalline silicon/thin film panels and the corresponding environmental effects are available in the literature. As part of a study funded by the Italian national consortium for waste collection and recycling (COBAT), the present work aimed to analyze experimentally the potential release into the environment of hazardous elements, particularly metals, from PV waste. In this paper, for the first time, eighteen releasable metals from a large number of photovoltaic panels, both c-Si and TF, manufactured over the last 30 years were investigated, together with the environmental effects assessed by a battery of ecotoxicological tests. Leaching tests were conducted on crushed samples of the PV modules according to the Italian and European standard procedures for hazard assessment of granular waste and sludge. The sample material was shaken for 24 hours in HDPE bottles with an overhead mixer (Rotax 6.8, VELP) at indoor temperature, using pure water (18 MΩ resistivity) as the leaching solution. The liquid-to-solid ratio was 10 (L/S=10, i.e. 10 liters of water per kg of solid). The ecotoxicological tests were performed in the subsequent 24 hours. A battery of toxicity tests with bacteria (Vibrio fischeri), algae (Pseudokirchneriella subcapitata) and crustaceans (Daphnia magna) was carried out on PV panel leachates obtained as described above and immediately stored in the dark at 4°C until testing (within the next 24 hours). To assess the actual pollution load, a comparison with the current European and Italian benchmark limits was performed. The trend of the leachable metal amounts in relation to manufacturing year was then highlighted in order to assess the environmental sustainability of PV technology over time. The experimental results were very heterogeneous and show that photovoltaic panels could represent an environmental hazard: the amounts of some hazardous metals (Pb, Cr, Cd, Ni), for both c-Si and TF, exceeded the legal limits, a clear indication of the potential environmental risk of photovoltaic panels as waste without proper management.
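
The limit comparison described above amounts to checking each measured leachate concentration against a regulatory threshold; a hedged sketch is shown below, with placeholder values rather than the actual Italian/European limits or the study's measurements.

```python
# Illustrative sketch only: compare measured leachate concentrations (mg/L) with
# regulatory thresholds. The values below are placeholders, not the actual
# Italian/European limits or the study's results.
measured = {"Pb": 1.2, "Cr": 0.8, "Cd": 0.03, "Ni": 0.6}   # hypothetical leachate results
limits   = {"Pb": 0.5, "Cr": 0.5, "Cd": 0.02, "Ni": 0.5}   # placeholder legal limits

for metal, value in measured.items():
    status = "EXCEEDS limit" if value > limits[metal] else "within limit"
    print(f"{metal}: {value} mg/L ({status}, limit {limits[metal]} mg/L)")
```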

Keywords: photovoltaic panel, environment, ecotoxicity, metals emission

Procedia PDF Downloads 260
284 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is valuable for tracking recovery progress, preventing potential injury and making long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In the current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients’ progress. Unlike the traditional assessment, in this paper we present a deep learning based facial recognition algorithm for accurate, comprehensive and trackable assessment. Based on the results of our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a designed landmark detection method. The system also includes the function of grading and correcting the clients’ form during exercise. Experienced coaches and personal trainers can tell a client’s limits from facial expressions and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people’s facial expressions to differentiate challenge levels for clients. It uses landmark detection to capture subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, the distal mobility of the glenohumeral joint, and their effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself does not require historical data for an individual client, but a client’s historical data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise planning, execution, progress tracking, and performance.
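
A CNN of the kind the abstract describes, classifying a face crop into perceived-difficulty levels, might look like the following Keras sketch; the architecture, input size, and three-level class scheme are assumptions for illustration, not the authors' model.

```python
# Sketch of a CNN for classifying a face image into perceived-difficulty levels.
# Architecture, input size and number of classes are assumptions, not the study's model.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LEVELS = 3  # assumed: easy / moderate / at-limit

model = models.Sequential([
    layers.Input(shape=(96, 96, 1)),            # grayscale face crop
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_LEVELS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(face_images, difficulty_labels, epochs=10)  # training data not shown here
```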

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 135
283 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have some drawbacks. For instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. To do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models. We use correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles and compare them with several existing models in the literature to forecast the storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; thus, we develop a statistical platform to compare the performance of various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
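
The correlation-weighted combination described above can be sketched as follows; the observations, member forecasts, and weights are illustrative placeholders (in practice the weights would be estimated on a training period rather than on the verification data).

```python
# Minimal sketch of a correlation-weighted ensemble versus a simple average.
# All surge values are illustrative placeholders, not NYHOPS data.
import numpy as np

obs = np.array([0.4, 0.9, 1.6, 2.1, 1.2])                  # observed surge (m), hypothetical
forecasts = np.array([[0.5, 1.0, 1.5, 2.0, 1.1],            # model A
                      [0.3, 0.7, 1.8, 2.4, 1.4],            # model B
                      [0.6, 1.1, 1.3, 1.7, 1.0]])           # model C

corr = np.array([np.corrcoef(f, obs)[0, 1] for f in forecasts])
weights = corr / corr.sum()                                  # correlation-based weights
weighted_ens = weights @ forecasts
simple_avg = forecasts.mean(axis=0)

rmse = lambda pred: np.sqrt(np.mean((pred - obs) ** 2))
print(f"weighted ensemble RMSE: {rmse(weighted_ens):.3f} m")
print(f"simple average RMSE:    {rmse(simple_avg):.3f} m")
```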

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 309
282 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which, in most cases, conflicts with the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique reduces the time needed to establish test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
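
In outline, the multi-label mapping from test-step text to automation components, evaluated with Subset Accuracy (exact-match ratio), could look like the following scikit-learn sketch; the step descriptions, component names, and classifier choice are hypothetical, not the authors' pipeline.

```python
# Hedged sketch of the multi-label idea (names and data are hypothetical):
# map test-step descriptions to one or more test automation components and
# report Subset Accuracy (exact-match ratio).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score

steps = ["open ignition and check status",
         "send CAN message to door ECU",
         "open ignition and send CAN message"]
components = [{"IgnitionDriver"},
              {"CanBusDriver"},
              {"IgnitionDriver", "CanBusDriver"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)          # multi-label indicator matrix
X = TfidfVectorizer().fit_transform(steps) # text features

clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
Y_pred = clf.predict(X)
# accuracy_score on indicator matrices is the exact-match (Subset Accuracy) metric
print("Subset Accuracy:", accuracy_score(Y, Y_pred))
```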

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 133
281 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: the Assimilation of Industry Certifications within a Business Program Curriculum

Authors: Thomas J. Bell III

Abstract:

This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information System program at Texas Wesleyan University. Stakeholder value is obtained from increased job marketability and critical thinking skills that create employment-ready graduates. This paper views value as first developing the capability to earn an industry-recognized certification, which gives the student greater job placement compatibility while allowing the use of critical thinking skills in a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector, and without a pioneering curriculum that better prepares students for an ever-changing employment market, a program's educational value is open to question. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into teaching pedagogy and course curricula. This study will examine the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard, as a strategic management tool, may provide insight for prioritizing resources and making the decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators. The research methodology will consist of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factor(s) contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify the employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to determine whether specific correlations exist between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical teaching techniques and curricula that positively contribute to certification credentialing success. Should these industry-certified students land industry-related jobs that correlate with their certification credential, stakeholder value has arguably been realized.
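
As a purely illustrative example of how a balanced-scorecard roll-up could be computed across the four dimensions named above (the scores and weights below are hypothetical, not values from the study):

```python
# Illustrative sketch only: a weighted balanced-scorecard roll-up across the four
# performance dimensions. Scores and weights are hypothetical placeholders.
dimensions = {"financial": 3.8, "customer": 4.2, "internal_process": 3.5, "innovation": 4.0}
weights    = {"financial": 0.25, "customer": 0.30, "internal_process": 0.20, "innovation": 0.25}

overall = sum(dimensions[d] * weights[d] for d in dimensions)
print(f"stakeholder-value score: {overall:.2f} / 5")
```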

Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation

Procedia PDF Downloads 108
280 Microbial Contamination of Cell Phones of Health Care Workers: Case Study in Mampong Municipal Government Hospital, Ghana

Authors: Francis Gyapong, Denis Yar

Abstract:

Cell phones have become an indispensable tool in hospital settings. They are used in hospitals without restrictions, regardless of their unknown microbial load. However, indiscriminately used mobile devices, especially at health facilities, can act as vehicles for transmitting pathogenic bacteria and other microorganisms. These potential pathogens become exogenous sources of infection for patients and are also a potential health hazard for the users themselves and their family members. This is a growing problem in many health care institutions. Innovations in mobile communication have led to better patient care in diabetes and asthma and to increased vaccine uptake via SMS. Notwithstanding, cell phones can be a major potential source of nosocomial infections. Many studies have reported heavy microbial contamination of cell phones among healthcare workers and communities. However, few studies have been reported in our region on bacterial contamination of cell phones among healthcare workers. This study assessed microbial contamination of cell phones of health care workers (HCWs) at the Mampong Municipal Government Hospital (MMGH), Ghana. A cross-sectional design was used to characterize the bacterial microflora on cell phones of HCWs at the MMGH. A total of thirty-five (35) swab samples of cell phones of HCWs at the Laboratory, Dental Unit, Children’s Ward, Theatre and Male ward were randomly collected for laboratory examination. A suspension of each swab sample was streaked on blood and MacConkey agar and incubated at 37℃ for 48 hours. Bacterial isolates were identified using appropriate laboratory and biochemical tests. The Kirby-Bauer disc diffusion method was used to determine the antimicrobial sensitivity of the isolates. Data analysis was performed using SPSS version 16. All mobile phones sampled were contaminated with one or more bacterial isolates. Cell phones from the Male ward, Dental Unit, Laboratory, Theatre and Children’s ward had at least three different bacterial isolates: 85.7%, 71.4%, 57.1% and 28.6% (the latter for both the Theatre and the Children’s ward), respectively. The bacterial contaminants identified were Staphylococcus epidermidis (37%), Staphylococcus aureus (26%), E. coli (20%), Bacillus spp. (11%) and Klebsiella spp. (6%). Except for the Children’s ward, E. coli was isolated at all study sites and was predominant (42.9%) at the Dental Unit, while Klebsiella spp. (28.6%) was isolated only at the Children’s ward. Antibiotic sensitivity testing of Staphylococcus aureus indicated that the isolates were highly sensitive to cephalexin (89%), tetracycline (80%), gentamycin (75%), lincomycin (70%) and ciprofloxacin (67%) and highly resistant to ampicillin (75%). Some of the bacteria isolated are potential pathogens, and their presence on the cell phones of HCWs could lead to transmission to patients and their families. Hence, strict hand washing before and after every contact with a patient or a phone should be enforced to reduce the risk of nosocomial infections.

Keywords: mobile phones, bacterial contamination, patients, MMGH

Procedia PDF Downloads 104
279 Testing Nitrogen and Iron Based Compounds as an Environmentally Safer Alternative to Control Broadleaf Weeds in Turf

Authors: Simran Gill, Samuel Bartels

Abstract:

Turfgrass is an important component of urban and rural lawns and landscapes. However, broadleaf weeds such as dandelion (Taraxacum officinale) and white clover (Trifolium repens) pose major challenges to the health and aesthetics of turfgrass fields. Chemical weed control methods, such as 2,4-D weedicides, have been widely deployed; however, their safety and environmental impacts are often debated. Alternative, environmentally friendly control methods have been considered, but experimental tests of their effectiveness have been limited. This study investigates the use and effectiveness of nitrogen and iron compounds as nutrient-management methods of weed control. The first phase of a two-phase experiment was conducted under controlled greenhouse conditions on a blend of cool-season turfgrasses grown in plastic containers; the blend included perennial ryegrass (Lolium perenne), Kentucky bluegrass (Poa pratensis) and creeping red fescue (Festuca rubra). It involved the application of nitrogen (urea and ammonium sulphate) and iron (chelated iron and iron sulphate) compounds and their combinations (urea × chelated iron, urea × iron sulphate, ammonium sulphate × chelated iron, ammonium sulphate × iron sulphate), contrasted with a chemical 2,4-D weedicide and a control (no application) treatment. There were three replicates of each treatment, resulting in a total of 30 experimental units. The parameters assessed during weekly data collection included a visual quality rating of weeds (scale of 0-9), number of leaves, longest leaf span, number of weeds, chlorophyll fluorescence of the grass, a visual quality rating of the grass (0-9), and the weight of dried grass clippings. The results of the experiment, conducted over a period of 12 weeks with three applications at 4-week intervals, showed that the combination of ammonium sulphate and iron sulphate appeared to be the most effective in halting the growth and establishment of dandelions and clovers while also improving turf health. The second phase of the experiment, which involved the ammonium sulphate × iron sulphate, weedicide, and control treatments, was conducted outdoors on already established perennial turf with weeds under natural field conditions. After 12 weeks of observation, the results were comparable among the treatments in terms of weed control, but the ammonium sulphate × iron sulphate treatment fared much better in terms of the improved visual quality of the turf and other quality ratings. Preliminary results from these experiments thus suggest that nutrient management based on nitrogen and iron compounds could be a useful, environmentally friendly alternative for controlling broadleaf weeds and improving the health and quality of turfgrass.
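
The factorial nitrogen × iron structure lends itself to a two-way analysis; a hedged sketch is shown below (the weed counts and the choice of a two-way ANOVA are illustrative assumptions, not the study's reported analysis).

```python
# Illustrative sketch (assumed analysis): a two-way nitrogen x iron layout with
# three replicates per combination, analysed by ANOVA. Weed counts are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "nitrogen": ["urea"] * 6 + ["ammonium_sulphate"] * 6,
    "iron": (["chelated"] * 3 + ["sulphate"] * 3) * 2,
    "weeds": [14, 12, 15, 10, 9, 11, 8, 9, 7, 3, 2, 4],   # weeds per container, hypothetical
})

model = smf.ols("weeds ~ C(nitrogen) * C(iron)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```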

Keywords: broadleaf weeds, nitrogen, iron, turfgrass

Procedia PDF Downloads 75
278 Inhibition of Food Borne Pathogens by Bacteriocinogenic Enterococcus Strains

Authors: Neha Farid

Abstract:

Due to the abuse of antimicrobial medications in animal feed, the occurrence of multi-drug resistant (MDR) pathogens in foods is currently a growing public health concern on a global scale. MDR pathogens have the potential to penetrate the food chain, posing a serious risk to both consumers and animals. Foodborne pathogens are biological agents that cause disease in the host upon ingestion. The major reservoirs of foodborne pathogens include food-producing animals such as cows, pigs, goats, sheep and deer, whose intestines harbour high densities of several different foodborne pathogens. Bacterial pathogens are the main cause of foodborne disease in humans; almost 66% of the cases of foodborne illness reported each year are caused by bacterial pathogens. When ingested, these pathogens survive and reproduce or form different kinds of toxins inside the host, causing severe infections. The genus Listeria consists of gram-positive, rod-shaped, non-spore-forming bacteria. The disease caused by Listeria monocytogenes is listeriosis or gastroenteritis, which induces fever, vomiting, and severe diarrhea in the affected person. Campylobacter jejuni is a gram-negative, curved-rod-shaped bacterium causing foodborne illness. The major sources of Campylobacter jejuni are livestock and poultry; chicken in particular is highly colonized with Campylobacter jejuni. Serious public health concerns include the widespread growth of bacteria that are resistant to antibiotics and the slowdown in the discovery of new classes of medicines. The objective of this study is to evaluate the antibacterial activity of certain broad-range antibiotics and of bacteriocins from specific Enterococcus faecium strains, in order to block microbial contamination pathways and safeguard food by lowering food deterioration, contamination, and foodborne illness. The food pathogens were isolated from various dairy products and meat samples. The isolates were tested for the presence of Listeria and Campylobacter by gram staining and biochemical testing. They were further sub-cultured on selective media enriched with growth supplements for Listeria and Campylobacter. All six strains of Listeria and Campylobacter were tested against ten antibiotics. Campylobacter strains showed resistance against all the antibiotics, whereas Listeria was found to be resistant only to nalidixic acid and erythromycin. Further, the strains were tested against the two bacteriocins isolated from Enterococcus faecium. The bacteriocins showed better antimicrobial activity against the food pathogens and can be used as potential antimicrobials for food preservation. Thus, the study concluded that natural antimicrobials could be used as alternatives to synthetic antimicrobials to overcome the problem of food spoilage and severe foodborne disease.

Keywords: food pathogens, listeria, campylobacter, antibiotics, bacteriocins

Procedia PDF Downloads 72
277 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation

Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma

Abstract:

Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to correctly plan the supply chain. When it comes to estimating agricultural yield, especially in arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is still on the tree. The conventional method, based on sampling surveys of some trees within the field, is generally used to perform yield estimation, and the success of this process mainly depends on the expertise of the ‘estimator agent’. The present study aims to propose a methodology based on the combination of unmanned aerial vehicle (UAV) images and a terrestrial laser scanner (TLS) point cloud to estimate citrus production. During data acquisition, fixed-wing and rotary drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed in order to generate the point cloud and the digital surface model. At the processing stage, a machine vision workflow was implemented to extract the points corresponding to fruits from the whole-tree point cloud, cluster them into fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated and the statistical distribution of fruit sizes can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can be performed on only some trees; therefore, the integration of drone data was considered in order to estimate the yield over a whole orchard. To achieve that, features derived from the drone digital surface model were linked to the laser scanner yield estimates of some trees in order to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, compared with the yield estimates of the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety at the time of the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and the possibility of extension to other fruit trees.
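
The fruit-extraction step described above (cluster candidate fruit points, then derive a size per cluster to feed a weight model) might be prototyped as in the sketch below; the DBSCAN parameters, placeholder points, and the size-to-weight relation are assumptions, not the study's calibrated workflow.

```python
# Minimal sketch of the fruit-extraction step (parameters and the weight relation
# are assumptions): cluster candidate fruit points from the TLS point cloud with
# DBSCAN, then derive an approximate diameter per cluster for a weight estimate.
import numpy as np
from sklearn.cluster import DBSCAN

# points: N x 3 array of (x, y, z) candidate fruit points, e.g. after a colour filter;
# placeholder random data stands in for a real point cloud here.
points = np.random.default_rng(0).normal(size=(300, 3)) * 0.05

labels = DBSCAN(eps=0.04, min_samples=10).fit_predict(points)
for fruit_id in set(labels) - {-1}:                          # label -1 = noise
    cluster = points[labels == fruit_id]
    extent = cluster.max(axis=0) - cluster.min(axis=0)       # bounding-box extent (m)
    mean_d_cm = float(extent.mean()) * 100
    est_weight_g = 0.55 * mean_d_cm ** 3                     # assumed allometric relation
    print(f"fruit {fruit_id}: ~{mean_d_cm:.1f} cm, ~{est_weight_g:.0f} g")
```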

Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling

Procedia PDF Downloads 143
276 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early stage technologies are particularly challenging to manage due to their numerous and substantial uncertainties. Most results coming directly out of a research lab tend to be at an early, if not infant, stage. A long and uncertain commercialization process awaits these lab results. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any effort or financial resources put into managing them turn out to be fruitless. The high stakes naturally call for better results, which makes the patenting decision harder. A good, well-protected patent goes a long way toward commercialization of the technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation. Most existing studies have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university technology office could easily deploy to get things moving to the later steps of managing early stage technologies. We provide a procedure for thrifty valuation and for making the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings. The ratings then assisted us in generating relative weights for the later scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts. Their inputs produced a general yet highly practical cut-off schedule; such a schedule of realistic practices has yet to appear in the existing research. Although a technology office may choose to deviate from our cut-offs, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners on our expert panel and by university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instituting a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated top decision factors, decision processes and decision thresholds for key parameters. This research offers a more practical perspective which complements extant knowledge. Our results could be affected by our sample size and may be somewhat biased by our focus on the Silicon Valley area. Future research, blessed with larger data sets and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze our decision factors for different industries.
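
The scoring and weighted-averaging step could look like the following sketch; the factors, weights, and the decision threshold are hypothetical illustrations, not the values derived from the study's survey.

```python
# Hedged sketch of the weighted-averaging idea (factors, weights, and the decision
# threshold below are hypothetical, not the study's actual values).
factors = {                      # rating of a lab technology on a 1-5 scale
    "market_size": 4,
    "technical_maturity": 2,
    "freedom_to_operate": 5,
    "licensing_interest": 3,
}
weights = {                      # relative weights derived from survey ratings
    "market_size": 0.35,
    "technical_maturity": 0.25,
    "freedom_to_operate": 0.25,
    "licensing_interest": 0.15,
}

patenting_index = sum(factors[k] * weights[k] for k in factors)
decision = "file a patent" if patenting_index >= 3.5 else "hold / re-evaluate"
print(f"patenting index = {patenting_index:.2f} -> {decision}")
```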

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 343