Search results for: simple sensors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4223

743 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter

Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai

Abstract:

Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy owing to multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuations, motion artefacts, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. It involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume and thereby segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter then produces an optimal estimate of the current state of the system by updating its previous state using the available measurements in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. One limitation of the current algorithm is that the curve representation of a retinal layer boundary does not work well where the boundary splits into two, e.g., at the optic nerve head. This may be resolved by using a different approach to representing the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
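
For illustration only, a minimal sketch of the tracking step described above is given below; it is not the authors' implementation. It assumes that boundary points in each B-scan have already been detected by a separate pre-processing step, uses a quadratic curve model, and the process/measurement noise levels are arbitrary.

```python
# A minimal sketch (not the authors' code): track a quadratic retinal-layer
# boundary y = a*x^2 + b*x + c through the slices of an OCT volume with a
# Kalman filter whose state is the vector of curve coefficients.
import numpy as np

def fit_coeffs(xs, ys, order=2):
    """Least-squares curve fit; the fitted coefficients act as the measurement."""
    return np.polyfit(xs, ys, order)

def kalman_track(slices, q=1e-4, r=1e-2, order=2):
    """slices: list of (xs, ys) arrays of detected boundary points per slice."""
    n = order + 1
    F = np.eye(n)          # state transition: boundary changes slowly slice to slice
    H = np.eye(n)          # measurement model: fitted coefficients observed directly
    Q = q * np.eye(n)      # process noise (assumed)
    R = r * np.eye(n)      # measurement noise (assumed)
    xs0, ys0 = slices[0]
    x = fit_coeffs(xs0, ys0, order)          # initial state from the first slice
    P = np.eye(n)
    tracked = [x.copy()]
    for xs, ys in slices[1:]:
        x, P = F @ x, F @ P @ F.T + Q        # predict
        z = fit_coeffs(xs, ys, order)        # "measure" the current slice
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)              # update
        P = (np.eye(n) - K @ H) @ P
        tracked.append(x.copy())
    return np.array(tracked)                 # one coefficient vector per slice

# synthetic example: a boundary that drifts slightly from slice to slice
xs = np.linspace(0, 1, 50)
slices = [(xs, 0.2 * xs**2 + 0.1 * xs + 10 + 0.05 * i
           + np.random.normal(0, 0.1, xs.size)) for i in range(20)]
print(kalman_track(slices)[-1])              # tracked coefficients in the last slice
```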

Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking

Procedia PDF Downloads 478
742 Selective Effect of Occipital Alpha Transcranial Alternating Current Stimulation in Perception and Working Memory

Authors: Andreina Giustiniani, Massimiliano Oliveri

Abstract:

Rhythmic activity in different frequency bands could subserve distinct functional roles during visual perception and visual mental imagery. In particular, alpha band activity is thought to play a role in the active inhibition of both task-irrelevant regions and the processing of non-relevant information. In the present blind, placebo-controlled study, we applied alpha transcranial alternating current stimulation (tACS) over the occipital cortex during both a basic visual perception task and a visual working memory task. To understand whether the role of alpha is more related to a general inhibition of distractors or to an inhibition of task-irrelevant regions, we added a non-visual distraction to both tasks. Sixteen adult volunteers performed both a simple perception and a working memory task during 10 Hz tACS. The electrodes were placed over the left and right occipital cortex, and the current intensity was 1 mA peak-to-baseline. Sham stimulation was chosen as the control condition, and in order to elicit a skin sensation similar to the real stimulation, electrical stimulation was applied for short periods (30 s) at the beginning of the session and then turned off. The tasks were split into two sets: in one set distractors were included, and in the other set there were no distractors. Motor interference was added by changing the answer key after subjects completed the first set of trials. The results show that alpha tACS improves working memory only when no motor distractors are added, suggesting a role of alpha tACS in inhibiting non-relevant regions rather than in a general inhibition of distractors. Additionally, we found that alpha tACS does not affect accuracy and hit rates during the visual perception task. These results suggest that alpha activity in the occipital cortex plays a different role in perception and working memory; it could optimize performance in tasks in which attention is internally directed, as in this working memory paradigm, but only when there is no motor distraction. Moreover, alpha tACS improves working memory performance by means of inhibition of task-irrelevant regions, while it does not affect perception.

Keywords: alpha activity, interference, perception, working memory

Procedia PDF Downloads 248
741 The Effects of a Nursing Dignity Care Program on Patients’ Dignity in Care

Authors: Yea-Pyng Lin

Abstract:

Dignity is a core element of nursing care. Maintaining the dignity of patients is an important issue because the health and recovery of patients can be adversely affected by a lack of dignity in their care. The aim of this study was to explore the effects of a nursing dignity care program on patients' dignity in care. A quasi-experimental research design was implemented. Nurses were recruited by purposive sampling, and their patients were recruited by simple random sampling. Nurses in the experimental group received the nursing educational program on dignity care, while nurses in the control group received in-service education as usual. Data were collected via two instruments: the dignity in care scale for nurses and the dignity in care scale for patients, both of which were developed by the researcher. Both questionnaires consisted of three domains: agreement, importance, and frequency of providing dignity care. A total of 178 nurses in the experimental group and 193 nurses in the control group completed the pretest and the follow-up evaluations at the first, third, and sixth months. The number of patients cared for by the nurses in the experimental group was 94 at the pretest; the numbers of patients at the post-tests at the first, third, and sixth months were 91, 85, and 77, respectively. In the control group, 88 patients completed the pretest, and 80 filled out the post-test at the first month, 77 at the third, and 74 at the sixth month. The major findings revealed that the scores in the agreement domain among nurses in the experimental group were significantly different from those in the control group at each time point. The scores in the importance domain between these two groups also displayed significant differences at the pretest and the first month of the post-test. Moreover, the frequencies of providing dignity care to patients were significantly different at the pretest and at the third and sixth months of the post-test. However, for patients, the experimental group differed significantly from the control group only in the frequencies of receiving dignity care, especially in the items of 'privacy care,' 'communication care,' and 'emotional care.' The results show that the nursing program on dignity care could increase nurses' dignity care for patients in the three domains of agreement, importance, and frequency of providing dignity care. For patients, only the frequencies of receiving dignity care were significantly increased. Therefore, the nursing program on dignity care could be applicable in nurses' in-service education and practice to enhance the ability of nurses to care for patients' dignity.

Keywords: nurses, patients, dignity care, quasi-experimental, nursing education

Procedia PDF Downloads 461
740 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications

Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui

Abstract:

Over the last few years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed a rapid evolution, accompanied by an increase in user requirements in terms of latency, computational power, and so on. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware is composed of multiple cores and the software is represented by models of computation, for instance the synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify the generation of C code for a multi-core platform from an application modeled in Simulink. To overcome this problem, we propose a workflow allowing an automatic transformation from the Simulink model to the SDF graph and providing an efficient schedule that optimizes the number of cores and minimizes latency. The workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To realize this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterward, we compared our results to those of this tool, using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency. In fact, if Preesm needs m processors and latency L, our workflow needs fewer processors and a latency L' < L.
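
To make the scheduling idea concrete, a minimal, illustrative sketch is given below. It is not the paper's algorithm or the Preesm extension: it simply list-schedules the tasks of an SDF-derived precedence graph onto k cores and increases k until a latency budget is met; the task graph, execution times and latency budget are invented for illustration.

```python
# A minimal sketch (illustrative only): greedy list scheduling of an
# SDF-derived task graph onto k cores, then a sweep over k to find the
# smallest core count whose schedule meets a latency budget.

def list_schedule(exec_time, deps, k):
    """Greedy longest-task-first list scheduling of a DAG; returns the makespan."""
    core_free = [0.0] * k                    # time at which each core becomes idle
    finish = {}                              # task -> finish time
    pending = {t: set(deps.get(t, set())) for t in exec_time}
    while pending:                           # assumes the task graph is acyclic
        ready = sorted((t for t, p in pending.items() if p.issubset(finish)),
                       key=lambda t: -exec_time[t])
        for t in ready:
            c = min(range(k), key=lambda i: core_free[i])        # earliest-free core
            start = max(core_free[c],
                        max((finish[p] for p in deps.get(t, ())), default=0.0))
            finish[t] = start + exec_time[t]
            core_free[c] = finish[t]
            del pending[t]
    return max(finish.values())

def min_cores_for_latency(exec_time, deps, latency_budget, k_max=16):
    """Smallest core count whose greedy schedule meets the latency budget."""
    for k in range(1, k_max + 1):
        latency = list_schedule(exec_time, deps, k)
        if latency <= latency_budget:
            return k, latency
    return None

# invented SDF-derived task graph: execution times and precedence constraints
exec_time = {"A": 2, "B": 3, "C": 3, "D": 1}
deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
print(min_cores_for_latency(exec_time, deps, latency_budget=7))   # -> (2, 6.0)
```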

Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow

Procedia PDF Downloads 260
739 Knowledge of Risk Factors and Health Implications of Fast Food Consumption among Undergraduate in Nigerian Polytechnic

Authors: Adebusoye Michael, Anthony Gloria, Fasan Temitope, Jacob Anayo

Abstract:

Background: The culture of fast food consumption has gradually become a common lifestyle in Nigeria, especially among young people in urban areas, in spite of the associated adverse health consequences. The adolescent pattern of fast food consumption, and adolescents' perception of this practice as a risk factor for Non-Communicable Diseases (NCDs), have not been fully explored. This study was designed to assess the fast food consumption pattern and the perception of it as a risk factor for NCDs among undergraduates of the Federal Polytechnic, Bauchi. Methodology: The study was descriptive and cross-sectional in design. One hundred and eighty-five students were recruited using a systematic random sampling method from the two halls of residence. A structured questionnaire was used to assess the consumption pattern of fast foods. Data collected from the questionnaires were analysed using the Statistical Package for the Social Sciences (SPSS) version 16. Simple descriptive statistics, such as frequency counts and percentages, were used to interpret the data. Results: The age range of respondents was 18-34 years; 58.4% were males, 93.5% were single, and 51.4% of their parents were employed. All respondents (100%) were aware of fast foods, and 75% agreed that its consumption has implications for NCDs. The fast food consumption distribution included meat pie (4.9%), beef roll/sausage (2.7%), egg roll (13.5%), doughnut (16.2%), noodles (18%) and carbonated drinks (3.8%). 30.3% consumed fast food three times a week, and 71% attributed high consumption of fast food to workload. Conclusion: The study revealed that social pressure from peers, time constraints, class pressure and the school programme strongly influenced the high proportion of higher-institution students who consume fast foods. Nutrition education campaigns for campus food outlets and vendors, together with behavioural change communication on healthy nutrition and lifestyles among young people, are therefore advocated.

Keywords: fast food consumption, Nigerian polytechnic, risk factors, undergraduate

Procedia PDF Downloads 466
738 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the effects of atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and LSE measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part, six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times per day (UTC 00:00, 06:00, 12:00, 18:00) for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC). With the aid of simulated data, this relationship can be determined under each viewing zenith angle for each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a set of brightness temperatures for the surface-leaving radiance (Tg) is acquired. Subsequently, a set of the six parameters of the DTC model is fitted to these Tg by a Levenberg-Marquardt least-squares algorithm (denoted as DTC model 1). Although the retrieval error of the WVC and the approximate relationships between WVC and atmospheric parameters introduce some uncertainties, this does not significantly affect the determination of the three parameters td, ts and β in the DTC model (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation). Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With the knowledge of td, ts, and β, a new DTC model (denoted as DTC model 2) is fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new set of the six parameters of the DTC model is thereby generated, and subsequently the Tg at any given time are acquired. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method can be performed reasonably without additional assumptions, and the Tg derived with the improved method are much more consistent with those from radiosonde measurements.
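
As an illustration of the fitting step, the sketch below fits a two-part, six-parameter DTC curve to one day of surface-leaving brightness temperatures (Tg) with a Levenberg-Marquardt least-squares solver. The cosine/exponential form used here is one common semi-empirical DTC parameterization and the data are synthetic; it is not necessarily the exact model, parameterization or data used in the paper.

```python
# A minimal sketch (illustrative): Levenberg-Marquardt fit of a two-part,
# six-parameter diurnal temperature cycle (DTC) model to brightness
# temperatures Tg. The functional form and the synthetic data are assumptions.
import numpy as np
from scipy.optimize import least_squares

def dtc(t, T0, Ta, omega, tm, ts, k):
    """Cosine part before the attenuation start time ts, exponential decay after."""
    day = T0 + Ta * np.cos(np.pi / omega * (t - tm))
    night = T0 + Ta * np.cos(np.pi / omega * (ts - tm)) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

def fit_dtc(t_obs, Tg_obs, p0):
    """Fit the six DTC parameters (T0, Ta, omega, tm, ts, k) to observed Tg."""
    residuals = lambda p: dtc(t_obs, *p) - Tg_obs
    return least_squares(residuals, p0, method="lm").x   # Levenberg-Marquardt

# synthetic example: Tg every 15 minutes over one day (time in hours)
t = np.arange(0, 24, 0.25)
true_p = (290.0, 12.0, 12.0, 13.0, 17.5, 4.0)            # T0, Ta, omega, tm, ts, k
Tg = dtc(t, *true_p) + np.random.normal(0, 0.3, t.size)  # add measurement noise
print(fit_dtc(t, Tg, p0=(285.0, 10.0, 11.0, 12.0, 18.0, 3.0)))
```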

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 264
737 Effect of Three Desensitizers on Dentinal Tubule Occlusion and Bond Strength of Dentin Adhesives

Authors: Zou Xuan, Liu Hongchen

Abstract:

The ideal dentin desensitizing agent should not only have good biological safety, a simple clinical operation mode and a superior treatment effect, but should also have a durable effect that resists temperature changes in the oral environment and oral mechanical abrasion, so as to achieve persistent desensitization. Also, when using a desensitizing agent to prevent post-operative hypersensitivity, we should not only prevent it from affecting crown retention but must also understand its effects on the bond strength of dentin adhesives. There are various desensitizers and dentin adhesives in clinical use, with different chemical and physical properties. Whether the use of a desensitizing agent affects the bond strength of dentin adhesives still needs further research. In this in vitro study, we built a hypersensitive dentin model and a post-operative dentin model to evaluate the sealing effects and durability of three different dentin desensitizers on exposed tubules, and to evaluate the sealing effects and the bond strength of dentin adhesives after using the three desensitizers on post-operative dentin. The results of this study could provide some important references for the clinical use of dentin desensitizing agents. 1. For the three desensitizers, the hypersensitive dentin model was built to evaluate their sealing effects on exposed tubules by SEM observation and dentin permeability analysis. All of them significantly reduced dentin permeability. 2. Test specimens of the three groups treated with desensitizers were subjected to aging treatment with 5000 thermal cycles and toothbrush abrasion, and dentin permeability was then measured to evaluate the sealing durability of the three desensitizers on exposed tubules. The sealing durability of the three groups differed. 3. The post-operative dentin model was built to evaluate the sealing effects of the three desensitizers on post-operative dentin by SEM and methylene blue. All three desensitizers reduced dentin permeability significantly. 4. The influence of the three desensitizers on the bonding efficiency of total-etch and self-etch adhesives was evaluated with a micro-tensile bond strength study and bond interface morphology observation. The dentin bond strength for the Green Or group was significantly lower than for the other two groups (P<0.05).

Keywords: dentin, desensitizer, dentin permeability, thermal cycling, micro-tensile bond strength

Procedia PDF Downloads 388
736 Value Clusters of Grade 9 Teachers in the District of Trece Martires City, Division of Cavite: Basis for a Revised Values Education Program (RVEP)

Authors: Juland D. Salayo

Abstract:

With numerous innovations introduced in the Philippine educational system, the country's struggle to materialize its national goal of transforming lives has ended in great loss. Many agree that the failure to bring out the integral values of the program, its framework and its implementers impedes this realization. Employing a descriptive-correlational method, the study aimed to determine the value clusters of the Grade 9 teachers as assessed by themselves and by the students, the significant difference between the assessed values, and the significant differences in the values based on the teachers' profiles. Respondents were composed of sixty-nine (69) teachers and three hundred forty (340) students selected using simple random sampling. Through a survey questionnaire, the study revealed that the teachers hold in high regard their self-reliance, honesty and trustworthiness, obedience, politeness and respect, and self-discipline and spirituality. In contrast, they rated the following values only fairly: justice and fairness, courage, responsibility and punctuality, and nationalism and patriotism. As assessed by the students, the teachers were highly regarded for their self-reliance, responsibility and punctuality, obedience, politeness and respect, and fair play and sportsmanship. On the other hand, the student-respondents gave a low assessment of the teachers' justice and fairness, nationalism and patriotism, honesty and trustworthiness, and excellence. A t-test showed that there is a significant difference between the assessments of the two groups of respondents. Finally, among the demographic profiles, only civil status and age led to rejection of the hypothesis. The following were recommended: provide educators with value-enhancement trainings and conferences, organize value-oriented organizations and activities, and mount intensive value campaigns highlighting the low-assessed values. Thus, a Revised Values Education Program (RVEP) was designed to further meet the objectives of the program, address the needs of its clienteles, and respond to the demands of both education and society towards excellence in service, social and economic revolution, and constructive national goals based on integral values.

Keywords: values, value clusters, values education program, values education, teachers' assessed values

Procedia PDF Downloads 284
735 Synthesis and Thermoluminescence Investigations of Doped LiF Nanophosphor

Authors: Pooja Seth, Shruti Aggarwal

Abstract:

Thermoluminescence dosimetry (TLD) is one of the most effective methods for the assessment of dose during diagnostic radiology and radiotherapy applications. In these applications, monitoring of the absorbed dose is essential to protect the patient from undue exposure and to evaluate the risks that may arise from exposure. LiF-based thermoluminescence (TL) dosimeters are promising materials for the estimation, calibration and monitoring of dose due to their favourable dosimetric characteristics, such as tissue equivalence, high sensitivity, energy independence and dose linearity. As the TL efficiency of a phosphor strongly depends on the preparation route, it is interesting to investigate the TL properties of a LiF-based phosphor in nanocrystalline form. LiF doped with magnesium (Mg), copper (Cu), sodium (Na) and silicon (Si) in nanocrystalline form has been prepared using a chemical co-precipitation method. Cube-shaped LiF nanostructures are formed. The TL dosimetry properties have been investigated by exposing the phosphor to gamma rays. The TL glow curve of the nanocrystalline form consists of a single peak at 419 K, as compared to the multiple peaks observed in the microcrystalline form. A consistent glow curve structure with maximum TL intensity at an annealing temperature of 573 K and a linear dose response from 0.1 to 1000 Gy is observed, which is advantageous for radiotherapy applications. Good reusability, low fading (5% over a month) and negligible residual signal (0.0019%) are observed. Photoluminescence measurements show a wide emission band at 360-550 nm in undoped LiF, whereas an intense peak at 488 nm is observed in the doped LiF nanophosphor. The phosphor also exhibits intense optically stimulated luminescence. The nanocrystalline LiF: Mg, Cu, Na, Si phosphor prepared by the co-precipitation method showed a simple glow curve structure, linear dose response, reproducibility, negligible residual signal, good thermal stability and low fading. The LiF: Mg, Cu, Na, Si phosphor in nanocrystalline form has tremendous potential in diagnostic radiology, radiotherapy and high-energy radiation applications.

Keywords: thermoluminescence, nanophosphor, optically stimulated luminescence, co-precipitation method

Procedia PDF Downloads 398
734 Constitutive Androstane Receptor (CAR) Inhibitor CINPA1 as a Tool to Understand CAR Structure and Function

Authors: Milu T. Cherian, Sergio C. Chai, Morgan A. Casal, Taosheng Chen

Abstract:

This study aims to use CINPA1, a recently discovered small-molecule inhibitor of the xenobiotic receptor CAR (constitutive androstane receptor), to understand the binding modes of CAR and to guide CAR-mediated gene expression profiling studies in human primary hepatocytes. CAR and PXR are xenobiotic sensors that respond to drugs and endobiotics by modulating the expression of metabolic genes that enhance detoxification and elimination. Elevated levels of drug-metabolizing enzymes and efflux transporters resulting from CAR activation promote the elimination of chemotherapeutic agents, leading to reduced therapeutic effectiveness. Multidrug resistance in tumors after chemotherapy could be associated with errant CAR activity, as shown in the case of neuroblastoma. CAR inhibitors used in combination with existing chemotherapeutics could therefore be utilized to attenuate multidrug resistance and resensitize chemo-resistant cancer cells. CAR and PXR have many overlapping modulating ligands as well as many overlapping target genes, which has confounded attempts to understand and regulate receptor-specific activity. Through a directed screening approach, we previously identified a new CAR inhibitor, CINPA1, which is novel in its ability to inhibit CAR function without activating PXR. The cellular mechanisms by which CINPA1 inhibits CAR function were also extensively examined, along with its pharmacokinetic properties. CINPA1 binding was shown to change CAR-coregulator interactions as well as to modify CAR recruitment at DNA response elements of regulated genes. CINPA1 was shown to be broken down in the liver to form two, mostly inactive, metabolites. The structure-activity differences between CINPA1 and its metabolites were used to guide computational modeling using the CAR-LBD structure. To rationalize how ligand binding may lead to different CAR pharmacology, an analysis of the docked poses of human CAR bound to CITCO (a CAR activator) vs. CINPA1 or the metabolites was conducted. From our modeling, strong hydrogen bonding of CINPA1 with N165 and H203 in the CAR-LBD was predicted. These residues were validated as important for CINPA1 binding using single amino-acid CAR mutants in a CAR-mediated functional reporter assay. Also predicted were residues making key hydrophobic interactions with CINPA1 but not with the inactive metabolites. Some of these hydrophobic amino acids were also identified, and additionally the differential coregulator interactions of these mutants were determined in mammalian two-hybrid systems. CINPA1 represents an excellent starting point for future optimization into highly relevant probe molecules to study the function of the CAR receptor in normal physiology and pathophysiology, and for the possible development of therapeutics (e.g., for resensitizing chemoresistant neuroblastoma cells).

Keywords: antagonist, chemoresistance, constitutive androstane receptor (CAR), multi-drug resistance, structure activity relationship (SAR), xenobiotic resistance

Procedia PDF Downloads 275
733 Analysis of Stress and Strain in Head Based Control of Cooperative Robots through Tetraplegics

Authors: Jochen Nelles, Susanne Kohns, Julia Spies, Friederike Schmitz-Buhl, Roland Thietje, Christopher Brandl, Alexander Mertens, Christopher M. Schlick

Abstract:

Industrial robots used in highly automated manufacturing have recently been developed into cooperative (light-weight) robots. This offers the opportunity of using them as assistance robots and of improving the participation in professional life of disabled or handicapped people such as tetraplegics. Robots under development are located within a cooperation area together with the working person at the same workplace. This cooperation area is an area where the robot and the working person can perform tasks at the same time; thus, working people and robots operate in immediate proximity. Considering the physical restrictions and the limited mobility of tetraplegics, hands-free robot control could be an appropriate approach for a cooperative assistance robot. To meet these requirements, the research project MeRoSy (human-robot synergy) develops methods for cooperative assistance robots based on the measurement of head movements of the working person. One research objective is to improve the participation in professional life of people with disabilities and, in particular, mobility-impaired persons (e.g. wheelchair users or tetraplegics), who are denied participation in a self-determined working life. This raises the research question of how a human-robot cooperation workplace can be designed for hands-free robot control. Here, the example of a library scenario is demonstrated. In this paper, an empirical study that focuses on the impact of head-movement-related stress is presented. 12 test subjects with tetraplegia participated in the study. Tetraplegia, also known as quadriplegia, is the most severe type of spinal cord injury. In the experiment, three basic head movements were examined. Data on head posture were collected by a motion capture system; muscle activity was measured via surface electromyography, and subjective mental stress was assessed via a mental effort questionnaire. The muscle activity was measured for the sternocleidomastoid (SCM), the upper trapezius (UT) or trapezius pars descendens, and the splenius capitis (SPL) muscles. For this purpose, six non-invasive surface electromyography sensors were mounted on the head and neck area. An analysis of variance shows differentiated muscular strains depending on the type of head movement. Systematically investigating the influence of different basic head movements on the resulting strain is an important issue in relating the research results to other scenarios. At the end of this paper, a conclusion is drawn and an outlook on future work is presented.
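
For illustration, a minimal sketch of this kind of analysis is given below; it is not the study's actual processing pipeline. Each surface-EMG trial is reduced to an RMS amplitude and the three basic head movements are compared with a one-way ANOVA; the sampling rate, filter band, movement labels and the synthetic data are all assumptions.

```python
# A minimal, illustrative sketch (not the study's pipeline): reduce each
# surface-EMG trial to an RMS amplitude and compare the three basic head
# movements for one muscle (e.g. SCM) with a one-way ANOVA.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import f_oneway

def rms_amplitude(emg, fs=1000.0, band=(20.0, 450.0)):
    """Band-pass filter a raw EMG trace and return its RMS amplitude."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg)
    return np.sqrt(np.mean(filtered ** 2))

# emg_trials[movement] = list of 1-D raw EMG traces (synthetic stand-ins here)
rng = np.random.default_rng(0)
emg_trials = {m: [rng.normal(0, s, 5000) for _ in range(12)]
              for m, s in [("flexion", 1.0), ("rotation", 1.3), ("tilt", 1.6)]}

rms = {m: [rms_amplitude(x) for x in trials] for m, trials in emg_trials.items()}
F, p = f_oneway(rms["flexion"], rms["rotation"], rms["tilt"])
print(f"one-way ANOVA across head movements: F={F:.2f}, p={p:.4f}")
```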

Keywords: assistance robot, human-robot interaction, motion capture, stress-strain-concept, surface electromyography, tetraplegia

Procedia PDF Downloads 309
732 Enhancing Teaching of Engineering Mathematics

Authors: Tajinder Pal Singh

Abstract:

Teaching mathematics to engineering students is an open-ended problem in education. The main goal of mathematics learning for engineering students is the ability to apply a wide range of mathematical techniques and skills in their engineering classes and later in their professional work. Many undergraduate engineering students and faculty feel that little effort is made to demonstrate the applicability of the various topics of mathematics that are taught, thus making mathematics unappealing to some engineering faculty and their students. A lack of understanding of concepts in engineering mathematics may hinder the understanding of other concepts or even subjects. Moreover, for most undergraduate engineering students, mathematics is one of the most difficult courses in their field of study. Many engineering students never understood mathematics, or they never liked it because it was too abstract for them and they could never relate to it. Only the right balance of application- and concept-based teaching can fulfill the objectives of teaching mathematics to engineering students; it will surely improve and enhance their problem-solving and creative-thinking skills. In this paper, some practical (informal) ways of making mathematics teaching application-based for engineering students are discussed. An attempt is made to understand the present state of teaching mathematics in engineering colleges. The weaknesses and strengths of the current teaching approach are elaborated. Some of the causes of the unpopularity of mathematics are analyzed, and a few pragmatic suggestions are made. Faculty in mathematics courses should spend more time discussing applications as well as the conceptual underpinnings, rather than focusing solely on strategies and techniques to solve problems. They should also introduce more 'word' problems, as these problems are commonly encountered in engineering courses. Overspecialization in engineering education should not occur at the expense of (or by diluting) mathematics and the basic sciences. The role of engineering education is to provide fundamental (basic) knowledge and to teach students a simple methodology of self-learning and self-development. All these issues would be better addressed if mathematics and engineering faculty joined hands to plan and design the learning experiences of the students who take their classes. When faculty stop competing against each other and start competing against the situation, they will perform better. Without creating any administrative hassles, these suggestions can be used by any young, inexperienced faculty member in mathematics to inspire engineering students to learn engineering mathematics effectively.

Keywords: application based learning, conceptual learning, engineering mathematics, word problem

Procedia PDF Downloads 228
731 Embedding Employability in the Curriculum: Experiences from New Zealand

Authors: Narissa Lewis, Susan Geertshuis

Abstract:

The global and national employability agenda is changing the higher education landscape, as academic staff are faced with the responsibility of developing employability capabilities and attributes in addition to delivering discipline-specific content and skills. They realise that the shift towards teaching sustainable capabilities means a shift in the way they teach, but what that shift should be or how they should bring it about is unclear. As part of a nationally funded project, representatives from several New Zealand (NZ) higher education institutions and the NZ Association of Graduate Employers partnered to discover, trial and disseminate means of embedding employability in the curriculum. Findings from four focus groups (n=~75) and individual interviews (n=20) with staff from several NZ higher education institutions identified factors that enable or hinder embedded employability development within their respective institutions. Participants believed that higher education institutions have a key role in developing graduates for successful lives and careers; however, this requires a significant shift in culture within their respective institutions. Participants cited three main barriers: lack of strategic direction, support and guidance; lack of understanding and awareness of employability; and lack of resourcing and staff capability. Without adequate understanding and awareness of employability, participants believed, it is difficult to understand what employability is, let alone how it can be embedded in the curriculum. This presentation will describe some of the impacts that the employability agenda has on staff as they try to move from traditional to contemporary forms of teaching in order to develop the employability attributes of students. Changes at the institutional level are required to support contemporary forms of teaching; however, this is often beyond the sphere of influence of teaching staff. The study identified that small changes to teaching practices were necessary, and a simple model to facilitate change from traditional to contemporary forms of teaching was developed. The model provides a framework for identifying small but impactful teaching practices, and exemplar teaching practices were identified. These practices were evaluated for transferability into other contexts to encourage small but impactful changes that embed employability in the curriculum.

Keywords: curriculum design, change management, employability, teaching exemplars

Procedia PDF Downloads 324
730 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences begins with the loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fire, jet fire and even explosion when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action prior to a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive and hence generally not a viable alternative from an economic standpoint. A conceptual approach that combines mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, which are then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling in the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
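
A minimal sketch of the modeling idea is given below; it is not the paper's implementation. A supervised classifier is trained on features labelled leak/no-leak; in the proposed workflow those features would come from CFD or transient-flow simulations of the pipeline, whereas here they are random stand-ins used only to show the training and evaluation steps.

```python
# A minimal sketch (illustrative only): train a supervised classifier on
# simulation-style pressure/flow/temperature (PVT) features labelled
# leak / no-leak. The features and labels below are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2000
# stand-in features: pressure drop, flow imbalance, temperature anomaly
X = rng.normal(size=(n, 3))
# stand-in label: "leak" when pressure drop and flow imbalance are jointly high
y = ((X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=n)) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test),
                            target_names=["normal", "leak"]))
```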

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 183
729 Mn3O4 anchored Broccoli-Flower like Nickel Manganese Selenide Composite for Ultra-efficient Solid-State Hybrid Supercapacitors with Extended Durability

Authors: Siddhant Srivastav, Shilpa Singh, Sumanta Kumar Meher

Abstract:

Innovative materials for renewable energy storage and conversion are in strong demand in current electrochemical technology. In this context, choosing suitable organic precipitants for tuning the crystal characteristics and microstructures is a challenge. On the same note, herein we report a broccoli-flower-like porous Mn3O4/NiSe2−MnSe2 composite synthesized using a simple two-step hydrothermal procedure assisted by a sluggish precipitating agent and an effective capping agent, followed by intermediated anion exchange. The as-synthesized material was subjected to physical and chemical characterization, which indicated polycrystallinity, strong bonding and a broccoli-flower-like porous arrangement. The material was assessed electrochemically by cyclic voltammetry (CV), chronopotentiometry (CP) and electrochemical impedance spectroscopy (EIS) measurements. The electrochemical studies reveal redox behavior, a supercapacitive charge-discharge shape and extremely low charge-transfer resistance. Further, the fabricated Mn3O4/NiSe2−MnSe2 composite-based solid-state hybrid supercapacitor (Mn3O4/NiSe2−MnSe2||N-rGO) delivers excellent rate-specific capacity and very low internal resistance, with the energy density (~34 W h kg–1) of a typical rechargeable battery and the power density (11995 W kg–1) of an ultra-supercapacitor. Consequently, it can be a favorable contender for high-performance energy storage applications. The outstanding performance of the supercapacitor device is credited to the electrolyte-ion buffering reservoir-like behavior of the broccoli-flower-like Mn3O4/NiSe2−MnSe2, enhanced by the upgraded electronic and ionic conductivities of N-doped rGO (negative electrode) and PVA/KOH gel (electrolyte separator), respectively.

Keywords: electrolyte-ion buffering reservoir, intermediated-anion exchange, solid-state hybrid supercapacitor, supercapacitive charge-discharge

Procedia PDF Downloads 70
728 Temperature Dependence of the Optoelectronic Properties of InAs(Sb)-Based LED Heterostructures

Authors: Antonina Semakova, Karim Mynbaev, Nikolai Bazhenov, Anton Chernyaev, Sergei Kizhaev, Nikolai Stoyanov

Abstract:

At present, heterostructures are used for the fabrication of almost all types of optoelectronic devices. Our research focuses on the optoelectronic properties of InAs(Sb) solid solutions, which are widely used in the fabrication of light emitting diodes (LEDs) operating in the middle-wavelength infrared range (MWIR). This spectral range (2-6 μm) is relevant for laser diode spectroscopy of gases and molecules, systems for the detection of explosive substances, medical applications, and environmental monitoring. The fabrication of MWIR LEDs that operate efficiently at room temperature is mainly hindered by the predominance of non-radiative Auger recombination of charge carriers over radiative recombination, which makes practical application of the LEDs difficult. However, non-radiative recombination can be partly suppressed in quantum-well structures. In this regard, studies of such structures are quite topical. In this work, the electroluminescence (EL) of LED heterostructures based on InAs(Sb) epitaxial films with the molar fraction of InSb ranging from 0 to 0.09 and of multi-quantum-well (MQW) structures was studied in the temperature range 4.2-300 K. The heterostructures were grown by metal-organic chemical vapour deposition on InAs substrates. On top of the active layer, a wide-bandgap InAsSb(Ga,P) barrier was formed. At low temperatures (4.2-100 K), stimulated emission was observed. As the temperature increased, the emission became spontaneous. The transition from stimulated to spontaneous emission occurred at different temperatures for structures with different InSb contents in the active region. The temperature-dependent carrier lifetime, limited by radiative recombination and the most probable Auger processes (for the materials under consideration, CHHS and CHCC), was calculated within the framework of the Kane model. The effect of the various recombination processes on the carrier lifetime was studied, and the dominant role of Auger processes was established. For the MQW structures, the quantization energies for electrons, light holes and heavy holes were calculated. A characteristic feature of the experimental EL spectra of these structures was the presence of peaks with energies different from those of the calculated optical transitions between the first quantization levels for electrons and heavy holes. The obtained results showed a strong effect of the specific electronic structure of InAsSb on the energy and intensity of optical transitions in nanostructures based on this material. For the structure with MQWs in the active layer, a very weak temperature dependence of the EL peak was observed at high temperatures (>150 K), which makes it attractive for fabricating temperature-resistant gas sensors operating in the middle-infrared range.

Keywords: electroluminescence, InAsSb, light emitting diode, quantum wells

Procedia PDF Downloads 205
727 The Fabrication and Characterization of a Honeycomb Ceramic Electric Heater with a Conductive Coating

Authors: Siming Wang, Qing Ni, Yu Wu, Ruihai Xu, Hong Ye

Abstract:

Porous electric heaters, compared to conventional electric heaters, exhibit excellent heating performance due to their large specific surface area. Porous electric heaters employ porous metallic materials or conductive porous ceramics as the heating element. The former attain a low heating power at a fixed current due to the low electrical resistivity of metal. Although the latter can bypass the inherent challenges of porous metallic materials, the fabrication process of conductive porous ceramics is complicated and costly. This work proposes a porous ceramic electric heater with a dielectric honeycomb ceramic as the substrate and a surface conductive coating as the heating element. The conductive coating was prepared by the sol-gel method using silica sol and methyl trimethoxysilane as raw materials and graphite powder as the conductive filler. The conduction mechanism and the cause of degradation of the conductive coating were studied by electrical resistivity and thermal stability analyses. The heating performance of the proposed heater was experimentally investigated by heating air and deionized water. The results indicate that electron transfer is achieved by forming a conductive network through the contact of the graphite flakes. With 30 wt% graphite, the electrical resistivity of the conductive coating can be as low as 0.88 Ω∙cm. The conductive coating exhibits good electrical stability up to 500°C but degrades beyond 600°C due to the formation of many cracks in the coating caused by weight loss and thermal expansion. The results also show that the working medium has a great influence on the volume power density of the heater. With air under natural convection as the working medium, the volume power density attains 640.85 kW/m3, which can be increased fivefold when using deionized water as the working medium. The proposed honeycomb ceramic electric heater has the advantages of a simple fabrication method, low cost and high volume power density, demonstrating great potential in the field of fluid heating.

Keywords: conductive coating, honeycomb ceramic electric heater, high specific surface area, high volume power density

Procedia PDF Downloads 145
726 From Industry 4.0 to Agriculture 4.0: A Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The agri-food value chain involves various stakeholders with different roles. All of them abide by national and international rules and leverage marketing strategies to advance their products. Food products and their related processing phases carry with them a large amount of data that is often not used to inform the final customer. Some of these data, if suitably identified and used, can enhance the single company and/or the whole supply chain, creating a match between marketing techniques and voluntary traceability strategies. Moreover, buying models have recently changed: customers care about wellbeing and food quality. Food citizenship and food democracy were born, leveraging transparency, sustainability and the need for food information. The Internet of Things (IoT) and Analytics, some of the innovative technologies of Industry 4.0, have a significant impact on the market and will act as a main thrust towards a genuine '4.0 change' for agriculture. However, realizing a traceability system is not simple because of the complexity of the agri-food supply chain, the large number of actors involved, different business models, environmental variations impacting products and/or processes, and extraordinary climate changes. In order to support companies embarking on a traceability path, starting from business model analysis and the related business processes, a Framework to Manage Product Data in the Agri-Food Supply Chain for Voluntary Traceability was conceived. Studying each process task and leveraging modeling techniques makes it possible to identify the information held by different actors along the agri-food supply chain. IoT technologies for data collection and Analytics techniques for data processing supply information that is useful for increasing intra-company efficiency and competitiveness in the market. All of the information recovered can be presented through IT solutions and mobile applications, making it accessible to the company, the entire supply chain and the consumer, with a view to guaranteeing transparency and quality.
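
As a purely conceptual illustration (not part of the framework itself), the sketch below shows the kind of product-data record that could be captured at each process task from IoT sensors and shared along the supply chain or exposed to the consumer; all field names and values are invented.

```python
# A conceptual sketch (illustrative only): a traceability event record captured
# at one process task and serialized for sharing along the supply chain.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class TraceabilityEvent:
    product_id: str                 # lot or item identifier
    actor: str                      # supply-chain actor performing the task
    process_step: str               # business-process task (e.g. harvesting, packaging)
    timestamp: str                  # when the task was recorded
    sensor_readings: dict = field(default_factory=dict)   # IoT measurements
    notes: str = ""                 # voluntary information for the consumer

event = TraceabilityEvent(
    product_id="LOT-2019-0042",
    actor="Farm Coop A",
    process_step="harvesting",
    timestamp=datetime.now(timezone.utc).isoformat(),
    sensor_readings={"air_temperature_c": 24.5, "humidity_pct": 61.0},
    notes="No pesticide treatment in the last 30 days.",
)
print(json.dumps(asdict(event), indent=2))   # e.g. for an API or mobile application
```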

Keywords: agriculture 4.0, agri-food suppy chain, industry 4.0, voluntary traceability

Procedia PDF Downloads 141
723 Graphene-Graphene Oxide Doping Effect on the Mechanical Properties of Polyamide Composites

Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu

Abstract:

Graphene and graphene oxide have been intensively studied due to their very good properties, which are either intrinsic to the material or arise from easy doping with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: in electronic devices, drug delivery systems, medical devices, sensors and optoelectronics, coating materials, sorbents of different agents for environmental applications, etc. This broad range of applications does not come only from the use of graphene or graphene oxide alone, or after prior functionalization with different moieties; they are also building blocks and important components in many composite devices, their addition either bringing new functionalities to the final composite or strengthening those already present in the parent product. An attempt was made to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of graphene oxide contributes to the properties of the final product, improving hardness and aging resistance. Graphene oxide has a lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, it can lead to mechanical properties comparable to the starting material or even worse; the graphene oxide agglomerates become tearing points in the final material if the amount added is too high (above 3% by mass relative to the parent material). Two different types of tests were performed on the obtained materials, the standard hardness test and the standard tensile strength test, and they were carried out before and after the aging process. For the aging process, accelerated aging was used in order to simulate the effect of natural aging over a long period of time; the accelerated aging was performed in extreme heat. For all materials, FT-IR spectra were recorded using FT-IR spectroscopy. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands of graphene oxide were very small in comparison, due to the very small amounts introduced into the final composite along with the low absorptivity of the graphene backbone and its limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength test and in the hardness test. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project 'New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)', No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146) is also acknowledged.

Keywords: graphene, graphene oxide, mechanical properties, doping effect

Procedia PDF Downloads 309
724 Metallic and Semiconductor Thin Film and Nanoparticles for Novel Applications

Authors: Hanan. Al Chaghouri, Mohammad Azad Malik, P. John Thomas, Paul O’Brien

Abstract:

The process of assembling metal nanoparticles at the interface of two liquids has received great interest over the past few years due to a wide range of important applications and the unusual properties of such particles compared to bulk materials. We present a simple, low-cost synthesis of metal nanoparticles, core/shell structures and semiconductors, followed by the assembly of these particles between immiscible liquids. The aim of this talk is divided into three parts. Firstly, to describe the achievement of closed-loop recycling for producing cadmium sulphide as powders and/or nanostructured thin films for solar cells or other optoelectronic device applications by using dithiocarbamato complexes of commercially available secondary amines with different chain lengths. The approach can be extended to other metal sulphides such as those of Zn, Pb, Cu, or Fe and many transition metals and oxides. Secondly, to synthesize significantly cheaper magnetic particles suited for the mass market. Ni/NiO nanoparticles with ferromagnetic properties at room temperature were among the smallest and strongest magnets (5 nm) made in solution. This work can be applied to produce viable storage devices; another possibility is to disperse these nanocrystals in solution and use them to make ferrofluids, which have a number of mature applications. The third part concerns the preparation and assembly of submicron silver, cobalt and nickel particles by using polyol methods and a liquid/liquid interface, respectively. Noble metals such as gold, copper and silver are suitable for plasmonic thin film solar cells because of their low resistivity and strong interactions with visible light. Silver is the best choice for solar cell applications since it has low absorption losses and high radiative efficiency compared to gold and copper. Cobalt and nickel assembled as films are promising for spintronics, magnetic and magneto-electronic applications, and biomedicine.

Keywords: assembling nanoparticles, liquid/liquid interface, thin film, core/shell, solar cells, recording media

Procedia PDF Downloads 294
723 Academic Staff’s Perception and Willingness to Participate in Collaborative Research: Implication for Development in Sub-Saharan Africa

Authors: Ademola Ibukunolu Atanda

Abstract:

Research undertakings are meant to proffer solutions to issues and challenges in society. This justifies the need for research in the ivory towers. Multinational and non-governmental organisations, as well as foundations, commit financial resources to support research endeavours. In recent times, the direction and dimension of research undertakings encourage collaborations, whereby experts from different disciplines or specializations bring their expertise to bear on any identified problem, whether in the humanities or the sciences. However, the extent to which collaborative research undertakings are perceived and embraced by academic staff will determine the impact collaborative research has on society. To this end, this study investigated academic staff's perception of and willingness to be involved in collaborative research for the purpose of proffering solutions to societal problems. The study adopted a descriptive research design. The population comprised academic staff in southern Nigeria. The sample was drawn through a convenience sampling technique. The data were collected using a questionnaire titled 'Perception and Willingness to Participate in Collaborative Research Questionnaire (PWPCRQ)' administered via Google Forms. Data collected were analyzed using descriptive statistics of simple percentages, means and charts. The findings showed that academic staff's readiness to participate in collaborative research is high (89%) and that they participate in collaborative research very often (51%). Academic staff were involved more in collaborative research with colleagues within their universities (1.98) than in interdisciplinary collaboration with colleagues outside Nigeria (1.47). Collaborative research was perceived to impact development (2.5). Collaborative research offers the following benefits to members: aggregation of views, the building of an extensive network of contacts, enhancement of the sharing of skills, facilitation of tackling complex problems, increased visibility of the research network and citations, and promotion of funding opportunities. The study concluded that academic staff in universities in the South-West of Nigeria participate in collaborative research, but with colleagues within Nigeria rather than outside the country. Based on the findings, it was recommended that the management of universities in South-West Nigeria should encourage collaborative research with some incentives.

Keywords: collaboration, research, development, participation

Procedia PDF Downloads 59
722 Unequal Contributions of Parental Isolates in Somatic Recombination of the Stripe Rust Fungus

Authors: Xianming Chen, Yu Lei, Meinan Wang

Abstract:

The dikaryotic basidiomycete fungus, Puccinia striiformis, causes stripe rust, one of the most important diseases of wheat and barley worldwide. The pathogen reproduces largely asexually, and asexual recombination has been hypothesized to be one of the mechanisms underlying its variation. To test the hypothesis and understand the genetic process of asexual recombination, somatic recombinant isolates were obtained under controlled conditions by inoculating susceptible host plants with a mixture of equal quantities of urediniospores of isolates with different virulence patterns and selecting through a series of inoculations on host plants carrying different genes for resistance to one of the parental isolates. The potential recombinant isolates were phenotypically characterized by virulence testing on the set of 18 wheat lines used to differentiate races of the wheat stripe rust pathogen, P. striiformis f. sp. tritici (Pst), for combinations of Pst isolates; or on both the wheat differentials and a set of 12 barley differentials, used to identify races of the barley stripe rust pathogen, P. striiformis f. sp. hordei (Psh), for combinations of a Pst isolate and a Psh isolate. The progeny and parental isolates were also genotypically characterized with 51 simple sequence repeat and 90 single-nucleotide polymorphism markers. From nine combinations of parental isolates, 68 potential recombinant isolates were obtained, of which 33 (48.5%) had virulence patterns similar to one of the parental isolates and 35 (51.5%) had virulence patterns distinct from either parental isolate. Of the 35 isolates with distinct virulence patterns, 11 were identified as races previously detected in natural collections and 24 were identified as new races. The molecular marker data confirmed 66 of the 68 isolates as recombinants. The percentages of parental marker alleles ranged from 0.9% to 98.9% and differed significantly from equal proportions in the recombinant isolates. Except for a couple of combinations, the greater or lesser contribution was not specific to any particular parental isolate, as the same parental isolate contributed more to some progeny isolates but less to other progeny isolates of the same combination. Unequal contribution by parental isolates appears to be a general rule in somatic recombination of the stripe rust fungus, which may be used to distinguish asexual recombination from sexual recombination in studying the evolutionary mechanisms of this highly variable fungal pathogen.
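As a purely illustrative sketch of how deviation from equal parental contribution can be tested for each recombinant isolate, the snippet below applies a binomial test against the 50:50 expectation; the marker counts are invented for illustration and are not the study's data (the study genotyped 51 SSR and 90 SNP markers per isolate).

```python
# Hypothetical example: test whether each recombinant isolate's parental
# marker alleles deviate from the 50:50 ratio expected under equal contribution.
from scipy.stats import binomtest

# (alleles inherited from parent A, alleles inherited from parent B) -- invented counts
progeny_markers = {
    "R1": (125, 16),
    "R2": (72, 69),
    "R3": (3, 130),
}

for isolate, (from_a, from_b) in progeny_markers.items():
    n = from_a + from_b
    pct_a = 100 * from_a / n
    result = binomtest(from_a, n, p=0.5)   # H0: equal parental contribution
    print(f"{isolate}: {pct_a:.1f}% parent-A alleles, p = {result.pvalue:.3g}")
```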

Keywords: molecular markers, Puccinia striiformis, somatic recombination, stripe rust

Procedia PDF Downloads 235
721 Improving the Weekend Handover in General Surgery: A Quality Improvement Project

Authors: Michael Ward, Eliana Kalakouti, Andrew Alabi

Abstract:

Aim: The handover process is recognized as a vulnerable step in the patient care pathway where errors are likely to occur. As such, it is a major preventable cause of patient harm arising from the human factors of poor communication and systematic error. The aim of this study was to audit the general surgery department’s weekend handover process against the criteria for safe handover set out by the Royal College of Surgeons (RCS). Method: A retrospective audit was conducted of the General Surgery department’s Friday patient lists and patient medical notes used for weekend handover in a London-based District General Hospital (DGH). Medical notes were analyzed against the RCS's suggested criteria for handover. A standardized paper weekend handover proforma was then developed in accordance with the guidelines and circulated in the department, and a post-intervention audit was conducted using the same methods (cycle 1). For cycle 2, we introduced an electronic weekend handover tool along with Electronic Patient Records (EPR). After a one-month period, a second post-intervention audit was conducted. Results: Following cycle 1, the paper weekend handover proforma was used in only 23% of patient notes. However, when it was used, 100% of notes had a plan for the weekend, a diagnosis and a location, but only 40% documented potential discharge status and 40% ceiling of care status. Qualitative feedback was that it was time-consuming to fill out. Better results were achieved following cycle 2, with 100% of patient notes containing the electronic proforma. Results improved, with every patient having a documented ceiling of care, discharge status and location. Only 55% of patients had a documented past surgical history; however, this was still an increase compared to the paper proforma (45%). When comparing the electronic versus the paper proforma, there was an increase in documentation in every handover domain outlined by the RCS, with an average relative increase of 1.72 times (p<0.05). Qualitative feedback was that the autofill function made it easy to use and simple to view. Conclusion: These results demonstrate that the implementation of an electronic autofill handover proforma significantly improved handover compliance with RCS guidelines, thereby improving the transmission of information from weekday to weekend teams.
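A minimal sketch of this kind of before-and-after comparison is shown below; the counts and the choice of Fisher's exact test are assumptions made for illustration and are not the audit's actual figures or statistical method.

```python
# Illustrative only: compare the proportion of notes documenting each RCS
# handover domain under the paper vs. electronic proforma (hypothetical counts).
from scipy.stats import fisher_exact

# domain: (documented_paper, total_paper, documented_electronic, total_electronic)
domains = {
    "ceiling of care": (8, 20, 20, 20),
    "discharge status": (8, 20, 20, 20),
    "past surgical history": (9, 20, 11, 20),
}

ratios = []
for name, (dp, n_paper, de, n_elec) in domains.items():
    p_paper, p_elec = dp / n_paper, de / n_elec
    ratios.append(p_elec / p_paper)
    _, p = fisher_exact([[de, n_elec - de], [dp, n_paper - dp]])
    print(f"{name}: {p_paper:.0%} -> {p_elec:.0%} "
          f"({p_elec / p_paper:.2f}x relative increase, p = {p:.3f})")

print(f"average relative increase: {sum(ratios) / len(ratios):.2f}x")
```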

Keywords: surgery, handover, proforma, electronic handover, weekend, general surgery

Procedia PDF Downloads 151
720 Impact Location From Instrumented Mouthguard Kinematic Data In Rugby

Authors: Jazim Sohail, Filipe Teixeira-Dias

Abstract:

Mild traumatic brain injury (mTBI) in non-helmeted contact sports is a growing concern due to the serious risk of injury. Extensive research into head kinematics in non-helmeted contact sports uses instrumented mouthguards that allow researchers to record accelerations and velocities of the head during and after an impact. This does not, however, allow the location of the impact on the head, or its magnitude and orientation, to be determined. This research proposes and validates two methods to quantify impact locations from instrumented mouthguard kinematic data, one using rigid body dynamics, the other utilizing machine learning. The rigid body dynamics technique focuses on establishing and matching moments from Euler’s and torque equations in order to find the impact location on the head. The methodology is validated with impact data collected from a lab test with a dummy head fitted with an instrumented mouthguard. Additionally, a Hybrid III dummy head finite element model was utilized to create synthetic kinematic data sets for impacts at varying locations to validate the impact location algorithm. The algorithm calculates accurate impact locations; however, it requires preprocessing of live data, which is currently done by cross-referencing data timestamps with video footage. The machine learning technique focuses on eliminating this preprocessing step by establishing trends within the time-series signals from instrumented mouthguards to determine the impact location on the head. An unsupervised learning technique is used to cluster together impacts within similar regions from an entire time-series signal. The kinematic signals from the mouthguards are converted to the frequency domain before a clustering algorithm groups similar signals within a time series that may span the length of a game. Impacts are clustered into predetermined location bins. The same Hybrid III dummy head finite element model is used to create impacts that closely replicate on-field impacts in order to generate synthetic time-series datasets consisting of impacts at varying locations; these data sets are used to validate the machine learning technique. The rigid body dynamics technique provides a good method to establish accurate impact locations for signals that have already been labeled as true impacts and filtered out of the entire time series. The machine learning technique, however, can be applied to long time-series signal data, although it provides impact locations only within predetermined regions of the head. Additionally, the machine learning technique can be used to eliminate false impacts captured by the sensors, saving additional time for data scientists working with instrumented mouthguard kinematic data, as validating true impacts against video footage would not be required.
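A minimal sketch of the frequency-domain clustering idea is given below, with k-means standing in for whichever unsupervised algorithm the authors used; the array shapes, sample counts and number of location bins are illustrative assumptions.

```python
# Sketch: convert fixed-length 3-axis impact windows to magnitude spectra and
# cluster them into a predetermined number of head-location bins.
import numpy as np
from numpy.fft import rfft
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
impacts = rng.normal(size=(200, 3, 256))     # 200 impacts, 3 axes, 256 samples each (synthetic)

# One feature vector per impact: magnitude spectrum of each axis, flattened
features = np.abs(rfft(impacts, axis=-1)).reshape(len(impacts), -1)

n_location_bins = 6                          # assumed number of predetermined head regions
labels = KMeans(n_clusters=n_location_bins, n_init=10, random_state=0).fit_predict(features)
print(labels[:10])                           # cluster (location-bin) label for the first impacts
```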

Keywords: head impacts, impact location, instrumented mouthguard, machine learning, mTBI

Procedia PDF Downloads 210
719 Comparison of High-Speed Railway Bridge Foundation Design

Authors: Hussein Yousif Aziz

Abstract:

This paper discusses the design and analysis of a bridge foundation subjected to train loads according to three codes, namely the AASHTO code, British Standard BS 8004 (1986) and the Chinese code TB10002.5-2005. The study focused on the manual design and analysis of the bridge foundation with the three codes, to establish which code gives the better design and best controls the problem of high settlement under the applied loads. The results showed that the Chinese code is costly, in that the number of reinforcement bars in the pile cap and piles is greater than with the AASHTO and BS codes for the same dimensions. Settlement of the bridge was calculated using data collected from the project site. The vertical ultimate bearing capacity of a single pile is also discussed for the three codes. Further analyses using the two-dimensional Plaxis program and other programs such as SAP2000 14 and PROKON were carried out to calculate many parameters; the maximum values of vertical displacement are close to the calculated ones. The results indicate that the AASHTO code is more economical and safer with respect to the bearing capacity of a single pile. The purpose of this project is to study the pier on the basis of the pile foundation design. A 32 m simply supported box-section beam sits on top of the structure, and the bridge pier is round. The main components of the design are the calculation of the pile foundation and of the settlement. According to the available data, 1.0 m diameter bored piles of 48 m length were chosen, laid out in a rectangular pile cap of 12 m × 9 m. Because of the interaction factors of pile groups, the load-bearing capacity of the single pile must be checked, and the punching resistance, shear strength and bending of the pile cap are all very important to the stability of the structure. Checking the bearing capacity of the soft sub-soil under the pile foundation is also necessary. This project provides a deeper analysis and comparison of pile foundation design schemes. First, a brief description of the construction situation of the bridge is given. Then, using the actual geological features of the site and the load from the superstructure, the paper analyzes the bearing capacity and settlement of a single pile; the Equivalent Pier Method is used to calculate and analyze the settlement of the piles.
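As a hedged illustration of the single-pile capacity check that underlies all three codes (the partial factors and the values of shaft friction and end bearing differ between AASHTO, BS 8004 and TB10002.5-2005, and the numbers below are assumptions rather than the project's design values), the ultimate capacity of a bored pile of diameter $D$ and length $L$ can be written as

$$Q_u = Q_s + Q_p = \bar{f}_s\,\pi D L + q_p\,\frac{\pi D^2}{4}, \qquad Q_{\text{allow}} = \frac{Q_u}{FS}.$$

For the 1.0 m diameter, 48 m long bored pile described here, an assumed average shaft friction $\bar{f}_s = 50$ kPa and end bearing $q_p = 2$ MPa would give $Q_u \approx 50 \times \pi \times 1.0 \times 48 + 2000 \times \pi/4 \approx 7540 + 1571 \approx 9.1\times10^3$ kN, which each code then reduces differently through its own safety or resistance factors.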

Keywords: pile foundation, settlement, bearing capacity, civil engineering

Procedia PDF Downloads 415
718 Chemical Composition and Insecticidal Properties of Moroccan Plant Extracts against Dactylopius Opuntiae (Cockerell) Under Laboratory and Greenhouse Conditions

Authors: Imane Naboulsi, Mansour Sobeh, Rachid Lamzira, Karim El Fakhouri, Widad Ben Bakrim, Chaimae Ramdani, Rachid Boulamtat, Mustapha El Bouhssini, Jane Ward, Abdelaziz Yasri, Aziz Aboulmouhajir

Abstract:

The wild cochineal Dactylopius opuntiae (Cockerell) (Hemiptera: Dactylopiidae) is the major insect pest of the prickly pear Opuntia ficus-indica (L.) in Morocco and has caused enormous socio-economic and environmental losses to this crop in recent years. This study aimed to investigate the insecticidal potential of six aqueous (100% water) and methanolic (20/80 (v/v) MeOH/H2O) extracts obtained from aromatic and medicinal plants growing in arid and semi-arid regions of Morocco to control nymphs and adult females of D. opuntiae under laboratory and greenhouse conditions. Under laboratory conditions, the aqueous extract of Atriplex halimus at 5% caused significant mortality, reaching 71% of nymphs four days after application and 88% of adult females of D. opuntiae eight days post-treatment. Under greenhouse conditions, the aqueous extract of A. halimus combined with black soap at 10 g/L showed the highest mortality rate of nymphs, 100%, four days after application. Adult female mortality increased significantly to reach 83.75%, 14 days after the second application of the A. halimus aqueous extract at 5%. Phytochemical analysis of the water extract of A. halimus revealed a high content of saponins (24.09 ± 0.71 mg SSE/g DW) compared to the other plant extracts, which was confirmed by LC-MS characterization showing the presence of 36 triterpenoid saponin compounds (derived from olean-12-en-28-oic acid), in addition to phytoecdysones, simple carboxylic acids and flavonoids. These findings show that the aqueous extract of A. halimus could be incorporated into the management package as a biological pesticide to control the wild cochineal, offering a safe alternative to chemical insecticides.

Keywords: Dactylopius opuntiae, Opuntia ficus-indica (L.), plant extracts, toxicity, Atriplex halimus, saponins

Procedia PDF Downloads 137
717 Fine Needle Aspiration Biopsy of Thyroid Nodules

Authors: Ilirian Laçi, Alketa Spahiu

Abstract:

Large strumas (goitres) of the thyroid gland can be observed by simple visual inspection in everyday life. Medical practices see patients with palpable nodules of the thyroid gland, mainly nodules around 10 millimeters in size. Moreover, cases that are negative on palpation often prove positive on ultrasound examination. The use of ultrasound for diagnosis has therefore increased the number of patients with thyroid nodules over the last couple of decades in all countries, Albania included. Thus, there is evidence of an increased number of patients affected by this pathology, among whom female patients dominate. Demographically, the capital shows high numbers owing to its large population, but of interest is the high incidence in areas distant from the sea. While no significant link with related pathologies was evidenced, a hereditary element was evident in thyroid nodules. When we speak of thyroid nodules, we should consider hyperplasia, neoplasia and inflammatory diseases that cause them. This increase parallels the worldwide rise in the incidence of thyroid nodules, of which malignant cases account for about 5% and do not depend on size. Given these numbers, with most thyroid nodules being benign, the main objective of examining the nodules was to distinguish benign from malignant cases in order to avoid unnecessary surgery. The subjects of this study were 212 patients who underwent fine-needle aspiration (FNA) under ultrasound guidance at the Medical University Center of Tirana. All the patients came to the Mother Teresa University Hospital from public and private hospitals and other polyclinics, and had an ultrasound examination before visiting the Center of Nuclear Medicine for thyroid scintigraphy between September 2016 and September 2017. All patients had been examined by thyroid ultrasound prior to scintigraphy. The ultrasound included evaluation of the number of nodules, their size, their solid, cystic or solid-cystic structure, echogenicity on the gray scale, the presence of calcification, the presence of lymph nodes and adenopathy, and correlation with the cytology results from the Laboratory of Pathological Anatomy of the Medical University Center of Tirana.

Keywords: thyroid nodes, fine needle aspiration, ultrasound, scintigraphy

Procedia PDF Downloads 97
716 Arterial Compliance Measurement Using Split Cylinder Sensor/Actuator

Authors: Swati Swati, Yuhang Chen, Robert Reuben

Abstract:

Coronary stents are tube-shaped devices placed in coronary arteries to keep the arteries open in the treatment of coronary artery disease. Coronary stents are routinely deployed to clear atheromatous plaque. The stent essentially applies an internal pressure to the artery and, because its structure is cylindrically symmetrical, this may introduce some abnormalities in the final arterial shape. The goal of the project is to develop segmented circumferential arterial compliance measuring devices which can (eventually) be deployed in vivo. The segmentation of the device will allow the mechanical asymmetry of any stenosis to be assessed. The purpose is to assess the quality of arterial tissue for applications in tailored stents and in the assessment of aortic aneurysm. Measurement of arterial distensibility is of utmost importance in diagnosing cardiovascular disease and in predicting future cardiac events or coronary artery disease. In order to arrive at some generic outcomes, a preliminary experimental set-up has been devised to establish the measurement principles for the device at macro scale. The measurement methodology consists of a strain gauge system monitored in real time by LABVIEW software. This virtual instrument employs a balloon within a gelatine model contained in a split cylinder with strain gauges fixed on it. The instrument allows automated measurement of the effect of air pressure on the gelatine and measurement of strain with respect to time and pressure during inflation. A simple creep model of compliance has been applied to the results in order to extract some measures of arterial compliance. The results obtained from the experiments have been used to study the effect of air pressure on strain at varying time intervals. The results clearly demonstrate that, with decreasing arterial volume and increasing arterial pressure, arterial strain increases, thereby decreasing the arterial compliance. The measurement system could lead to the development of portable, inexpensive and compact equipment and could prove to be an efficient automated compliance measurement device.
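A hedged sketch of the kind of creep analysis described above is shown below: a Kelvin–Voigt-style creep law is fitted to strain–time data recorded at constant inflation pressure, and a compliance-like measure is read off as strain per unit pressure. The model form and the data are assumptions for illustration only, not the study's actual model or measurements.

```python
# Fit a simple creep model strain(t) = eps_inf * (1 - exp(-t / tau)) to synthetic
# strain-gauge readings taken at a fixed balloon pressure, then report strain/pressure.
import numpy as np
from scipy.optimize import curve_fit

def creep(t, eps_inf, tau):
    """Creep strain approaching eps_inf with retardation time tau (seconds)."""
    return eps_inf * (1.0 - np.exp(-t / tau))

pressure_kpa = 20.0                                        # assumed constant inflation pressure
t = np.linspace(0.0, 60.0, 121)                            # seconds
strain = creep(t, 0.012, 8.0) + np.random.default_rng(1).normal(0.0, 3e-4, t.size)

(eps_inf, tau), _ = curve_fit(creep, t, strain, p0=[0.01, 5.0])
print(f"long-term strain = {eps_inf:.4f}, retardation time = {tau:.1f} s")
print(f"compliance-like measure (strain / pressure) = {eps_inf / pressure_kpa:.2e} 1/kPa")
```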

Keywords: arterial compliance, atheromatous plaque, mechanical symmetry, strain measurement

Procedia PDF Downloads 273
715 A Study on the Performance Improvement of Zeolite Catalyst for Endothermic Reaction

Authors: Min Chang Shin, Byung Hun Jeong, Jeong Sik Han, Jung Hoon Park

Abstract:

In modern times, as flight speeds have increased with improvements in aircraft and missile engine performance, thermal loads have also increased. Because of the frictional heating of high-speed airflow over the surface of the vehicle, it is not easy to remove the excess heat by simple air cooling. For this reason, cooling through the endothermic reaction of the fuel in a high-speed vehicle is attracting attention. There are two main ways in which the fuel absorbs heat. The first is physical heat absorption: the sensible heat that accompanies a rise in temperature. The second is chemical heat absorption: the heat of reaction absorbed as the fuel decomposes. Generally, since the decomposition reaction of the fuel proceeds only at high temperature, it is not very efficient in cooling a high-speed flight vehicle. When a catalyst is used, however, decomposition proceeds at a lower temperature, thereby increasing the cooling efficiency. If the catalyst is used as a powder, though, it can enter and damage the engine, or its performance can deteriorate due to sintering. In pellet form, catalyst loss can be prevented, but because the specific surface area of a pellet is small, the efficiency of the catalyst is low, and pellets can interfere with the flow of fuel, resulting in pressure loss and problems with fuel injection. In this study, we tried to maximize the performance of the catalyst by preparing a hollow-fiber-type pellet of zeolite ZSM-5, which has a higher heat absorption than other conventional pellets. The hollow-fiber-type pellet was prepared by the phase inversion method. It has finger-like and sponge-like pores, so it has a higher specific surface area than conventional pellets. The crystal structure of the prepared ZSM-5 catalyst was confirmed by XRD, and the characteristics of the catalyst were analyzed with a TPD/TPR instrument. This study was conducted as part of the Basic Research Project (Pure-17-20) of the Defense Acquisition Program Administration.
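A hedged way to express the two heat-absorption mechanisms described above, per unit mass of fuel (the symbols are illustrative and no values from the study are implied), is

$$q_{\text{total}} = \int_{T_0}^{T} c_p(T')\,\mathrm{d}T' + x\,\Delta H_{\text{rxn}},$$

where $c_p$ is the specific heat of the fuel (sensible term), $x$ is the fraction of fuel decomposed, and $\Delta H_{\text{rxn}}$ is the endothermic heat of the decomposition reaction (chemical term); the role of the catalyst is to make $x$ appreciable at a lower temperature $T$.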

Keywords: catalyst, endothermic reaction, high-speed vehicle cooling, zeolite, ZSM-5

Procedia PDF Downloads 303
714 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of CKD. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key variable used to predict CKD, and these models will enable affordable and effective screening for CKD even with incomplete patient data, such as in the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI) and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models demonstrate the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
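A hedged sketch of the modelling pipeline described above is given below: univariate Cox proportional hazards screening of candidate predictors followed by a tree-ensemble classifier (Random Forest shown; XGBoost is analogous). The toy data frame, column names, thresholds and choice of libraries are assumptions, not the study's variables or results.

```python
# Illustrative pipeline: Cox screening of non-laboratory variables, then a
# Random Forest classifier trained on the screened predictors (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(20, 80, n),
    "sex": rng.integers(0, 2, n),
    "bmi": rng.normal(25, 4, n),
    "waist": rng.normal(85, 12, n),
    "follow_up_years": rng.uniform(1, 15, n),   # time to event or censoring
    "ckd_event": rng.integers(0, 2, n),         # 1 = developed CKD
})

candidates = ["age", "sex", "bmi", "waist"]

# Univariate Cox proportional hazards screening of each candidate predictor
selected = []
for col in candidates:
    cph = CoxPHFitter().fit(df[[col, "follow_up_years", "ckd_event"]],
                            duration_col="follow_up_years", event_col="ckd_event")
    if cph.summary.loc[col, "p"] < 0.05:
        selected.append(col)

# Non-laboratory model on the screened variables (fall back to all candidates)
X = df[selected or candidates]
X_train, X_test, y_train, y_test = train_test_split(X, df["ckd_event"], random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```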

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 100