Search results for: symptom resolution time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18967

17917 Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting

Authors: Ying Su, Morgan C. Wang

Abstract:

Long-term time series forecasting is an important research area for automated machine learning (AutoML). Currently, forecasting models based on either machine learning or statistical learning are usually built by experts and require significant manual effort, from model construction and feature engineering to hyper-parameter tuning. With so many human interventions, automation is not possible. To overcome these limitations, this article proposes using the memory state of recurrent neural networks (RNN) to perform long-term time series prediction. We show that the proposed approach is better than the traditional Autoregressive Integrated Moving Average (ARIMA) model. In addition, we also found it to be better than other network architectures, including Fully Connected Neural Networks (FNN), Convolutional Neural Networks (CNN), and Nonpooling Convolutional Neural Networks (NPCNN).
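As a toy illustration of the memory-state idea (not the authors' trained network), a single-unit Elman RNN with hand-set weights can be warmed up on observed data and then rolled forward in closed loop, feeding each prediction back as the next input; all weights below are illustrative assumptions:

```python
import math

def rnn_step(x, h, w_xh, w_hh, w_hy):
    """One Elman RNN step: the hidden state h is the 'memory'
    that carries information across time steps."""
    h_new = math.tanh(w_xh * x + w_hh * h)
    return w_hy * h_new, h_new

def forecast(series, horizon, w_xh=0.5, w_hh=0.9, w_hy=1.0):
    """Warm up the hidden state on the observed series, then roll the
    network forward, feeding each prediction back as the next input."""
    h, y = 0.0, 0.0
    for x in series:          # warm-up on observed data
        y, h = rnn_step(x, h, w_xh, w_hh, w_hy)
    preds = []
    for _ in range(horizon):  # closed-loop long-term forecasting
        y, h = rnn_step(y, h, w_xh, w_hh, w_hy)
        preds.append(y)
    return preds

print(forecast([0.1, 0.2, 0.3, 0.4], horizon=3))
```

In a real AutoML setting the weights would be learned; the point here is only that the recursive hidden state lets the same step function produce arbitrarily long forecasts.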

Keywords: automated machine learning, autoregressive integrated moving average, neural networks, time series analysis

Procedia PDF Downloads 75
17916 Modeling Waiting and Service Time for Patients: A Case Study of Matawale Health Centre, Zomba, Malawi

Authors: Moses Aron, Elias Mwakilama, Jimmy Namangale

Abstract:

Spending long hours in queues for a basic service remains a common challenge in most developing countries, including Malawi. In the health sector in particular, the Out-Patient Department (OPD) experiences long queues, which puts the lives of patients at risk. However, by using queuing analysis to understand the nature of the problem and the efficiency of the service system, such problems can be abated. Depending on the kind of service, the literature proposes different possible queuing models. However, rather than using the generalized models assumed in the literature, real case study data can give a deeper understanding of the particular problem model and of how such a model can vary from one day to another and from one case to another. As such, this study uses data obtained from one urban HC for BP, pediatric and general OPD cases to investigate the average queuing time for patients within the system. It seeks to identify the proper queuing model by investigating the kinds of distribution functions governing patients' arrival time, inter-arrival time, waiting time and service time. Compared with the standard values set by the WHO, the study found that patients at this HC spend more time waiting than being served. On model investigation, different days presented different models, ranging from an assumed M/M/1 and M/M/2 to M/Er/2. Through sensitivity analysis, the commonly assumed M/M/1 model failed to fit the data, whereas an M/Er/2 model fitted well. An M/Er/3 model seemed good in terms of resource utilization, suggesting a need to increase medical personnel at this HC; however, an M/Er/4 model was shown to cause more idleness of human resources.
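For intuition about such multi-server models, the mean queue wait implied by an M/M/c system can be computed directly with the Erlang C formula; the rates below are illustrative assumptions, not the Matawale data, and the M/Er models the study favors would additionally need Erlang-distributed service times:

```python
import math

def erlang_c(lam, mu, c):
    """Probability an arriving patient must wait in an M/M/c queue
    (Erlang C); lam = arrival rate, mu = service rate per server."""
    rho = lam / (c * mu)
    if rho >= 1:
        raise ValueError("unstable queue: utilization >= 1")
    a = lam / mu
    tail = a**c / math.factorial(c) / (1 - rho)
    return tail / (sum(a**k / math.factorial(k) for k in range(c)) + tail)

def mean_wait(lam, mu, c):
    """Expected time in queue: Wq = C(c, a) / (c*mu - lam)."""
    return erlang_c(lam, mu, c) / (c * mu - lam)

# e.g. 10 patients/hour arriving, each server handling 6/hour
for servers in (2, 3, 4):
    print(servers, "servers:", round(mean_wait(10, 6, servers) * 60, 1), "min")
```

Even this simplified model reproduces the paper's staffing trade-off: adding a third server cuts the wait dramatically, while a fourth mostly buys idle capacity.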

Keywords: health care, out-patient department, queuing model, sensitivity analysis

Procedia PDF Downloads 417
17915 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. 
The results showed promising accuracy rates for predicting diseases from symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images, which can improve the accuracy of disease diagnosis and ultimately lead to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
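The hard-voting idea behind the symptom ensemble can be sketched in a few lines. The base learners below are deliberately toy k-NN variants over made-up symptom vectors, standing in for the Random Forest, K-Nearest Neighbor and Support Vector Machine classifiers the study actually trained:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Tiny k-NN over binary symptom vectors (Hamming distance)."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    neighbours = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

def majority_vote(predictions):
    """Hard-voting ensemble: the class predicted by most models wins."""
    return Counter(predictions).most_common(1)[0][0]

# toy symptom vectors: (fever, cough, chest_pain) -> diagnosis
train = [((1, 1, 0), "flu"), ((1, 1, 1), "pneumonia"),
         ((0, 1, 1), "pneumonia"), ((1, 0, 0), "flu"),
         ((0, 0, 1), "pneumonia"), ((1, 1, 0), "flu")]

query = (1, 1, 1)
votes = [knn_predict(train, query, k=1),
         knn_predict(train, query, k=3),
         knn_predict(train, query, k=5)]
print(majority_vote(votes))
```

The design point is that disagreement among base learners is resolved by the vote, which is one reason ensembles tend to be more accurate than any single model.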

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 120
17914 Experimental Study on Ultrasonic Shot Peening Forming and Surface Properties of AALY12

Authors: Shi-hong Lu, Chao-xun Liu, Yi-feng Zhu

Abstract:

Ultrasonic shot peening (USP) of AALY12 sheet was studied. Several characteristics (arc height, surface roughness, surface topography and microhardness) were measured under different USP process parameters. The research shows that the radius of curvature of the shot-peened sheet increases as processing time and electric current decrease, and increases with increasing pin diameter; the radius of curvature reaches a saturation level after a specific processing time and electric current. An empirical model of the relationship between the radius of curvature and pin diameter, electric current and time was also obtained. The research further shows that the increase in surface and vertical microhardness of the material is more pronounced at longer times and higher electric currents, reaching up to 20% and 28%, respectively.
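The abstract does not give the form of its empirical model, but the fitting step can be sketched. Here a hypothetical saturation curve for radius of curvature versus peening time is fitted to synthetic data by grid-search least squares; the functional form, parameter values and units are all assumptions for illustration:

```python
import math

def saturating(t, r_sat, tau):
    """Assumed saturation curve: radius grows with time, then levels off."""
    return r_sat * (1.0 - math.exp(-t / tau))

def fit_grid(times, radii, r_range, tau_range):
    """Coarse grid-search least squares for the two parameters."""
    best = None
    for r_sat in r_range:
        for tau in tau_range:
            sse = sum((saturating(t, r_sat, tau) - r) ** 2
                      for t, r in zip(times, radii))
            if best is None or sse < best[0]:
                best = (sse, r_sat, tau)
    return best[1], best[2]

# synthetic 'measurements' generated from R_sat = 800 mm, tau = 4 min
times = [1, 2, 4, 8, 16]
radii = [saturating(t, 800, 4) for t in times]
r_sat, tau = fit_grid(times, radii,
                      r_range=range(600, 1001, 50),
                      tau_range=[x / 2 for x in range(2, 17)])
print(r_sat, tau)
```

A real empirical model would be fitted to measured arc-height data and would likely include pin diameter and current as additional regressors.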

Keywords: USP forming, surface properties, radius of curvature, residual stress

Procedia PDF Downloads 505
17913 Time Lag Analysis for Readiness Potential by a Firing Pattern Controller Model of a Motor Nerve System Considered Innervation and Jitter

Authors: Yuko Ishiwaka, Tomohiro Yoshida, Tadateru Itoh

Abstract:

Humans unconsciously make a preparation, called readiness potential (RP), before becoming aware of their own decisions. For example, when recognizing a button and pressing it, RP peaks are observed 200 ms before the initiation of the movement. It is known that preparatory movements are acquired before actual movements, but it is still not well understood how humans obtain the RP during their growth. On the question of why the brain must respond earlier, we assume that humans have to adapt to dangerous environments to survive, and therefore acquire behavior that covers the various time lags distributed in the body. Without RP, humans could not act quickly enough to avoid dangerous situations. In taking action, the brain makes decisions, and signals are transmitted through the spinal cord to the muscles, after which the body moves according to the laws of physics. Our research focuses on the time lag of the neural signal transmitted from the brain to the muscle via the spinal cord; this time lag is one of the essential factors in readiness potential. We propose a firing-pattern controller model of a motor nerve system that considers innervation and jitter, which produce time lag. In our simulation, we adopt innervation and jitter in our proposed muscle-skeleton model, because these two factors create infinitesimal time lags. The Q10 Hodgkin-Huxley model is adopted to calculate action potentials, because the refractory period produces a more significant time lag for continuous firing. Keeping muscle power constant requires cooperative firing of motor neurons, because the refractory period stifles the continuous firing of a single neuron. One more factor producing time lag is slow- versus fast-twitch fibers; the expanded Hill-type model is adopted to calculate power and time lag. We simulate our muscle-skeleton model by controlling the firing pattern and discuss the relationship between the physical and neural time lags.
For our discussion, we analyze the time lag in a knee-bending simulation. The law of inertia caused the most influential time lag; the next most significant was the time to generate the action potential, induced by innervation and jitter. In our simulation, the time lag at the beginning of the knee movement is 202 ms to 203.5 ms, which means that the readiness potential should be prepared more than 200 ms before decision making.
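As a back-of-the-envelope sketch of how transmission lags accumulate (all values illustrative assumptions, not the paper's measured 202 ms to 203.5 ms figure), one can add axonal conduction time to a few jittered synaptic delays:

```python
import random

def conduction_delay(dist_m, velocity_mps, synapses, jitter_sd_ms, rng):
    """Total brain-to-muscle lag: axonal conduction time plus one
    jittered synaptic delay per relay (illustrative values only)."""
    axonal_ms = dist_m / velocity_mps * 1000.0
    synaptic_ms = sum(max(0.0, rng.gauss(0.5, jitter_sd_ms))
                      for _ in range(synapses))
    return axonal_ms + synaptic_ms

rng = random.Random(42)
lags = [conduction_delay(dist_m=1.0, velocity_mps=60.0,
                         synapses=2, jitter_sd_ms=0.1, rng=rng)
        for _ in range(1000)]
print(min(lags), max(lags))
```

Even this crude model shows why the trial-to-trial lag is a distribution rather than a constant: the deterministic conduction term sets the floor, while jitter spreads the total.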

Keywords: firing patterns, innervation, jitter, motor nerve system, readiness potential

Procedia PDF Downloads 808
17912 Investigate the Effects of Anionic Surfactant on THF Hydrate

Authors: Salah A. Al-Garyani, Yousef Swesi

Abstract:

Gas hydrates can be hazardous to upstream operations. On the other hand, the high gas storage capacity of hydrate may be utilized for natural gas storage and transport. Research on the promotion of hydrate formation, as related to natural gas storage and transport, has received relatively little attention. The primary objective of this study is to gain a better understanding of the effects of ionic surfactants, particularly their molecular structures and concentrations, on the formation of tetrahydrofuran (THF) hydrate, which is often used as a model hydrate former for screening hydrate promoters or inhibitors. The surfactants studied were sodium n-dodecyl sulfate (SDS) and sodium n-hexadecyl sulfate (SHS). Our results show that, at concentrations below the solubility limit, the induction time decreases with increasing surfactant concentration. At concentrations near or above the solubility limit, however, the surfactant concentration no longer has any effect on the induction time. These observations suggest that the effect of surfactant on THF hydrate formation is associated with surfactant monomers, not with the formation of micelles as previously reported. The lowest induction time (141.25 ± 21 s, n = 4) was observed in a solution containing 7.5 mM SDS. The induction time decreases by a factor of three at concentrations near or above the solubility limit, compared to that without surfactant.

Keywords: tetrahydrofuran, hydrate, surfactant, induction time, monomers, micelle

Procedia PDF Downloads 394
17911 Decision-Making Under Uncertainty in Obsessive-Compulsive Disorder

Authors: Helen Pushkarskaya, David Tolin, Lital Ruderman, Ariel Kirshenbaum, J. MacLaren Kelly, Christopher Pittenger, Ifat Levy

Abstract:

Obsessive-Compulsive Disorder (OCD) produces profound morbidity. Difficulties with decision making and intolerance of uncertainty are prominent clinical features of OCD. The nature and etiology of these deficits are poorly understood. We used a well-validated choice task, grounded in behavioral economic theory, to investigate differences in valuation and value-based choice during decision making under uncertainty in 20 unmedicated participants with OCD and 20 matched healthy controls. Participants’ choices were used to assess individual decision-making characteristics. Compared to controls, individuals with OCD were less consistent in their choices and less able to identify options that were unambiguously preferable. These differences correlated with symptom severity. OCD participants did not differ from controls in how they valued uncertain options when outcome probabilities were known (risk) but were more likely than controls to avoid uncertain options when these probabilities were imprecisely specified (ambiguity). These results suggest that the underlying neural mechanisms of valuation and value-based choices during decision-making are abnormal in OCD. Individuals with OCD show elevated intolerance of uncertainty, but only when outcome probabilities are themselves uncertain. Future research focused on the neural valuation network, which is implicated in value-based computations, may provide new neurocognitive insights into the pathophysiology of OCD. Deficits in decision-making processes may represent a target for therapeutic intervention.
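One common parameterization of risk and ambiguity attitudes in this literature (an assumption here; the abstract does not state its model) scales the outcome value by a risk-attitude exponent and discounts the winning probability in proportion to the ambiguity level:

```python
def subjective_value(amount, prob, ambiguity, alpha, beta):
    """Subjective value of a lottery: alpha < 1 models risk aversion,
    beta > 0 models ambiguity aversion (the stated probability is
    discounted further when it is imprecisely specified).
    alpha and beta here are hypothetical illustrative parameters."""
    return (prob - beta * ambiguity / 2.0) * amount ** alpha

# risky option: win 20 with known p = 0.5 (ambiguity level 0)
risky = subjective_value(20, 0.5, 0.0, alpha=0.8, beta=0.6)
# ambiguous option: same stated p = 0.5, but half the scale occluded
ambiguous = subjective_value(20, 0.5, 0.5, alpha=0.8, beta=0.6)
print(risky > ambiguous)
```

In such a model, the pattern reported above (normal risk attitudes but elevated ambiguity avoidance) would correspond to a typical alpha but an elevated beta in the OCD group.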

Keywords: obsessive compulsive disorder, decision-making, uncertainty intolerance, risk aversion, ambiguity aversion, valuation

Procedia PDF Downloads 598
17910 Basic Modal Displacements (BMD) for Optimizing the Buildings Subjected to Earthquakes

Authors: Seyed Sadegh Naseralavi, Mohsen Khatibinia

Abstract:

In structural optimization through meta-heuristic algorithms, structures are analyzed many times. For this reason, performing the analyses in a time-saving way is valuable. This point is accentuated in time-history analyses, which are especially time-consuming. To this end, peak-picking methods, also known as spectrum analyses, are generally utilized. However, such methods do not have the required accuracy, whether done by the square root of sum of squares (SRSS) or the complete quadratic combination (CQC) rule. This paper presents an efficient technique for evaluating the dynamic responses during the optimization process with high speed and accuracy. In the method, an initial design is first obtained using a static equivalent of the earthquake. Then, the displacements in the modal coordinates are computed; these are herein called basic modal displacements (BMD). For each new design of the structure, the responses can be derived by suitably scaling each of the BMD along time and amplitude and superposing them using the corresponding modal matrices. To illustrate the efficiency of the method, an optimization problem is studied. The results show that the proposed approach is a suitable replacement for conventional time-history and spectrum analyses in such problems.
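The superposition step can be sketched as follows. Only amplitude scaling of the stored modal histories is shown (the paper also scales along time, which is omitted here), and the mode shapes and displacement histories are toy numbers:

```python
def superpose(bmd, scales, mode_shapes):
    """u_j(t) = sum_i phi[i][j] * s_i * q_i(t): each stored modal
    history q_i is rescaled in amplitude by s_i and mapped back to
    the physical degrees of freedom through its mode shape."""
    steps, dofs = len(bmd[0]), len(mode_shapes[0])
    u = [[0.0] * dofs for _ in range(steps)]
    for i, history in enumerate(bmd):
        for t, q in enumerate(history):
            for j in range(dofs):
                u[t][j] += mode_shapes[i][j] * scales[i] * q
    return u

# two modes, two DOFs, three time steps (toy numbers)
bmd = [[1.0, 0.5, 0.0],      # q_1(t) stored from the initial design
       [0.2, 0.4, 0.2]]      # q_2(t)
phi = [[1.0, 0.8],           # mode shape 1 at DOF 1, DOF 2
       [1.0, -0.6]]          # mode shape 2
u = superpose(bmd, scales=[1.0, 2.0], mode_shapes=phi)
print(u[1])
```

The attraction of the approach is visible even in this sketch: once the histories q_i are stored, re-evaluating a new design costs a few multiplications per time step instead of a full time-history analysis.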

Keywords: basic modal displacements, earthquake, optimization, spectrum

Procedia PDF Downloads 342
17909 Cartographic Depiction and Visualization of Wetlands Changes in the North-Western States of India

Authors: Bansal Ashwani

Abstract:

Cartographic depiction and visualization of wetland changes is an important tool for mapping spatio-temporal information about wetland dynamics effectively and for comprehending the response of these water bodies in maintaining groundwater and the surrounding ecosystem. This is true for the states of north-western India, i.e., J&K, Himachal, Punjab, and Haryana, which are endowed with several natural wetlands in the flood plains or on the courses of their rivers. Thus, the present study documents, analyses and reconstructs the lost wetlands that existed in the flood plains of the major river basins of these states, i.e., Chenab, Jhelum, Satluj, Beas, Ravi, and Ghagar, at the beginning of the 20th century. To achieve this objective, the study has used multi-temporal, high- to medium-resolution satellite datasets since the 1960s, e.g., Corona (1960s/70s), Landsat (1990s-2017) and Sentinel (2017). The Sentinel (2017) imagery has been used for making the wetland inventory owing to its comparatively higher spatial resolution and multi-spectral bands. In addition, historical records, repeated photographs, historical maps, and field observations, including geomorphological evidence, were also used. Water index techniques, i.e., band ratioing, the normalized difference water index (NDWI) and the modified NDWI (MNDWI), have been compared and used to map the wetlands. The wetland types found in the north-western states have been categorized under the 19 classes suggested by the Space Application Centre, India. This yields a wetland inventory and a series of cartographic representations, including overlays of multiple temporal wetland-extent vectors. Preliminary results show general wetland shrinkage since the 1960s, with shrinkage rates varying from one wetland to another. In addition, it is observed that most wetlands have not been documented so far and do not even have names.
Moreover, the purpose is to highlight their disappearance in addition to establishing a baseline dataset that can serve as a tool for wetland planning and management. Finally, the applicability of cartographic depiction and visualization, historical map sources, repeated photographs and remote sensing data for reconstructing long-term wetland fluctuations, especially in the northern part of India, will be addressed.
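The water indices named above are simple band ratios; a minimal sketch with toy reflectance values (positive values generally flag open water):

```python
def ndwi(green, nir):
    """McFeeters NDWI: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    """Modified NDWI (Xu): SWIR replaces NIR to better suppress
    built-up and soil noise."""
    return (green - swir) / (green + swir)

# toy reflectances: a water pixel vs a vegetation pixel
print(ndwi(0.30, 0.05), ndwi(0.10, 0.40))
```

Applied per pixel to Corona-, Landsat- or Sentinel-derived reflectance, thresholding such an index is what produces the wetland-extent vectors that are overlaid across dates.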

Keywords: cartographic depiction and visualization, wetland changes, NDWI/MNDWI, geomorphological evidence and remote sensing

Procedia PDF Downloads 245
17908 The Legal Nature of Grading Decisions and the Implications for Handling of Academic Complaints in or out of Court: A Comparative Legal Analysis of Academic Litigation in Europe

Authors: Kurt Willems

Abstract:

This research examines complaints against grading in higher education institutions in four European regions: England and Wales, Flanders, the Netherlands, and France. The aim of the research is to examine the correlation between the applicable type of complaint handling on the one hand and selected qualities of the higher education landscape and of public law on the other. All selected regions report a rising number of complaints against grading decisions, not only in internal complaint handling within the institution but also judicially if the dispute persists. Some regions deem their administrative court system appropriate for grading disputes (France) or have even erected a specialty administrative court to facilitate access (Flanders, the Netherlands). At the same time, however, various types of (governmental) dispute resolution bodies have been established outside the judicial court system (England and Wales, and to a lesser extent France and the Netherlands). These differing dispute procedures do not seem coincidental. Public law issues, such as the underlying legal nature of the education institution and, ultimately, of the grading decision itself, have an impact on the way academic complaint procedures are developed. Indeed, in most of the selected regions, contractual disputes enjoy different legal protection than administrative decisions, making the legal qualification of the relationship between student and higher education institution highly relevant. At the same time, the scope of government competence over different types of higher education institutions, whether direct or indirect (e.g., through financing and quality control), is also relevant to understanding why certain dispute handling procedures have been established for students. To answer the above questions, the doctrinal and comparative legal method is used.
The normative framework is distilled from the relevant national legislative rules and their preparatory texts, the legal literature, the (published) case law on academic complaints and the available governmental reports. The research is mainly theoretical in nature, examining different topics of public law (mainly administrative law) and procedural law in the context of grading decisions. The internal appeal procedure within the education institution is largely left out of the scope of the research, as are different types of non-governmentally imposed cooperation between education institutions, given the public law angle of the research questions. The research results in a categorization of different academic complaint systems and an analysis of the possibility of introducing each of those systems in different countries, depending on their public law system and higher education system. By doing so, the research also adds to the debate on the public-private divide in higher education systems and its effect on academic complaint handling.

Keywords: higher education, legal qualification of education institution, legal qualification of grading decisions, legal protection of students, academic litigation

Procedia PDF Downloads 214
17907 An IM-COH Algorithm Neural Network Optimization with Cuckoo Search Algorithm for Time Series Samples

Authors: Wullapa Wongsinlatam

Abstract:

The back propagation algorithm (BP) is a widely used technique in artificial neural networks and has been applied as a tool for solving time series problems; however, it faces challenges such as long training times, a tendency to fall into local minima, and sensitivity to the initial weights and bias. This paper proposes an improvement of the BP technique called the IM-COH algorithm (IM-COH). By combining the IM-COH algorithm with the cuckoo search algorithm (CS), the result is the cuckoo search improved control output hidden layer algorithm (CS-IM-COH). This new algorithm handles sensitivity to the initial weights and bias better than the original BP algorithm. In this research, the CS-IM-COH algorithm is compared with the original BP, the IM-COH, and the original BP with CS (CS-BP). Furthermore, four selected benchmark time series samples are used for illustration. The research shows that the CS-IM-COH algorithm gives the best forecasting results on the selected samples.
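A minimal cuckoo search sketch on a toy objective (the sphere function) illustrates the metaheuristic itself; a Gaussian step is used here as a stand-in for the Lévy flight, and none of the IM-COH specifics are reproduced:

```python
import random

def sphere(x):
    """Toy objective: minimum 0 at the origin."""
    return sum(v * v for v in x)

def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=300, seed=1):
    """Simplified cuckoo search: each nest lays a new solution via a
    random step biased toward the current best (stand-in for the Levy
    flight), and a fraction pa of the worst nests is abandoned."""
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)]
             for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n_nests):
            new = [x + 0.3 * rng.gauss(0, 1) * (bx - x)
                   for x, bx in zip(nests[i], best)]
            j = rng.randrange(n_nests)       # replace a random nest
            if f(new) < f(nests[j]):
                nests[j] = new
        nests.sort(key=f)                    # abandon the worst nests
        for k in range(int(n_nests * (1 - pa)), n_nests):
            nests[k] = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=f)
    return best, f(best)

best, val = cuckoo_search(sphere, dim=3)
print(val)
```

In CS-BP-style hybrids, the decision variables would be the network's initial weights and biases rather than this toy vector, with the training error as the objective.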

Keywords: artificial neural networks, back propagation algorithm, time series, local minima problem, metaheuristic optimization

Procedia PDF Downloads 131
17906 Impact of Hybrid Optical Amplifiers on 16 Channel Wavelength Division Multiplexed System

Authors: Inderpreet Kaur, Ravinder Pal Singh, Kamal Kant Sharma

Abstract:

This paper addresses different configurations of optical amplifiers used with 16 channels in a Wavelength Division Multiplexed (WDM) system. Systems with 16 channels have been simulated to evaluate various parameters, Bit Error Rate (BER) and Quality Factor (Q), at threshold values over a wavelength range from 1471 nm to 1611 nm. Various combinations of configurations have been analyzed with EDFA and FRA, and the EDFA-FRA configuration was found satisfactory in terms of performance indices and stable region. The paper also compares the various parameters quantified with each configuration individually. It was found that the Q factor has a high value, with a low BER and high resolution, for the EDFA-FRA configuration.
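The Q factor and BER reported in such link simulations are linked by the standard Gaussian-noise approximation BER = 0.5 * erfc(Q / sqrt(2)), which is why a high Q directly implies a low BER:

```python
import math

def ber_from_q(q):
    """Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2)).
    Q = 6 corresponds to the classic ~1e-9 error-rate benchmark."""
    return 0.5 * math.erfc(q / math.sqrt(2))

for q in (3, 6, 7):
    print("Q =", q, "-> BER =", ber_from_q(q))
```

This relation is what makes Q a convenient single figure of merit when comparing EDFA, FRA and hybrid amplifier configurations channel by channel.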

Keywords: EDFA, FRA, WDM, Q factor, BER

Procedia PDF Downloads 336
17905 Performance Evaluation and Kinetics of Artocarpus heterophyllus Seed for the Purification of Paint Industrial Wastewater by Coagulation-Flocculation Process

Authors: Ifeoma Maryjane Iloamaeke, Kelvin Obazie, Mmesoma Offornze, Chiamaka Marysilvia Ifeaghalu, Cecilia Aduaka, Ugomma Chibuzo Onyeije, Claudine Ifunanaya Ogu, Ngozi Anastesia Okonkwo

Abstract:

This work investigated the effects of pH, settling time, and coagulant dosage on the removal of color, turbidity, and heavy metals from paint industrial wastewater using the seed of Artocarpus heterophyllus (AH) by the coagulation-flocculation process. The paint effluent was characterized physicochemically, while the AH coagulant was characterized instrumentally by Scanning Electron Microscopy (SEM), Fourier Transform Infrared spectroscopy (FTIR), and X-ray diffraction (XRD). A jar test experiment was used for the coagulation-flocculation process. The results showed that the paint effluent was polluted with color, turbidity (36000 NTU), mercury (1.392 mg/L), lead (0.252 mg/L), arsenic (1.236 mg/L), TSS (63.40 mg/L), and COD (121.70 mg/L). The maximum color removal efficiency was 94.33% at a dosage of 0.2 g/L and pH 2 at a constant time of 50 mins, and 74.67% at constant pH 2, a coagulant dosage of 0.2 g/L and 50 mins. The highest turbidity removal efficiency was 99.94% at 0.2 g/L and 50 mins at constant pH 2, and 96.66% at pH 2 and 0.2 g/L at a constant time of 50 mins. A mercury removal efficiency of 99.29% was achieved at the optimal condition of 0.8 g/L coagulant dosage, pH 8, and a constant time of 50 mins, and 99.57% at a coagulant dosage of 0.8 g/L, a time of 50 mins and constant pH 8. The highest lead removal efficiency was 99.76% at a coagulant dosage of 10 g/L and a time of 40 mins at constant pH 10, and 96.53% at pH 10, a coagulant dosage of 10 g/L and a constant time of 40 mins. For arsenic, the removal efficiency was 75.24% at 0.8 g/L coagulant dosage, a time of 40 mins, and a constant pH of 8. XRD imaging showed that the Artocarpus heterophyllus coagulant was crystalline before treatment and became amorphous after treatment. The SEM and FTIR results of the AH coagulant and sludge suggested changes in surface morphology and functional groups before and after treatment. The reaction kinetics were best described by a second-order model.
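The second-order kinetics conclusion corresponds to the integrated rate law 1/N(t) = 1/N0 + k*t; a sketch with an assumed rate constant (the k value below is illustrative, not taken from the study):

```python
def second_order_conc(n0, k, t):
    """Integrated second-order rate law: 1/N(t) = 1/N0 + k*t."""
    return 1.0 / (1.0 / n0 + k * t)

def removal_efficiency(n0, nt):
    """Percent of the pollutant (e.g. turbidity) removed."""
    return 100.0 * (n0 - nt) / n0

n0 = 36000.0                                 # raw turbidity, NTU
nt = second_order_conc(n0, k=5e-4, t=50)     # after 50 mins, assumed k
print(round(removal_efficiency(n0, nt), 2))
```

Fitting k is done in practice by regressing 1/N against t; a straight line with positive slope is what identifies the kinetics as second order.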

Keywords: Artocarpus heterophyllus, coagulation-flocculation, coagulant dosages, settling time, paint effluent

Procedia PDF Downloads 79
17904 The Roles of ECOWAS Parliament on Regional Integration of the West African Sub-Region

Authors: Sani Shehu, Mohd Afandi Salleh

Abstract:

A parliament is a law-making body found at the national, state, provincial and territorial levels, playing the parliamentary roles of representing the people, making laws, promoting peace and conflict resolution, and ratifying and incorporating international conventions into municipal law. Parliaments are created globally to give solid legitimacy to good governance under a democratic system of government, and the representatives must be elected by the people; the ECOWAS Parliament is entitled to this same legitimacy, whereby members must be elected by the adult citizens of ECOWAS member states. This paper discusses the roles that the ECOWAS Parliament plays in the achievement of regional integration and the economic goals of development and cooperation in the sub-region.

Keywords: ECOWAS parliament, composition, competence, power

Procedia PDF Downloads 461
17903 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale

Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize

Abstract:

Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study used 60 soil samples from a maize farm, divided into four treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Devices (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. Continuum removal was implemented in the R programming language to enhance the absorption features of soil moisture and to characterize its spectral behavior precisely at different water content levels. Partial least squares regression (PLSR) models were used to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems.
The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R2 and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution, precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
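Continuum removal (done in R in the study) divides the spectrum by its upper convex hull so that absorption depths near 1450, 1900 and 2200 nm can be compared across samples; a minimal Python sketch on a toy five-band spectrum:

```python
def continuum_removed(wavelengths, reflectance):
    """Divide a spectrum by its upper convex hull (the 'continuum')
    so that absorption features appear as dips below 1.0."""
    pts = list(zip(wavelengths, reflectance))
    hull = [pts[0]]
    for px, py in pts[1:]:
        # pop hull points that fall below the chord to the new point
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (py - y1) >= (px - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append((px, py))
    out, h = [], 0
    for x, y in pts:
        # interpolate the hull at this wavelength, then divide
        while h < len(hull) - 2 and hull[h + 1][0] < x:
            h += 1
        (x1, y1), (x2, y2) = hull[h], hull[h + 1]
        continuum = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        out.append(y / continuum)
    return out

# toy five-band spectrum with one absorption dip at band 3
cr = continuum_removed([1, 2, 3, 4, 5], [0.5, 0.8, 0.6, 0.9, 0.7])
print([round(v, 3) for v in cr])
```

The continuum-removed band depths (1 minus the dip values) are the kind of feature that would then feed the PLSR model against measured water content.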

Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy

Procedia PDF Downloads 74
17902 Response of Chickpea (Cicer arietinum L.) Genotypes to Drought Stress at Different Growth Stages

Authors: Ali Marjani, M. Farsi, M. Rahimizadeh

Abstract:

Chickpea (Cicer arietinum L.) is one of the important grain legume crops in the world. However, drought stress is a serious threat to chickpea production, and the development of drought-resistant varieties is a necessity. Field experiments were conducted to evaluate the response of 8 chickpea genotypes (MCC* 696, 537, 80, 283, 392, 361, 252, 397) to drought stress applied at different growth stages (S1: non-stress, S2: stress at the vegetative growth stage, S3: stress at early bloom, S4: stress at early pod visible). The experiment was arranged in a split-plot design with four replications. Differences among drought stress timings were significant for the investigated traits except biological yield. Differences among genotypes were observed in flowering time, pod formation time, physiological maturation time and yield. Plant height was reduced by drought stress at the vegetative growth stage. Stem dry weight was reduced by drought stress at the pod-visible stage. Flowering time, maturation time, pod number, number of seeds per plant and yield were reduced by drought stress at flowering. The correlations of yield with the number of seeds per plant and with biological yield were positive. MCC283 and MCC696 were the most tolerant genotypes. These results demonstrate that drought stress delays phenological growth in chickpea and that the flowering stage is the most sensitive.

Keywords: chickpea, drought stress, growth stage, tolerance

Procedia PDF Downloads 245
17901 The Effect of Adolescents’ Grit on Stem Creativity: The Mediation of Creative Self-Efficacy and the Moderation of Future Time Perspective

Authors: Han Kuikui

Abstract:

Adolescents, serving as the reserve force for technological innovation talents, possess STEM creativity that is not only pivotal to achieving STEM education goals but also provides a viable path for reforming science curricula in compulsory education and cultivating innovative talents in China. To investigate the relationship among adolescents' grit, creative self-efficacy, future time perspective, and STEM creativity, a survey was conducted in 2023 using stratified random sampling. A total of 1263 junior high school students from the main urban areas of Chongqing, from grade 7 to grade 9, were sampled. The results indicated that (1) Grit positively predicts adolescents' creative self-efficacy and STEM creativity significantly; (2) Creative self-efficacy mediates the positive relationship between grit and adolescents' STEM creativity; (3) The mediating role of creative self-efficacy is moderated by future time perspective, such that with a higher future time perspective, the positive predictive effect of grit on creative self-efficacy is more substantial, which in turn positively affects their STEM creativity.
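The mediation logic described in result (2) can be sketched as the product of two regression slopes; in a full moderated-mediation analysis path b is estimated controlling for the predictor and the moderator, which is omitted here. The data below are illustrative, not the survey's.

```python
# Simple-mediation sketch (grit -> creative self-efficacy -> STEM creativity):
# the indirect effect is the product a*b of two OLS slopes.
def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical 5-point-scale scores:
grit       = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0]
efficacy   = [2.2, 2.9, 3.6, 4.1, 4.4, 5.1]   # path a: grit -> efficacy
creativity = [2.1, 3.0, 3.4, 4.2, 4.3, 5.2]   # path b: efficacy -> creativity

a = ols_slope(grit, efficacy)
b = ols_slope(efficacy, creativity)
indirect_effect = a * b        # positive when both paths are positive
```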

Keywords: grit, stem creativity, creative self-efficacy, future time perspective

Procedia PDF Downloads 38
17900 Restoration of Railway Turnout Frog with FCAW

Authors: D. Sergejevs, A. Tipainis, P. Gavrilovs

Abstract:

Railway turnout frogs restored with MMA often have defects such as inclusions, pores, and others, which under the influence of dynamic forces cause premature failure of the restored surfaces. To prolong the operational time of the turnout frog, i.e. of the restored surface, a turnout frog was restored using FCAW, and a metallographic examination was performed afterwards. The experimental study revealed that the railway turnout frog restored with FCAW had better quality than elements restored with MMA; furthermore, FCAW provided considerable time savings.

Keywords: elements of railway turnout, FCAW, metallographic examination, quality of build-up welding

Procedia PDF Downloads 624
17899 INCIPIT-CRIS: A Research Information System Combining Linked Data Ontologies and Persistent Identifiers

Authors: David Nogueiras Blanco, Amir Alwash, Arnaud Gaudinat, René Schneider

Abstract:

At a time when access to and sharing of information are crucial in the world of research, technologies such as persistent identifiers (PIDs), Current Research Information Systems (CRIS), and ontologies may create platforms for information sharing, provided they respond to the need to disambiguate their data by assuring interoperability within and between systems. INCIPIT-CRIS is a continuation of the former INCIPIT project, whose goal was to set up an infrastructure for low-cost attribution of PIDs with high granularity based on Archival Resource Keys (ARKs). INCIPIT-CRIS is its logical consequence: a research information management system developed from scratch. The system is built on and around the Schema.org ontology, with a further articulation of the use of ARKs, on top of the infrastructure previously implemented (i.e., INCIPIT), in order to enhance the persistence of URIs. INCIPIT-CRIS thus aims to be the hinge between previously separated aspects: CRIS, ontologies, and PIDs. Combining an ontology such as Schema.org with unique persistent identifiers such as ARKs resolves disambiguation problems, allows information to be shared through a dedicated platform, and makes the system interoperable by representing the entirety of the data as RDF triples. This paper presents the implemented solution as well as its simulation in real life. We describe the underlying ideas and inspirations while going through the logic and the different functionalities implemented and their links with ARKs and Schema.org. Finally, we discuss the tests performed with our project partner, the Swiss Institute of Bioinformatics (SIB), using large, real-world data sets.
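A minimal sketch of the RDF-triple representation described above, pairing Schema.org terms with ARK persistent identifiers. The NAAN (12345), the ARK ids and the person data are hypothetical placeholders, and in a deployed system the ARKs would normally be exposed through a resolver URL; only the triple structure is the point here.

```python
# RDF triples as plain (subject, predicate, object) tuples.
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
SCHEMA = "https://schema.org/"
ARK = "ark:/12345/"          # hypothetical NAAN

def person_triples(ark_id, name, affiliation_ark):
    """Triples describing a researcher as a schema:Person under an ARK identifier."""
    s = ARK + ark_id
    return [
        (s, RDF_TYPE, SCHEMA + "Person"),
        (s, SCHEMA + "name", name),
        (s, SCHEMA + "affiliation", ARK + affiliation_ark),
    ]

def serialize(triples):
    """N-Triples-style serialization: IRIs in angle brackets, literals quoted."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith(("http", "ark:/")) else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

graph = person_triples("p001", "Jane Doe", "org42")
```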

Keywords: current research information systems, linked data, ontologies, persistent identifier, schema.org, semantic web

Procedia PDF Downloads 112
17898 Extraction of Urban Building Damage Using Spectral, Height and Corner Information

Authors: X. Wang

Abstract:

Timely and accurate information on urban building damage caused by earthquakes is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery, containing abundant fine-scale information, offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since building damage, intact buildings and pavements are spectrally similar. Therefore, it is of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method, using bi-temporal spectral, height and corner information, was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and errors in extracting height information, ground surface, shadows, and vegetation were first extracted using the post-event VHR image and height data and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used for detecting building damage showing significant height change, while the difference in the density of corners between pre- and post-event images was used for extracting building damage showing drastic change in geometric structure. The initial building damage result was generated by combining the above two results.
Finally, a post-processing procedure was adopted to refine the obtained initial result. The proposed method was quantitatively evaluated and compared to two existing methods in Port au Prince, Haiti, which was heavily hit by an earthquake in January 2010, using pre-event GeoEye-1 image, pre-event WorldView-2 image, post-event QuickBird image and post-event LiDAR data. The results showed that the method proposed in this study significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method provides a fast and reliable method to detect urban building collapse, which is also applicable to relevant applications.
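The two per-region damage criteria can be sketched as below. The thresholds are illustrative assumptions, not the study's values, and the masking and post-processing stages are omitted.

```python
# Per-region damage decision from bi-temporal height and corner information.
HEIGHT_DROP_M = 3.0    # assumed minimum height loss (m) for "significant" change
CORNER_RATIO = 0.5     # assumed post/pre corner-density ratio indicating damage

def classify_region(h_pre, h_post, corners_pre, corners_post):
    """Label one candidate building region from bi-temporal height and corners."""
    if h_pre - h_post >= HEIGHT_DROP_M:
        return "damaged_height"      # significant height change
    if corners_pre > 0 and corners_post / corners_pre <= CORNER_RATIO:
        return "damaged_geometry"    # drastic change in geometric structure
    return "intact"
```

The initial damage map is then the union of the two damaged classes over all candidate regions.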

Keywords: building damage, corner, earthquake, height, very high resolution (VHR)

Procedia PDF Downloads 197
17897 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach

Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani

Abstract:

This study is focused on the development of prediction models for the ozone concentration time series, built using a chaotic approach. Firstly, the chaotic nature of the time series is detected by means of a phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting. A traditional autoregressive linear model is also built for comparison, and an improvement to the local linear approximation method is performed. The prediction models are applied to the hourly ozone time series observed at a benchmark station in Malaysia. Comparison of all models through the mean absolute error, root mean squared error and correlation coefficient shows that the one with the improved prediction method is the best. Thus, the chaotic approach is a good approach for developing a prediction model for the ozone concentration time series.
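The core of the chaotic approach, phase-space reconstruction by delay embedding followed by a local prediction, can be sketched as follows. For simplicity this uses the zeroth-order member of the local-approximation family (follow the nearest neighbour's successor); the paper's local *linear* method fits a linear map over several neighbours, and the embedding parameters here are arbitrary.

```python
def delay_embed(series, dim, tau):
    """Phase-space vectors [x(t), x(t-tau), ..., x(t-(dim-1)*tau)]."""
    vecs = []
    for t in range((dim - 1) * tau, len(series)):
        vecs.append(tuple(series[t - k * tau] for k in range(dim)))
    return vecs

def predict_next(series, dim=3, tau=1):
    """Zeroth-order local prediction: return the successor of the nearest
    neighbour of the last reconstructed state."""
    vecs = delay_embed(series, dim, tau)
    query = vecs[-1]
    best, best_d = None, float("inf")
    for i, v in enumerate(vecs[:-1]):            # exclude the query itself
        d = sum((a - b) ** 2 for a, b in zip(v, query))
        if d < best_d:
            best, best_d = i, d
    # vecs[i] corresponds to time t = (dim-1)*tau + i in the original series
    return series[(dim - 1) * tau + best + 1]
```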

Keywords: chaotic approach, phase space, Cao method, local linear approximation method

Procedia PDF Downloads 311
17896 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data

Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin

Abstract:

The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean of a time series, based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure performs better than a standard CUSUM chart for detecting a gradual change in mean. The algorithm is then applied to a randomly generated time series with a gradual linear trend in mean to demonstrate its usefulness.
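As a reference point, the standard one-sided CUSUM statistic that the MCUSUM modifies can be sketched as below; the reference value `k` and threshold `h` are illustrative choices, and the paper's modification of the likelihood ratio for linear drift is not reproduced here.

```python
def cusum_detect(data, target_mean, k=0.5, h=5.0):
    """One-sided CUSUM for an upward shift in mean.
    k: reference value (allowance), h: decision threshold.
    Returns the index at which the statistic first exceeds h, or None."""
    s = 0.0
    for i, x in enumerate(data):
        s = max(0.0, s + (x - target_mean - k))   # accumulate excess over target+k
        if s > h:
            return i
    return None
```

On a gradual linear drift the statistic grows quadratically once the drift exceeds `k`, so detection lags the drift onset; this lag is what the MCUSUM is designed to shorten.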

Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test

Procedia PDF Downloads 273
17895 Long-Term Trends of Sea Level and Sea Surface Temperature in the Mediterranean Sea

Authors: Bayoumy Mohamed, Khaled Alam El-Din

Abstract:

In the present study, 24 years (1993-2016) of daily gridded sea level anomalies (SLA) from satellite altimetry and sea surface temperature (SST) from the advanced very-high-resolution radiometer (AVHRR) are used. These data are used to investigate the rates of sea level rise and SST warming, and their spatial distribution, in the Mediterranean Sea. The results revealed a significant sea level rise of 2.86 ± 0.45 mm/year, together with a significant warming of 0.037 ± 0.007 °C/year. The high spatial correlation between sea level and SST variations suggests that at least part of the sea level change reported during the study period was due to heating of the surface layers, indicating that the steric effect had a significant influence on sea level change in the Mediterranean Sea.
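The trend estimation behind a rate such as 2.86 mm/year can be sketched as an ordinary least-squares slope of SLA against time; the SLA values below are synthetic, constructed to rise at exactly that rate.

```python
def linear_trend(t, y):
    """OLS slope of y against t (units of y per unit of t)."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    num = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den

years = list(range(1993, 2017))                    # 24 years, as in the study
sla = [2.86 * (yr - 1993) + 5.0 for yr in years]   # synthetic annual-mean SLA, mm
```

In practice the same fit is applied per grid cell (and to SST in °C/year) to map the spatial distribution of the trends.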

Keywords: altimetry, AVHRR, Mediterranean Sea, sea level and SST changes, trend analysis

Procedia PDF Downloads 178
17894 Dynamic Degradation Mechanism of SiC VDMOS under Proton Irradiation

Authors: Junhong Feng, Wenyu Lu, Xinhong Cheng, Li Zheng, Yuehui Yu

Abstract:

The effects of proton irradiation on the properties of the gate oxide were evaluated by monitoring the static parameters (threshold voltage and on-resistance) and a dynamic parameter (Miller plateau time) of 1700 V SiC VDMOS devices before and after proton irradiation. The incident proton energy was 3 MeV, and the doses were 5 × 10¹² p/cm² and 1 × 10¹³ p/cm², respectively. The results show that the threshold voltage exhibits a negative drift under proton irradiation: near-interface traps in the gate oxide layer are occupied by holes generated by the ionization effect of irradiation, forming additional positive charge. The Miller plateau time T_Miller is taken as the interval during which Vgs stays constant while Vds rises from its initial upward trend to a stable value. The degradation of the Miller plateau time at turn-off verifies that the capacitance Cgd becomes larger, reflecting traps introduced into the gate oxide layer by the displacement effect of proton irradiation and a deteriorated interface state. As the most radiation-sensitive region, the gate oxide layer will have its parameters (such as thickness and type) optimized in subsequent studies.
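The link between Cgd and the Miller plateau can be sketched with the first-order switching estimate t_Miller ≈ Q_gd / I_g = C_gd·ΔV_ds / I_g; the capacitance, voltage swing and gate current below are hypothetical illustration values, not the device's measured parameters.

```python
def miller_plateau_time(c_gd, delta_vds, i_gate):
    """First-order estimate: t_Miller = C_gd * dV_ds / I_g.
    A larger gate-drain capacitance C_gd directly lengthens the plateau."""
    return c_gd * delta_vds / i_gate

# Hypothetical values: 200 pF pre-irradiation C_gd, 1000 V swing, 0.5 A gate drive
t_pre = miller_plateau_time(200e-12, 1000.0, 0.5)    # 0.4 microseconds
t_post = miller_plateau_time(260e-12, 1000.0, 0.5)   # degraded device, larger C_gd
```

Under this estimate a post-irradiation increase of the plateau time is a direct signature of increased Cgd.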

Keywords: SiC VDMOS, proton radiation, Miller time, gate oxide

Procedia PDF Downloads 70
17893 Topological Quantum Diffeomorphisms in Field Theory and the Spectrum of the Space-Time

Authors: Francisco Bulnes

Abstract:

Through the Fukaya conjecture and the wrapped Floer cohomology, correspondences between paths in a loop space and states of a wrapping space of states in a Hamiltonian space (the ramification of the field in this case is the connection to the operator that goes from TM to T*M) are demonstrated, where these last states correspond to bosonic extensions of a spectrum of the space-time, or direct image of the functor Spec on space-time. This establishes a distinguished diffeomorphism, defined by the mapping from the corresponding loop space to the wrapping category of the Floer cohomology complex, which furthermore relates, in a certain proportion, D-branes (certain D-modules) with strings. This also gives place to a conjecture that establishes equivalences between moduli spaces, which can be consigned in a moduli identity taking as space-time the Hitchin moduli space on G, whose dual can be expressed by a factor of bosonic moduli spaces.

Keywords: Floer cohomology, Fukaya conjecture, Lagrangian submanifolds, quantum topological diffeomorphism

Procedia PDF Downloads 292
17892 Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features

Authors: Xiang Yu, Chuntao Feng, Lu Yang, Meiyang Song, Wenhao Zhou

Abstract:

Existing two-dimensional micro-Doppler feature extraction ignores the correlation between spatial and temporal features. By introducing the time dimension to the range-Doppler map, a frequency modulated continuous wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. Firstly, range-Doppler sequence maps are generated from the echo signals of continuous human motion collected by the radar. Then a three-dimensional data cube composed of multiple frames of range-Doppler maps is input into a three-dimensional convolutional neural network (3D CNN), whose convolution and pooling layers extract the spatial and temporal features of the time-varying range-Doppler maps simultaneously. Finally, the extracted spatial and temporal features are input into a fully connected layer for classification. Experimental results show that the proposed fall detection algorithm achieves a detection accuracy of 95.66%.
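The cube indexing and the pooling step of the 3D CNN can be sketched as follows: the cube is simply the list of range-Doppler frames indexed `[time][range][doppler]`, and a 2×2×2 max-pooling window reduces all three dimensions at once. Kernel size and cube dimensions are illustrative; the learned convolution filters of the actual network are not reproduced here.

```python
def max_pool_3d(cube, k=2):
    """Non-overlapping k*k*k max pooling over a [time][range][doppler] cube."""
    T, R, D = len(cube), len(cube[0]), len(cube[0][0])
    out = []
    for t in range(0, T - k + 1, k):
        plane = []
        for r in range(0, R - k + 1, k):
            row = []
            for d in range(0, D - k + 1, k):
                row.append(max(cube[t + i][r + j][d + l]
                               for i in range(k) for j in range(k) for l in range(k)))
            plane.append(row)
        out.append(plane)
    return out
```

Pooling jointly over the time axis is what lets the network compress temporal as well as spatial structure, which a frame-by-frame 2D network cannot do.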

Keywords: FMCW radar, fall detection, 3D CNN, time-varying range-doppler features

Procedia PDF Downloads 103
17891 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique

Authors: Stefano Iannello, Massimiliano Materazzi

Abstract:

Fluidized bed reactor technologies are one of the most valuable pathways for thermochemical conversions of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of the so-called endogenous bubbles, which generally exert a “lift” effect on the particle itself by dragging it up to the bed surface. Such phenomenon leads to high release of volatile matter into the freeboard and limited mass and heat transfer with particles of the bed inventory. The aim of this work is to get a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis has been undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beechwood and polypropylene particles were used to resemble the biomass and plastic fractions present in waste materials, respectively. The non-invasive X-ray technique was coupled to particle tracking algorithms to characterize the motion of a single feedstock particle during the devolatilization with high resolution. A high-energy X-ray beam passes through the vessel where absorption occurs, depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised to the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed up to 72 fps (frames per second). A comprehensive mathematical model has been developed in order to validate the experimental results. Beech wood and polypropylene particles have shown a very different dynamic behaviour during the pyrolysis stage. 
When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a lower residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversions, the synthetic polymer softens and melts, and bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. This additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles, and therefore result in a weaker drag effect acting on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations.

Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray

Procedia PDF Downloads 154
17890 Hamilton-Jacobi Treatment of Damped Motion

Authors: Khaled I. Nawafleh

Abstract:

In this work, we apply the Hamilton-Jacobi method to obtain solutions of Hamiltonian systems in classical mechanics with two particular structures: the first plays a central role in the theory of time-dependent Hamiltonians, whilst the second is used to treat classical Hamiltonians that include dissipation terms. It is proved that the generalization of problems from calculus-of-variations methods to the nonstationary case arises naturally in the Hamilton-Jacobi formalism. Another expression for the geometry of the Hamilton-Jacobi equation is then derived for Hamiltonians with time-dependent and frictional terms. Both approaches are applied to many physical examples.
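The two structures mentioned can be illustrated, as a sketch, by the generic time-dependent Hamilton-Jacobi equation together with one standard dissipative Hamiltonian; the Caldirola-Kanai form below is a common textbook choice for linear damping, and the abstract does not specify which dissipative model the author uses.

```latex
% Time-dependent Hamilton--Jacobi equation for the principal function S(q,t):
\frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}, t\right) = 0 .
% A standard explicitly time-dependent Hamiltonian reproducing linear damping
% with friction coefficient \gamma (Caldirola--Kanai form):
H(q,p,t) = e^{-\gamma t}\,\frac{p^{2}}{2m} + e^{\gamma t}\,V(q),
% whose Hamilton equations give the damped equation of motion
% m\ddot{q} + m\gamma\dot{q} + V'(q) = 0 .
```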

Keywords: Hamilton-Jacobi, time-dependent Lagrangians, dissipative systems, variational principle

Procedia PDF Downloads 150
17889 Time Domain Dielectric Relaxation Microwave Spectroscopy

Authors: A. C. Kumbharkhane

Abstract:

Time domain dielectric relaxation microwave spectroscopy (TDRMS) is a technique for observing the time-dependent response of a sample after application of a time-dependent electromagnetic field. TDRMS probes the interaction of a macroscopic sample with a time-dependent electric field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; the technique therefore covers an extensive range of dynamical processes, with corresponding frequencies from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. Recently, we have developed and established a TDR technique in the laboratory that provides dielectric permittivity in the frequency range 10 MHz to 30 GHz. The TDR method involves the generation of a step pulse with a rise time of 20 picoseconds in a coaxial line system and monitoring of the change in pulse shape after reflection from the sample placed at the end of the coaxial line. There is great interest in studying dielectric relaxation behaviour in liquid systems to understand the role of the hydrogen bond, since intermolecular interaction through hydrogen bonds in molecular liquids results in peculiar dynamical properties. The dynamics of hydrogen-bonded liquids have been studied, and a theoretical model to explain the experimental results will be discussed.
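The complex permittivity spectrum mentioned above is usually fitted with relaxation models; the simplest is the single-Debye law, sketched below with illustrative water-like parameters (these are not the laboratory's fitted values).

```python
import math

def debye_permittivity(freq_hz, eps_static, eps_inf, tau_s):
    """Single-Debye model: eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + j*w*tau)."""
    omega = 2 * math.pi * freq_hz
    return eps_inf + (eps_static - eps_inf) / (1 + 1j * omega * tau_s)

# Illustrative water-like parameters: eps_s = 78.4, eps_inf = 5.2, tau = 8.3 ps
eps_1ghz = debye_permittivity(1e9, 78.4, 5.2, 8.3e-12)
```

For these parameters the loss peak sits near f = 1/(2πτ) ≈ 19 GHz, inside the 10 MHz to 30 GHz window of the TDR setup described above.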

Keywords: microwave, time domain reflectometry (TDR), dielectric measurement, relaxation time

Procedia PDF Downloads 320
17888 Spectral Mixture Model Applied to Cannabis Parcel Determination

Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara

Abstract:

Many research projects require accurate delineation of the different land cover types of an agricultural area, and this is especially critical for specific plants like cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transition between different successional stages make vegetation classification difficult with traditional approaches such as the maximum likelihood classifier, which most of the time distinguishes only between trees/annuals or grain; it has been difficult to accurately determine cannabis mixed with other plants. In this paper, a mixture distribution modelling approach is applied to classify pure and mixed cannabis parcels using Worldview-2 imagery in the Lakes region of Turkey. Five different land use types (i.e., sunflower, maize, bare soil, and cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from the spectral signatures of seven bands (Blue, Green, Yellow, Red, Red-edge, NIR1, NIR2) were randomly split, 80% for training and 20% for test data. The Gaussian mixture distribution model approach proved to be an effective and convenient way to exploit very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall classification accuracies, the Gaussian mixture distribution model was very successful at the image classification task. The approach is sensitive enough to capture illegal cannabis planting areas in the large plain, and it can also be used for monitoring and detecting illegal cannabis planting areas from their spectral reflectance.
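The building block of a Gaussian discriminant can be sketched on a single reflectance-ratio feature: fit one Gaussian per class, then assign a pixel to the class with the highest log-likelihood. The full GMDA uses constrained *mixtures* of multivariate Gaussians over all 255 ratios; the reflectance values below are illustrative, not the study's data.

```python
import math

def fit_gaussian(samples):
    """Maximum-likelihood mean and variance of a 1-D sample."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def log_likelihood(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def classify(x, class_models):
    """Assign x to the class whose fitted Gaussian gives the highest likelihood."""
    return max(class_models, key=lambda c: log_likelihood(x, *class_models[c]))

# Hypothetical reflectance ratios for two classes:
models = {
    "cannabis": fit_gaussian([0.61, 0.64, 0.66, 0.63]),
    "maize":    fit_gaussian([0.40, 0.43, 0.41, 0.44]),
}
```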

Keywords: Gaussian mixture discriminant analysis, spectral mixture model, Worldview-2, land parcels

Procedia PDF Downloads 180