Search results for: multiple input multiple output
7404 Comparison of Statistical Methods for Estimating Missing Precipitation Data in the River Subbasin Lenguazaque, Colombia
Authors: Miguel Cañon, Darwin Mena, Ivan Cabeza
Abstract:
In this work, the applicability of statistical methods for estimating missing precipitation data was compared and evaluated in the basin of the river Lenguazaque, located in the departments of Cundinamarca and Boyacá, Colombia. The methods used were simple linear regression, the distance-rate method, local averages, mean rates, correlation with nearby stations, and the multiple regression method. The effectiveness of the methods was assessed with three statistical tools: the correlation coefficient (r²), the standard error of estimation, and the Bland-Altman test of agreement. The analysis was performed using real rainfall values removed at random from each of the stations and then estimated with the methodologies mentioned above to complete the missing data. Under the conditions considered, the methods with the highest performance and accuracy were multiple regression with three nearby stations and a random application scheme supported by the precipitation behavior of related data sets.
Keywords: statistical comparison, precipitation data, river subbasin, Bland and Altman
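The abstract names simple linear regression against a nearby station as one of the candidate estimators; a minimal sketch of that idea is below (NumPy only). The station records and the choice of a single neighboring station are illustrative assumptions, not the study's data.

```python
import numpy as np

def estimate_missing_by_regression(target, neighbor):
    """Estimate missing values in `target` from a nearby station's series
    using simple linear regression fitted on the jointly observed days.

    target, neighbor: 1-D sequences of equal length; np.nan marks missing data.
    """
    target = np.asarray(target, dtype=float)
    neighbor = np.asarray(neighbor, dtype=float)
    observed = ~np.isnan(target) & ~np.isnan(neighbor)
    # Fit y = a*x + b on the overlapping observations.
    a, b = np.polyfit(neighbor[observed], target[observed], deg=1)
    filled = target.copy()
    missing = np.isnan(target) & ~np.isnan(neighbor)
    filled[missing] = a * neighbor[missing] + b
    return filled

# Toy example: a short rainfall record with one missing day.
station_a = [12.0, 0.0, 5.5, np.nan, 20.1]
station_b = [10.0, 0.2, 6.0, 8.0, 18.5]
print(estimate_missing_by_regression(station_a, station_b))
```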
Procedia PDF Downloads 467
7403 Interference among Lambsquarters and Oil Rapeseed Cultivars
Authors: Reza Siyami, Bahram Mirshekari
Abstract:
Seed and oil yield of rapeseed is considerably affected by weed interference, including mustard (Sinapis arvensis L.), lambsquarters (Chenopodium album L.), and redroot pigweed (Amaranthus retroflexus L.), throughout the East Azerbaijan province in Iran. To formulate the relationship between four independent growth variables measured in our experiment and a dependent variable, multiple regression analysis was carried out with the number of weed leaves per plant (X1), green cover percentage (X2), LAI (X3), and leaf area per plant (X4) as independent variables and rapeseed oil yield as the dependent variable. The multiple regression equation is as follows: Seed essential oil yield (kg/ha) = 0.156 + 0.0325 (X1) + 0.0489 (X2) + 0.0415 (X3) + 0.133 (X4). Furthermore, stepwise regression analysis was carried out on the data obtained to test the significance of the independent variables affecting the oil yield. The resulting stepwise regression equation is: Oil yield = 4.42 + 0.0841 (X2) + 0.0801 (X3); R² = 81.5. The stepwise regression analysis verified that the green cover percentage and LAI of the weed had a marked increasing effect on the oil yield of rapeseed.
Keywords: green cover percentage, independent variable, interference, regression
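For concreteness, the two fitted equations quoted above can be evaluated directly; the sketch below simply codes those published coefficients, and the example input values are hypothetical.

```python
def predicted_oil_yield(x1, x2, x3, x4):
    """Full multiple-regression model quoted in the abstract (kg/ha)."""
    return 0.156 + 0.0325 * x1 + 0.0489 * x2 + 0.0415 * x3 + 0.133 * x4

def predicted_oil_yield_stepwise(x2, x3):
    """Reduced stepwise model keeping green cover (X2) and LAI (X3)."""
    return 4.42 + 0.0841 * x2 + 0.0801 * x3

# Hypothetical weed measurements, for illustration only.
print(predicted_oil_yield(x1=12, x2=35.0, x3=2.1, x4=150.0))
print(predicted_oil_yield_stepwise(x2=35.0, x3=2.1))
```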
Procedia PDF Downloads 420
7402 Knowledge Representation Based on Interval Type-2 CFCM Clustering
Authors: Lee Myung-Won, Kwak Keun-Chang
Abstract:
This paper is concerned with knowledge representation and the extraction of fuzzy if-then rules using Interval Type-2 Context-based Fuzzy C-Means clustering (IT2-CFCM) with the aid of fuzzy granulation. The proposed clustering algorithm is based on information granulation in the form of Interval Type-2 Fuzzy C-Means (IT2-FCM) clustering and estimates the cluster centers by preserving the homogeneity between the clustered patterns from the IT2 contexts produced in the output space. Furthermore, we can obtain automatic knowledge representation in the design of Radial Basis Function Networks (RBFN), Linguistic Models (LM), and Adaptive Neuro-Fuzzy Networks (ANFN) from numerical input-output data pairs. We focus on the design of an ANFN in this paper. The experimental results on an energy performance estimation problem reveal that the proposed method shows good knowledge representation and performance in comparison with previous works.
Keywords: IT2-FCM, IT2-CFCM, context-based fuzzy clustering, adaptive neuro-fuzzy network, knowledge representation
Procedia PDF Downloads 321
7401 Reinforcement Learning for Self Driving Racing Car Games
Authors: Adam Beaunoyer, Cory Beaunoyer, Mohammed Elmorsy, Hanan Saleh
Abstract:
This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method. A challenging track includes curves, jumps, and varying road widths throughout. The environment used in this research is based on the 1995 racing game WipeOut and built from open-source code on GitHub. The proposed reinforcement learning agent can navigate challenging tracks rapidly while maintaining a low race completion time and collision count. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles on the track. The SAC model is the basis for the multiple-car model, which completes laps more quickly than the single-car model but has a higher collision rate with the track wall.
Keywords: reinforcement learning, soft actor-critic, deep q-network, self-driving cars, artificial intelligence, gaming
Procedia PDF Downloads 46
7400 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs
Authors: Anika Chebrolu
Abstract:
Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight against these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multitargeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated based on its ability to generate multi-target ligands against two distinct proteins, multi-target ligands against three distinct proteins, and multi-target ligands against two distinct binding pockets on the same protein. With each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to design multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
Keywords: drug design, multitargeticity, de-novo, reinforcement learning
Procedia PDF Downloads 97
7399 In-Context Meta Learning for Automatic Designing Pretext Tasks for Self-Supervised Image Analysis
Authors: Toktam Khatibi
Abstract:
Self-supervised learning (SSL) includes machine learning models that are trained on one aspect and/or one part of the input to learn other aspects and/or parts of it. SSL models fall into two categories: pre-text task-based models and contrastive learning models. Pre-text tasks are auxiliary tasks that learn pseudo-labels, and the trained models are further fine-tuned for downstream tasks. However, one important disadvantage of SSL via pre-text task solving is defining an appropriate pre-text task for each image dataset across a variety of image modalities. It is therefore necessary to design an appropriate pretext task automatically for each dataset and each downstream task. To the best of our knowledge, the automatic design of pretext tasks for image analysis has not been considered yet. In this paper, we present a framework based on in-context learning that describes each task based on its input and output data using a pre-trained image transformer. Our proposed method combines the input image and its learned description for optimizing the pre-text task design and its hyper-parameters using meta-learning models. The representations learned from the pre-text tasks are fine-tuned for solving the downstream tasks. We demonstrate that our proposed framework outperforms the compared ones on unseen tasks and image modalities, in addition to its superior performance on previously known tasks and datasets.
Keywords: in-context learning (ICL), meta learning, self-supervised learning (SSL), vision-language domain, transformers
Procedia PDF Downloads 80
7398 The Complexity of Testing Cryptographic Devices on Input Faults
Authors: Alisher Ikramov, Gayrat Juraev
Abstract:
The production of logic devices faces the occurrence of faults during manufacturing. This work analyses the complexity of testing a special type of logic device on inverse, adhesion, and constant input faults. The focus of this work is on devices that implement cryptographic functions. The complexity values for the general case faults and for some frequently occurring subsets were determined and proved in this work. For a special case, when the length of the text block is equal to the length of the key block, the complexity of testing is proven to be asymptotically half the complexity of testing all logic devices on the same types of input faults.
Keywords: complexity, cryptographic devices, input faults, testing
Procedia PDF Downloads 225
7397 Design and Simulation of 3-Transistor Active Pixel Sensor Using MATLAB Simulink
Authors: H. Alheeh, M. Alameri, A. Al Tarabsheh
Abstract:
There has been a growing interest in CMOS-based sensor technology in cameras, as it affords low-power, small-size, and cost-effective imaging systems. This article describes the CMOS image sensor pixel categories and presents the design and simulation of the 3-Transistor (3T) Active Pixel Sensor (APS) in the MATLAB/Simulink tool. The analysis investigates the conversion of light into an electrical signal for a single-pixel sensing circuit, which consists of a photodiode and three NMOS transistors. The paper also proposes three modes of pixel operation: reset, integration, and readout. The simulations of the electrical signals for each of the studied modes of operation show how the output electrical signals are correlated to the input light intensities. The charging/discharging speed of the photodiodes is also investigated. The output voltage for different light intensities, including the dark case, is calculated and shows its inverse proportionality to the light intensity.
Keywords: APS, CMOS image sensor, light intensities, photodiode, simulation
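A minimal sketch of the integration-mode behaviour described above is given below, assuming a simple linear model in which a photocurrent proportional to light intensity discharges the photodiode capacitance from the reset voltage; all component values are illustrative, not taken from the paper.

```python
def pixel_output_voltage(intensity_lux, t_int=10e-3, v_reset=3.3,
                         c_pd=10e-15, responsivity=1e-15, i_dark=1e-16):
    """Integration-mode output of a 3T APS pixel under a linear model.

    The photodiode is reset to v_reset, then discharged for t_int seconds
    by a photocurrent proportional to light intensity plus a small dark
    current. Brighter light -> larger discharge -> lower output, i.e. the
    inverse proportionality noted in the abstract. All values illustrative.
    """
    i_photo = responsivity * intensity_lux + i_dark   # amperes (assumed scale)
    v_drop = i_photo * t_int / c_pd                   # delta V = I * t / C
    return max(v_reset - v_drop, 0.0)                 # clamp at 0 V

for lux in (0, 100, 500, 1000):  # dark case first
    print(lux, round(pixel_output_voltage(lux), 3))
```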
Procedia PDF Downloads 177
7396 Graph Based Traffic Analysis and Delay Prediction Using a Custom Built Dataset
Authors: Gabriele Borg, Alexei Debono, Charlie Abela
Abstract:
There is a constant rise in the availability of high volumes of data gathered from multiple sources, resulting in an abundance of unprocessed information that can be used to monitor patterns and trends in user behaviour. Similarly, year after year, Malta is experiencing ongoing population growth and an increase in mobility demand. This research takes advantage of data which is continuously being sourced and converts it into useful information related to the traffic problem on Maltese roads. The scope of this paper is to provide a methodology for creating a custom dataset (MalTra - Malta Traffic) compiled from multiple participants at various locations across the island, in order to identify the most common routes taken and expose the main areas of activity. Such use of big data underpins Intelligent Transportation Systems (ITSs), for which it has been concluded that there is significant potential in utilising such data sources on a nationwide scale. Furthermore, a series of traffic prediction graph neural network models are evaluated to compare MalTra with large-scale traffic datasets.
Keywords: graph neural networks, traffic management, big data, mobile data patterns
Procedia PDF Downloads 130
7395 Paper-Like and Battery Free Sensor Patches for Wound Monitoring
Authors: Xiaodi Su, Xin Ting Zheng, Laura Sutarlie, Nur Asinah binte Mohamed Salleh, Yong Yu
Abstract:
Wound healing is a dynamic process with multiple phases. Rapid profiling and quantitative characterization of inflammation and infection remain challenging. We have developed paper-like, battery-free multiplexed sensors for holistic wound assessment via quantitative detection of multiple inflammation and infection markers. In one of the designs, the sensor patch consists of a wax-printed paper panel with five colorimetric sensor channels arranged in a pattern resembling a five-petaled flower (denoted as a 'Petal' sensor). The five sensors are for temperature, pH, trimethylamine, uric acid, and moisture. The sensor patch is sandwiched between a top transparent silicone layer and a bottom adhesive wound contact layer. In the second design, a palm-shaped paper strip is fabricated by a paper-cutter printer (denoted as a 'Palm' sensor). This sensor strip carries five sensor regions connected by a stem sampling entrance that enables rapid colorimetric detection of multiple bacterial metabolites (aldehyde, lactate, moisture, trimethylamine, tryptophan) from wound exudate. For both the 'Petal' and 'Palm' sensors, color images can be captured by a mobile phone. From the color changes, one can quantify the concentration of the biomarkers and then determine wound healing status and identify/quantify bacterial species in infected wounds. The 'Petal' and 'Palm' sensors are validated with in-situ animal and ex-situ skin wound models, respectively. These sensors have the potential for integration with wound dressings to allow early warning of adverse events without frequent removal of the plasters. Such in-situ and early detection of non-healing conditions can trigger immediate clinical intervention to facilitate wound care management.
Keywords: wound infection, colorimetric sensor, paper fluidic sensor, wound care
Procedia PDF Downloads 81
7394 Performance Evaluation of Sand Casting Manufacturing Plant with WITNESS
Authors: Aniruddha Joshi
Abstract:
This paper discusses a simulation study of an automated sand casting production system. The first aim of this study is to develop a model of the automated sand casting process and analyse it with the simulation software WITNESS. The production methodology aims to improve overall productivity through the elimination of waste, which in turn leads to improved quality. Integrating automation with simulation is beneficial for identifying obstacles to implementation and for selecting appropriate options to implement successfully. Different simulation software packages are available for this integration; in this study, the model is created with the WITNESS simulation software and is based on a literature review. The input parameters are setup time, number of machines, and cycle time, and the output parameters are the number of castings, average time, and percentage usage of the machines. The obtained results are used for statistical analysis, which identifies the optimal solution for maximum output.
Keywords: automated sand casting production system, simulation, WITNESS software, performance evaluation
Procedia PDF Downloads 789
7393 A Simulation Study on the Applicability of Overbooking Strategies in Inland Container Transport
Authors: S. Fazi, B. Behdani
Abstract:
The inland transportation of maritime containers entails the use of different modalities whose capacity is typically booked in advance. Containers may miss their scheduled departure time at a terminal for several reasons, such as delays, changes of transport mode, or multiple pending bookings. In those cases, it may be difficult for transport service providers to find last-minute containers to fill the vacant capacity. As in other industries, overbooking could potentially limit these drawbacks, at the cost of a lower service level when overbooked rides actually exceed capacity. However, the presence of multiple modalities may provide the required flexibility in rescheduling and limit the dissatisfaction of shippers whose containers are overbooked. This flexibility is known as 'synchromodality'. In this paper, we evaluate the application of overbooking via discrete event simulation. Results show that in certain conditions overbooking can significantly increase the profit and utilization of high-capacity means of transport, such as barges and trains. On the other hand, in case of high penalty costs and limited no-shows, overbooking may lead to an excessive use of expensive trucks.
Keywords: discrete event simulation, flexibility, inland shipping, multimodality, overbooking
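The abstract evaluates overbooking with a discrete event simulation; as a rough illustration of the trade-off it describes, the sketch below runs a simple Monte Carlo model of a single departure with a binomial no-show process. The capacities, costs, and no-show probability are assumed for illustration and are not the paper's parameters.

```python
import random

def simulate_overbooking(capacity=100, overbook=10, p_no_show=0.08,
                         revenue=200, bump_cost=500, n_runs=10_000):
    """Monte Carlo estimate of profit and utilisation for one barge/train
    departure when `capacity + overbook` container slots are sold and each
    booked container independently fails to show up with probability p_no_show.
    """
    total_profit = total_loaded = 0
    for _ in range(n_runs):
        booked = capacity + overbook
        shows = sum(random.random() > p_no_show for _ in range(booked))
        loaded = min(shows, capacity)
        bumped = max(shows - capacity, 0)          # containers pushed to trucks
        total_profit += loaded * revenue - bumped * bump_cost
        total_loaded += loaded
    return total_profit / n_runs, total_loaded / (n_runs * capacity)

for ob in (0, 5, 10, 15):
    profit, util = simulate_overbooking(overbook=ob)
    print(f"overbook={ob:2d}  avg profit={profit:8.0f}  utilisation={util:.2%}")
```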
Procedia PDF Downloads 134
7392 Android-Based Wireless Electronic Stethoscope
Authors: Aw Adi Arryansyah
Abstract:
Using an electronic stethoscope to detect heartbeat and breath sounds is an effective way to investigate cardiovascular diseases. At the same time, technology is moving towards mobile: almost everyone has a smartphone, smartphones support many platforms, and creating mobile applications has become easier, for example with HTML5 technology. Android is the most widely used platform, which is our reason for building a wireless electronic stethoscope based on Android. The Android-based wireless electronic stethoscope is a simple system: a sound sensor mounted on a membrane is connected to a Bluetooth module, which sends the heart auscultation audio over a Bluetooth signal to an Android device. On the software side, the Android application reads the audio input, translates it into a clear visualization, and plays back the audio output at an adjustable volume. The heartbeat sound can be converted into BPM data and a heartbeat analysis, such as normal rhythm, bradycardia, or tachycardia.
Keywords: wireless, HTML 5, auscultation, bradycardia, tachycardia
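The abstract ends with converting the heartbeat sound into BPM and a rhythm classification; a minimal sketch of that step is below, assuming the auscultation signal is already available as a sampled waveform. The peak-detection thresholds and the 60/100 BPM cut-offs are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def classify_heart_rate(signal, fs):
    """Estimate BPM from an auscultation waveform and classify the rhythm.

    signal: 1-D array of audio samples; fs: sampling rate in Hz.
    """
    envelope = np.abs(np.asarray(signal, dtype=float))
    peaks, _ = find_peaks(envelope, distance=int(0.3 * fs),
                          height=0.5 * envelope.max())
    if len(peaks) < 2:
        return None, "insufficient peaks"
    bpm = 60.0 * fs / np.mean(np.diff(peaks))
    if bpm < 60:
        return bpm, "bradycardia"
    if bpm > 100:
        return bpm, "tachycardia"
    return bpm, "normal"

# Synthetic test: a 75 BPM click train sampled at 4 kHz.
fs = 4000
t = np.arange(0, 10, 1 / fs)
beat = (np.mod(t, 60 / 75) < 0.05).astype(float)
print(classify_heart_rate(beat, fs))
```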
Procedia PDF Downloads 347
7391 Improved Super-Resolution Using Deep Denoising Convolutional Neural Network
Authors: Pawan Kumar Mishra, Ganesh Singh Bisht
Abstract:
Super-resolution is a computer vision technique for constructing high-resolution images from low-resolution input. It is used to increase the high-frequency content, recover lost details, and remove the downsampling artifacts and noise introduced by the camera during image acquisition. High-resolution images or videos are a desired part of most image processing and analysis tasks in digital imaging applications. The goal of super-resolution is to combine the non-redundant information in single or multiple low-resolution frames to generate a high-resolution image. Many methods have been proposed in which multiple low-resolution images of the same scene, under different transformations, are combined; this is called multi-image super-resolution. Another family of methods, single-image super-resolution, tries to learn the redundancy present in images and reconstruct the lost information from a single low-resolution image. The use of deep learning is currently one of the state-of-the-art approaches to reconstructing high-resolution images. In this research, we propose Deep Denoising Super-Resolution (DDSR), a deep neural network that effectively reconstructs a high-resolution image from a low-resolution one.
Keywords: resolution, deep-learning, neural network, de-blurring
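The abstract does not spell out the DDSR architecture; as a rough illustration of the single-image approach it describes, the sketch below defines a minimal SRCNN-style network in PyTorch (bicubic upscaling followed by three convolutions) and runs one training step on random tensors. This is an assumed stand-in, not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """Minimal SRCNN-style network: bicubic upscaling, then three conv
    layers for feature extraction, non-linear mapping, and reconstruction."""

    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.conv1 = nn.Conv2d(1, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=5, padding=2)
        self.conv3 = nn.Conv2d(32, 1, kernel_size=5, padding=2)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)

# One training step on random data, just to show the shape of the loop.
model = TinySRNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
low_res = torch.rand(4, 1, 32, 32)
high_res = torch.rand(4, 1, 64, 64)
loss = F.mse_loss(model(low_res), high_res)
loss.backward()
opt.step()
print(float(loss))
```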
Procedia PDF Downloads 517
7390 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning
Authors: Abdullah Bal
Abstract:
This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), a set of non-class samples lying outside the boundary of the positive (target) class, formed with limited training data, has been constructed synthetically in this paper. The tunnel-like decision boundary around the upper and lower borders of the target class samples has been designed using statistical properties of the feature vectors belonging to the training data. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines and is referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with this type of kernel leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed, inspired by ensemble learning (EL), to weight and then select the sub-classifiers using the discriminability and diversity measured by eigenvalue ratios. The eigenvalue ratios have been assessed based on their regions in the FKT subspaces. The comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of our techniques, especially in the case of small sample size (SSS) conditions.
Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification
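As background for the method above, the core two-class FKT computation (whitening the summed class covariance, then eigendecomposing one class in the whitened space, so eigenvalues near 1 favour one class and near 0 the other) can be sketched in a few lines. This is a minimal illustration of the transform itself, not the OC-KFKT or MKL pipeline.

```python
import numpy as np

def fukunaga_koontz_transform(X1, X2, eps=1e-8):
    """Core two-class Fukunaga-Koontz Transform.

    X1, X2: (n_samples, n_features) data matrices for the two classes.
    Returns the transform matrix W and the class-1 eigenvalues in the
    whitened space; the class-2 eigenvalues are 1 - lambda, so directions
    most discriminative for class 1 are least discriminative for class 2.
    """
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    # Whiten the summed covariance: P^{-1/2} (S1 + S2) P^{-1/2} = I.
    vals, vecs = np.linalg.eigh(S1 + S2)
    P_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    S1_hat = P_inv_sqrt @ S1 @ P_inv_sqrt
    lam, U = np.linalg.eigh(S1_hat)          # shared eigenvectors
    order = np.argsort(lam)[::-1]            # largest first (favour class 1)
    W = P_inv_sqrt @ U[:, order]
    return W, lam[order]

rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 5)) @ np.diag([3, 1, 1, 1, 1])
X2 = rng.normal(size=(200, 5))
W, lam = fukunaga_koontz_transform(X1, X2)
print(np.round(lam, 3))   # values near 1 favour class 1, near 0 favour class 2
```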
Procedia PDF Downloads 21
7389 Reasons to Redesign: Teacher Education for a Brighter Tomorrow
Authors: Deborah L. Smith
Abstract:
To review our program and determine the best redesign options, department members gathered feedback and input through focus groups, analysis of data, and a review of the current research to ensure that the changes proposed were not based solely on the state’s new professional standards. In designing course assignments and assessments, we listened to a variety of constituents, including students, other institutions of higher learning, MDE webinars, host teachers, literacy clinic personnel, and other disciplinary experts. As a result, we are designing a program that is more inclusive of a variety of field experiences for growth. We have determined ways to improve our program by connecting academic disciplinary knowledge, educational psychology, and community building both inside and outside the classroom for professional learning communities. The state’s release of new professional standards led my department members to question what is working and what needs improvement in our program. One aspect of our program that continues to be supported by research and data analysis is the function of supervised field experiences with meaningful feedback. We seek to expand in this area. Other data indicate that we have strengths in modeling a variety of approaches such as cooperative learning, discussions, literacy strategies, and workshops. In the new program, field assignments will be connected to multiple courses, and efforts to scaffold student learning to guide them toward best evidence-based practices will be continuous. Despite running a program that meets multiple sets of standards, there are areas of need that we directly address in our redesign proposal. Technology is ever-changing, so it’s inevitable that improving digital skills is a focus. In addition, scaffolding procedures for English Language Learners (ELL) or other students who struggle is imperative. Diversity, equity, and inclusion (DEI) has been an integral part of our curriculum, but the research indicates that more self-reflection and a deeper understanding of culturally relevant practices would help the program improve. Connections with professional learning communities will be expanded, as will leadership components, so that teacher candidates understand their role in changing the face of education. A pilot program will run in academic year 22/23, and additional data will be collected each semester through evaluations and continued program review.
Keywords: DEI, field experiences, program redesign, teacher preparation
Procedia PDF Downloads 169
7388 VISMA: A Method for System Analysis in Early Lifecycle Phases
Authors: Walter Sebron, Hans Tschürtz, Peter Krebs
Abstract:
The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system, and on the respective lifecycle phase. However, the analysis method chain still shows gaps as it should support system analysis during the lifecycle of a system from a rough concept in pre-project phase until end-of-life. This paper’s goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, like the conceptual or pre-project phase, or the project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, like e.g. a control unit for a pump motor. Furthermore, it can be also applied to non-electronic system parts. The VISMA method is a graphical sketch-like method that stratifies a system and its parts in inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components followed by the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named system under consideration (SUC), is selected, which is the focus of the subsequent analysis. Then, its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified. These components are the content of the first shell around the SUC. Next, the input and output components to the components in the first shell are identified and form the second shell around the first one. Continuing this way, shell by shell is added with its respective parts until the border of the complete system (external border) is reached. Last, two external shells are added to complete the system view, the environment and the use case shell. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC are described graphically via arrows, to highlight functional chains through the system. As a result, this method offers a clear and graphical description and overview of a system, its main parts and environment; however, the focus still remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships and possible dangers within a multidisciplinary development team.
Keywords: analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety
Procedia PDF Downloads 224
7387 Numerical Modeling of the Seismic Site Response in the Firenze Metropolitan Area
Authors: Najmeh Ayoqi, Emanuele Marchetti
Abstract:
OpenSWPC was used to model 2D and 3D seismic waveforms produced by various earthquakes in the Firenze metropolitan area. OpenSWPC is an open-source code for the simulation of seismic waves using the finite difference method (FDM) in a Message Passing Interface (MPI) environment. We considered both earthquake sources, with variable magnitude and location, and a pulse source in the modeling domain, which is optimal for simulating local seismic amplification effects. Multiple tests were performed to evaluate the dependence of the frequency content of the modeled output waveforms on the model grid size and time steps. Moreover, the effects of the velocity structure and absorbing boundary conditions on waveform features (amplitude, duration, and frequency content) were analysed. Finally, model results are compared with real waveforms and Horizontal-to-Vertical Spectral Ratios (HVSR), showing that seismic wave modeling can provide important information for seismic assessment in the city.
Keywords: OpenSWPC, earthquake, Firenze, HVSR, seismic wave
Procedia PDF Downloads 17
7386 Design and Development of an Autonomous Underwater Vehicle for Irrigation Canal Monitoring
Authors: Mamoon Masud, Suleman Mazhar
Abstract:
The Indus river basin’s irrigation system in Pakistan is extremely complex, spanning over 50,000 km. Maintaining and monitoring it demands enormous resources. This paper describes the development of a streamlined and low-cost autonomous underwater vehicle (AUV) for the monitoring of irrigation canals, including water quality monitoring and water theft detection. The vehicle is a hovering-type AUV, designed mainly for monitoring irrigation canals, with a fully documented design and open-source code. It has a length of 17 inches and a radius of 3.5 inches, with a depth rating of 5 m. Multiple sensors are present onboard the AUV for monitoring water quality parameters, including pH, turbidity, total dissolved solids (TDS), and dissolved oxygen. A 9-DOF Inertial Measurement Unit (IMU), the GY-85, is used, which incorporates an accelerometer (ADXL345), a gyroscope (ITG-3200), and a magnetometer (HMC5883L). The readings from these sensors are fused using the directional cosine matrix (DCM) algorithm, providing the AUV with its heading angle, while a pressure sensor gives the depth of the AUV. Two sonar-based range sensors are used for obstacle detection, enabling the vehicle to align itself with the edges of the irrigation canals. Four thrusters control the vehicle’s surge, heading, and heave, providing 3 DOF. The thrusters are controlled using a proportional-integral-derivative (PID) feedback control system, with the heading angle and depth as the controller’s inputs and the thruster motor speed as the output. A flow sensor has been incorporated to monitor the canal water level and detect water-theft events in the irrigation system. In addition to water theft detection, the vehicle also provides information on water quality, giving the ability to identify the source(s) of water contamination. Being low-cost, small-sized, and suitable for autonomous maneuvering and for water level and quality monitoring in irrigation canals, the AUV can be used for irrigation network monitoring on a large scale.
Keywords: autonomous underwater vehicle, irrigation canal monitoring, water quality monitoring, underwater line tracking
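The depth and heading loops described above are standard PID loops; a minimal sketch of the depth loop is below, with a crude first-order vehicle model. The gains, time step, and vehicle dynamics are illustrative assumptions, not the tuning reported in the paper.

```python
class PID:
    """Textbook proportional-integral-derivative controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy depth loop: thrust command (arbitrary units) drives a simple plant.
dt = 0.1
depth_pid = PID(kp=40.0, ki=2.0, kd=10.0, dt=dt)        # illustrative gains
depth, velocity = 0.0, 0.0
for step in range(300):                                  # 30 s of simulated time
    thrust = depth_pid.update(setpoint=2.0, measurement=depth)
    velocity += (thrust - 5.0 * velocity) * dt / 20.0    # crude vehicle dynamics
    depth += velocity * dt
print(round(depth, 2))   # settles near the 2 m setpoint
```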
Procedia PDF Downloads 147
7385 Beliefs, Practices and Identity about Bilingualism: Korean-Australian Immigrant Parents and Family Language Policies
Authors: Eun Kyong Park
Abstract:
This study explores the relationships between immigrant parents’ beliefs about bilingualism, family literacy practices, and their children’s identity development in Sydney, Australia. The project examines how these parents’ ideological beliefs and knowledge relate to their provision of family literacy practices and their management of the environment for their bilingual children, based on family language policy (FLP). This is a follow-up to the author’s prior thesis, which presented Korean immigrant mothers’ beliefs and decision-making in support of their children’s bilingualism. It includes fathers’ perspectives within the participating families as a whole by foregrounding their perceptions of bilingual and identity development. It adopts a qualitative approach with twelve immigrant mothers and fathers living in a Korean-Australian community whose child attends one of the community’s Korean language programs. This time, it includes introspective and self-evocative auto-ethnographic data. The initial data set, collected in the first part of the study, demonstrated that the mothers provided rich, diverse, and specific family literacy activities for their children. These mothers selected specific practices to facilitate their child’s bilingual development at home. The second part of the data has been collected over a three-month period: 1) a focus group interview with mothers; 2) a brief self-report from fathers; 3) the researcher’s reflective diary. To analyze these multiple data sources, thematic analysis and coding were used to reveal the parents’ ideologies surrounding bilingualism and bilingual identities. The study highlights the complexity of language and literacy practices in the family domain as interrelated with sociocultural factors. This project makes an original contribution to the field of bilingualism and FLP, and a methodological contribution by introducing auto-ethnographic input on this community’s lived practices. It will empower Korean-Australian immigrant families and other multilingual communities to reflect on their beliefs and practices for their emerging bilingual children. It will also enable educators and policymakers to access authentic information about how bilingualism is practiced within these immigrant families in multiple ways and help build culturally appropriate partnerships between home and the school community.
Keywords: bilingualism, beliefs, identity, family language policy, Korean immigrant parents in Australia
Procedia PDF Downloads 136
7384 Hyper-Immunoglobulin E (Hyper-IgE) Syndrome In Skin Of Color: A Retrospective Single-Centre Observational Study
Authors: Rohit Kothari, Muneer Mohamed, Vivekanandh K., Sunmeet Sandhu, Preema Sinha, Anuj Bhatnagar
Abstract:
Introduction: Hyper-IgE syndrome is a rare primary immunodeficiency syndrome characterised by a triad of severe atopic dermatitis, recurrent pulmonary infections, and recurrent staphylococcal skin infections. The diagnosis requires a high degree of suspicion and typical clinical features, not merely a rise in serum IgE levels, which may be seen in multiple conditions. Genetic studies are not always possible in a resource-poor setting. This study highlights various presentations of Hyper-IgE syndrome in children with skin of color. Case series: Our study had six children with Hyper-IgE syndrome aged two months to ten years. All had onset in the first ten months of life except one with late onset at two years. All had a recurrent eczematoid rash that responded poorly to conventional treatment, secondary infection, multiple episodes of hospitalisation for pulmonary infection, and raised serum IgE levels. One case had occasional vesicles, bullae, and crusted plaques over both extremities. Genetic study was possible in only one of them, who was found to have a pathogenic homozygous deletion of exons 15 to 18 in the DOCK8 gene; he underwent bone marrow transplant (BMT) but succumbed to a lower respiratory tract infection two months after BMT. The rest received multiple courses of antibiotics, oral/topical steroids, and cyclosporine intermittently, with variable response. Discussion: Our study highlights various characteristics, presentations, and management of this rare syndrome in children. Knowledge of these manifestations in skin of color will facilitate early identification and contribute to optimal care of patients, as representative data are limited in the literature.
Keywords: absolute eosinophil count, atopic dermatitis, eczematous rash, hyper-immunoglobulin E syndrome, pulmonary infection, serum IgE, skin of color
Procedia PDF Downloads 138
7383 The Sr-Nd Isotope Data of the Platreef Rocks from the Northern Limb of the Bushveld Igneous Complex: Evidence of Contrasting Magma Composition and Origin
Authors: Tshipeng Mwenze, Charles Okujeni, Abdi Siad, Russel Bailie, Dirk Frei, Marcelene Voigt, Petrus Le Roux
Abstract:
The Platreef is a platinum group element (PGE) deposit in the northern limb of the Bushveld Igneous Complex (BIC), which was emplaced as a series of mafic and ultramafic sills between the Main Zone (MZ) and the country rocks. The PGE mineralisation in the Platreef is hosted in different rock types, and its distribution and style vary with depth and along strike. This study contributes towards understanding the processes involved in the genesis of the Platreef. Twenty-four Platreef (2 harzburgites, 4 olivine pyroxenites, 17 feldspathic pyroxenites, and 1 gabbronorite) and a few MZ (1 gabbronorite and 1 leucogabbronorite) quarter-core samples were collected from four drill cores (e.g., TN754, TN200, SS339, and OY482) and analysed for whole-rock Sr-Nd isotope data. The results show positive ɛNd values (+3.53 to +7.51) for the harzburgites, suggesting their parental magmas were derived from the depleted mantle. The remaining Platreef rocks have negative ɛNd values (-2.91 to -22.88) and show significant variations in Sr-Nd isotopic composition. The first group of Platreef samples has relatively high isotopic compositions (ɛNd = -2.91 to -5.68; ⁸⁷Sr/⁸⁶Sri = 0.709177-0.711998). The second group of Platreef samples has Sr ratios (⁸⁷Sr/⁸⁶Sri = 0.709816-0.712106) overlapping with samples of the first group but slightly lower ɛNd values (-7.44 to -8.39). Lastly, the third group of Platreef samples has lower ɛNd values (-10.82 to -14.32) and lower Sr ratios (⁸⁷Sr/⁸⁶Sri = 0.707545-0.710042) than the samples of the two Platreef groups mentioned above. There is, however, a Platreef sample with an ɛNd value (-5.26) in range with the Platreef samples of the first group, but its Sr ratio (0.707281) is the lowest even when compared to samples of the third Platreef group. There are also five other Platreef samples which have either anomalous ɛNd values or Sr ratios, which makes it difficult to assess their isotopic compositions relative to the other samples. These isotopic variations in the Platreef samples indicate both multiple sources and multiple magma chambers in which varying styles of crustal contamination operated during the evolution of these magmas prior to their emplacement into the Platreef setting as sills. Furthermore, the MZ rocks have different Sr-Nd isotopic compositions (for the OY482 gabbronorite, ɛNd = +0.65 and ⁸⁷Sr/⁸⁶Sri = 0.711746; for the TN754 leucogabbronorite, ɛNd = -7.44 and ⁸⁷Sr/⁸⁶Sri = 0.709322), which indicate not only different MZ magma chambers but also magmas different from those of the Platreef. Although the Platreef is still considered a single stratigraphic unit in the northern limb of the BIC, its genesis involved multiple magmatic processes which evolved independently from each other.
Keywords: crustal contamination styles, magma chambers, magma sources, multiple sills emplacement
Procedia PDF Downloads 167
7382 Heroin Withdrawal, Prison and Multiple Temporalities
Authors: Ian Walmsley
Abstract:
The aim of this paper is to explore the influence of time and temporality on the experience of coming off heroin in prison. The presentation draws on qualitative data collected during a small-scale pilot study of the role of self-care in the process of coming off drugs in prison. Time and temporality emerged as a key theme in the interview transcripts. Drug-dependent prisoners' experience of time in prison has not been recognized in the research literature. Instead, the literature on prison time typically views prisoners as a homogenous group or tends to focus on the influence of aging and gender on prison time. Furthermore, there is a tendency in the literature on prison drug treatment and recovery to conceptualize drug-dependent prisoners as passive recipients of prison healthcare rather than active agents. In building on these gaps, this paper argues that drug-dependent prisoners experience multiple temporalities, involving an interaction between the body-times of the drug-dependent prisoner and the economy of time in prison. One consequence of this interaction is the feeling that, at this point in their prison sentence, they are doing double prison time. The second part of the argument is that time and temporality were a means through which they governed their withdrawing bodies. In addition, this paper comments on the challenges of prison research in England.
Keywords: heroin withdrawal, time and temporality, prison, body
Procedia PDF Downloads 276
7381 Effectiveness of Using Multiple Non-pharmacological Interventions to Prevent Delirium in the Hospitalized Elderly
Authors: Yi Shan Cheng, Ya Hui Yeh, Hsiao Wen Hsu
Abstract:
Delirium is an acute state of confusion that is mainly the result of the interaction of many factors, including age > 65 years, comorbidity, cognitive and visual/auditory impairment, dehydration, pain, sleep disorder, indwelling tubes and lines, general anesthesia, and major surgery. Research shows that the prevalence of delirium in hospitalized elderly patients is over 50%. If it does not improve in time, it may cause cognitive decline or impairment, and it not only prolongs the length of hospital stay but also increases mortality. Some studies have shown that multiple non-pharmacological interventions, namely reorientation, early mobility, sleep promotion, and nutritional support (including water intake), are the most effective and common strategies and can improve or prevent delirium in the hospitalized elderly. In Taiwan, only one study has compared the delirium incidence of older patients undergoing orthopedic surgery between multiple non-pharmacological interventions and general routine care. Therefore, the purpose of this study is to address the prevention and reduction of delirium incidence density in the hospitalized medical elderly, to provide clinical nurses with a reference for clinical implementation, and to support follow-up research. This study is a quasi-experimental design using purposive sampling. Samples are drawn from two wards, the geriatric ward and the general medicine ward, at a medical center in central Taiwan. The estimated sample size is at least 100, and data will be collected through a self-administered structured questionnaire including demographic and professional evaluation items. Case recruitment started on 5/13/2023. The research results will be analyzed with SPSS for Windows 22.0 software, using descriptive statistics and inferential statistics: logistic regression, generalized estimating equations (GEE), and multivariate analysis of variance (MANOVA).
Keywords: multiple nonpharmacological interventions, hospitalized elderly, delirium incidence, delirium
Procedia PDF Downloads 78
7380 Finding a Set of Long Common Substrings with Repeats from m Input Strings
Authors: Tiantian Li, Lusheng Wang, Zhaohui Zhan, Daming Zhu
Abstract:
In this paper, we propose two string problems and study algorithms and complexity for various versions of those problems. Let S = {s₁, s₂, . . . , sₘ} be a set of m strings. A common substring of S is a substring appearing in every string in S. Given a set of m strings S = {s₁, s₂, . . . , sₘ} and a positive integer k, we want to find a set C of k common substrings of S such that the k common substrings in C appear in the same order and have no overlap among the m input strings in S, and the total length of the k common substrings in C is maximized. This problem is referred to as the longest total length of k common substrings from m input strings (LCSS(k, m) for short). The other problem we study here is called the longest total length of a set of common substrings with length more than l from m input strings (LSCSS(l, m) for short). Given a set of m strings S = {s₁, s₂, . . . , sₘ} and a positive integer l, for LSCSS(l, m) we want to find a set of common substrings of S, each of length more than l, such that the total length of all the common substrings is maximized. We show that both problems are NP-hard when k and m are variables. We propose dynamic programming algorithms with time complexity O(k n₁n₂) and O(n₁n₂) to solve LCSS(k, 2) and LSCSS(l, 2), respectively, where n₁ and n₂ are the lengths of the two input strings. We then design an algorithm for LSCSS(l, m) when every length > l common substring appears once in each of the m − 1 input strings. The running time is O(n₁²m), where n₁ is the length of the input string with no restriction on length > l common substrings. Finally, we propose a fixed-parameter algorithm for LSCSS(l, m), where each length > l common substring appears m − 1 + c times among the m − 1 input strings (other than s₁). In other words, each length > l common substring may repeatedly appear at most c times among the m − 1 input strings {s₂, s₃, . . . , sₘ}. The running time of the proposed algorithm is O((n₁2ᶜ)²m), where n₁ is the length of the input string with no restriction on repeats. LSCSS(l, m) is proposed to handle whole-chromosome sequence alignment for different strains of the same species, where more than 98% of letters in core regions are identical.
Keywords: dynamic programming, algorithm, common substrings, string
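The two-string cases above build on the standard O(n₁n₂) dynamic programming table of longest common suffixes; the sketch below computes that table and then extracts a non-overlapping, order-preserving set of length > l common substrings with a simple greedy pass. The greedy step is only an illustration and is not the paper's optimal LCSS/LSCSS algorithm.

```python
def common_substrings_longer_than(s1, s2, l):
    """dp[i][j] = length of the longest common suffix of s1[:i] and s2[:j].
    From the table, read off common substrings of length > l; a greedy pass
    keeps a non-overlapping, order-preserving subset (illustration only).
    """
    n1, n2 = len(s1), len(s2)
    dp = [[0] * (n2 + 1) for _ in range(n1 + 1)]
    candidates = []                      # (start in s1, start in s2, length)
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > l:
                    candidates.append((i - dp[i][j], j - dp[i][j], dp[i][j]))
    # Greedy: longest first; keep matches that overlap nothing kept so far
    # and preserve the same left-to-right order in both strings.
    chosen = []
    for a, b, length in sorted(candidates, key=lambda t: -t[2]):
        ok = all(a + length <= ca or a >= ca + cl for ca, cb, cl in chosen) and \
             all(b + length <= cb or b >= cb + cl for ca, cb, cl in chosen) and \
             all((a < ca) == (b < cb) for ca, cb, cl in chosen)
        if ok:
            chosen.append((a, b, length))
    return [s1[a:a + length] for a, b, length in sorted(chosen)]

print(common_substrings_longer_than("ACGTTACGGA", "TTACGACGTA", 2))
```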
Procedia PDF Downloads 13
7379 The Importance of Functioning and Disability Status Follow-Up in People with Multiple Sclerosis
Authors: Sanela Slavkovic, Congor Nad, Spela Golubovic
Abstract:
Background: The diagnosis of multiple sclerosis (MS) is a major life challenge and has repercussions on all aspects of the daily functioning of those affected by it: personal activities, social participation, and quality of life. Regular follow-up of the neurological status alone is not informative enough to provide data on the sort of support and rehabilitation that is required. Objective: The aim of this study was to establish the current level of functioning of persons affected by MS and the factors that influence it. Methods: The study was conducted in Serbia on a sample of 108 persons with the relapsing-remitting form of MS, aged 20 to 53 (mean 39.86 years; SD 8.20 years). All participants were fully ambulatory. Methods applied in the study include the Expanded Disability Status Scale (EDSS) and the World Health Organization Disability Assessment Schedule, WHODAS 2.0 (36-item version, self-administered). Results: Participants were found to experience the most problems in the domains of Participation, Mobility, Life activities, and Cognition. The least difficulties were found in the domain of Self-care. Symptom duration was the only control variable with a significant partial contribution to the prediction of the WHODAS scale score (β = 0.30, p < 0.05). The total EDSS score correlated with the total WHODAS 2.0 score (r = 0.34, p = 0.00). Statistically significant differences in the EDSS 0-5.5 range were found between categories (0-1.5; 2-3.5; 4-5.5). The more pronounced a participant’s EDSS score was, although not indicative of large changes in the neurological status, the more apparent the changes in the functional domain, i.e., in all areas covered by WHODAS 2.0. The Pyramidal (β = 0.34, p < 0.05) and Bowel and bladder (β = 0.24, p < 0.05) functional systems were found to have a significant partial contribution to the prediction of the WHODAS score. Conclusion: Measuring functioning and disability is important in the follow-up of persons suffering from MS in order to plan rehabilitation and define areas in which additional support is needed.
Keywords: disability, functionality, multiple sclerosis, rehabilitation
Procedia PDF Downloads 120
7378 The Effects of Learning Engagement on Interpreting Performance among English Major Students
Authors: Jianhua Wang, Ying Zhou, Xi Zhang
Abstract:
To establish the mechanism by which learning engagement influences interpreters' performance, the present study administered a questionnaire to a sample of 927 English major students, of which 804 responses were valid, and used a structural equation model as the basis for empirical analysis and statistical inference on the sample data. To explore this mechanism, a path model of interpreting processes with the three variables 'input-environment-output' was constructed. The results showed that the effect of each 'environment' variable on interpreting ability was different from, and greater than, that of the 'input' variable, and learning engagement was the greatest influencing factor. At the same time, peer interaction had a significant influence on interpreting performance. The results suggest that it is crucial to provide effective guidance for optimizing learning engagement in interpreter training and teaching research, both by improving environmental support and by building platforms for peer interaction, beginning with learning engagement.
Keywords: learning engagement, interpreting performance, interpreter training, English major students
Procedia PDF Downloads 207
7377 High-Frequency Cryptocurrency Portfolio Management Using Multi-Agent System Based on Federated Reinforcement Learning
Authors: Sirapop Nuannimnoi, Hojjat Baghban, Ching-Yao Huang
Abstract:
Over the past decade, with the fast development of blockchain technology since the birth of Bitcoin, there has been a massive increase in the usage of cryptocurrencies. Cryptocurrencies are often not seen as a sound investment opportunity due to the market's erratic behavior and high price volatility. With the recent success of deep reinforcement learning (DRL), portfolio management can be modeled and automated. In this paper, we propose a novel DRL-based multi-agent system to automatically make proper trading decisions on multiple cryptocurrencies and gain profits in the highly volatile cryptocurrency market. We also extend this multi-agent system with horizontal federated transfer learning for better adaptation to the inclusion of new cryptocurrencies in our portfolio; therefore, through the concept of diversification, we can maximize profits and minimize trading risks. Experimental results across multiple simulation scenarios reveal that the proposed algorithmic trading system offers three promising key advantages over other systems: maximized profits, minimized risks, and adaptability.
Keywords: cryptocurrency portfolio management, algorithmic trading, federated learning, multi-agent reinforcement learning
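The abstract does not detail the federated aggregation step; as one common way to realize horizontal federated learning across per-asset agents, the sketch below averages the agents' policy weights FedAvg-style, weighted by the amount of local data. The parameter names and sizes are purely illustrative.

```python
import numpy as np

def federated_average(agent_weights, client_sizes=None):
    """FedAvg-style aggregation: average each parameter across agents,
    optionally weighted by the amount of local data each agent trained on.

    agent_weights: list of dicts mapping parameter name -> np.ndarray,
    one dict per asset-specific trading agent (shapes must match).
    """
    n = len(agent_weights)
    if client_sizes is None:
        client_sizes = [1.0] * n
    total = float(sum(client_sizes))
    keys = agent_weights[0].keys()
    return {
        k: sum(w[k] * (s / total) for w, s in zip(agent_weights, client_sizes))
        for k in keys
    }

# Toy example: three agents with a single 2x2 weight matrix each.
agents = [{"policy.W": np.full((2, 2), v)} for v in (0.0, 1.0, 2.0)]
global_weights = federated_average(agents, client_sizes=[100, 200, 100])
print(global_weights["policy.W"])   # weighted mean = 1.0 everywhere
```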
Procedia PDF Downloads 119
7376 Recent Advances in Pulse Width Modulation Techniques and Multilevel Inverters
Authors: Satish Kumar Peddapelli
Abstract:
This paper presents advances in pulse width modulation (PWM), a method of carrying information on a train of pulses in which the information is encoded in the pulse width. PWM is used to control the inverter output voltage by exercising the control within the inverter itself, adjusting its ON and OFF periods. From a fixed DC input voltage, an AC output voltage is obtained; in variable-speed AC motor drives, the AC output voltage is produced from a constant DC voltage by an inverter. Recent developments in power electronics and semiconductor technology have led to improvements in power electronic systems. Hence, different circuit configurations, namely multilevel inverters, have become popular and have attracted considerable interest from researchers. A fast Space-Vector Pulse Width Modulation (SVPWM) method for a five-level inverter is also discussed. In this method, the space vector diagram of the five-level inverter is decomposed into six space vector diagrams of three-level inverters. In turn, each of these six three-level space vector diagrams is decomposed into six space vector diagrams of two-level inverters. After decomposition, all the remaining procedures for the three-level SVPWM are carried out as for a conventional two-level inverter. The proposed method reduces the algorithm complexity and the execution time, and it can also be applied to multilevel inverters above the five-level. The experimental setup for a three-level diode-clamped inverter is developed using a TMS320LF2407 DSP controller, and the experimental results are analysed.
Keywords: five-level inverter, space vector pulse wide modulation, diode clamped inverter, electrical engineering
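After the decomposition described above, what remains is conventional two-level SVPWM; as background, the sketch below computes the two-level sector and dwell times from the standard textbook formulas. The modulation-index convention is an assumption (definitions vary between texts), and this is not the paper's five-level algorithm.

```python
import math

def svpwm_dwell_times(theta, m, ts=1.0):
    """Two-level SVPWM sector and dwell times for one switching period.

    theta: reference-vector angle in radians, in [0, 2*pi).
    m: modulation index, here taken as sqrt(3)*|Vref|/Vdc, so m = 1 is the
       boundary of the linear modulation region (conventions vary by text).
    ts: switching period. Returns (sector, t1, t2, t0).
    """
    sector = int(theta // (math.pi / 3)) + 1          # sectors 1..6
    alpha = theta - (sector - 1) * math.pi / 3        # angle inside the sector
    t1 = ts * m * math.sin(math.pi / 3 - alpha)       # adjacent active vector
    t2 = ts * m * math.sin(alpha)                     # next active vector
    t0 = ts - t1 - t2                                 # zero vectors
    return sector, t1, t2, t0

# Sweep one sixth of the fundamental period at 90% of the linear limit.
for deg in (0, 15, 30, 45, 59):
    print(deg, [round(x, 3) for x in svpwm_dwell_times(math.radians(deg), 0.9)])
```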
Procedia PDF Downloads 388
7375 Glucose Monitoring System Using Machine Learning Algorithms
Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe
Abstract:
Biomedical analysis is an indispensable procedure for identifying health-related conditions like diabetes. Regularly monitoring the glucose level in our body helps identify hyperglycemia and hypoglycemia, which can cause severe medical problems like nerve damage or kidney disease. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method. An experimental database is generated based on the colorimetric technique. The image of the glucose solution is captured by the Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV, and LUX color space values. Regression algorithms such as multiple linear regression, decision tree, random forest, and XGBoost were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. The image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
Keywords: artificial intelligence, glucose detection, glucose oxidase, peroxidase, image processing, machine learning
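A minimal sketch of the multiple-linear-regression step described above is given below with scikit-learn, assuming each imaged sample is summarised by its mean colour-channel values; the synthetic calibration data are illustrative, not the paper's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Illustrative data: mean R, G, B values per imaged solution and the known
# glucose concentration (mg/dL) of each calibration sample.
rng = np.random.default_rng(1)
concentration = rng.uniform(50, 300, size=60)
features = np.column_stack([
    200 - 0.3 * concentration + rng.normal(0, 3, 60),   # R channel
    180 - 0.5 * concentration + rng.normal(0, 3, 60),   # G channel
    120 + 0.1 * concentration + rng.normal(0, 3, 60),   # B channel
])

X_train, X_test, y_train, y_test = train_test_split(
    features, concentration, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))
print("Predicted concentrations:", model.predict(X_test[:3]).round(1))
```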
Procedia PDF Downloads 203