Search results for: time delay neural network
19268 Neuroecological Approach for Anthropological Studies in Archaeology
Authors: Kalangi Rodrigo
Abstract:
The term neuroecology denotes the study of adaptive variation in cognition and the brain. The field emerged in the 1980s, when researchers began to apply the methods of comparative evolutionary biology to cognitive processes and the neural mechanisms underlying cognition. In archaeology and anthropology, we observe behaviors such as social learning, innovative feeding and foraging, tool use, and social manipulation to infer the cognitive processes of ancient humans. In comparative studies, brainstem size has been used as a control variable, and phylogeny has been controlled using independent contrasts. Both disciplines need to be enriched with comparative literature and with neurological, experimental, and behavioral studies among tribal peoples as well as primate groups, which will carry the research to its potential end. Neuroecology examines the relations between ecological selection pressures and species or sex differences in cognition and the brain. The goal of neuroecology is to understand how natural selection acts on cognition and its neural apparatus. Furthermore, neuroecology will eventually lead both principal disciplines toward ethology, in which human behavior and social organization are studied from a biological perspective, whether ethnoarchaeological or prehistoric. Archaeology should adopt the general approach of neuroecology: phylogenetic comparative methods can be used in the field, alongside new findings on the cognitive mechanisms and brain structures involved in mating systems, social organization, communication, and foraging. The contribution of neuroecology to archaeology and anthropology is the information it provides on the selective pressures that have shaped the evolution of human cognition and brain structure. It will shed new light on the path of evolutionary studies, including behavioral ecology, primate archaeology, and cognitive archaeology.
Keywords: neuroecology, archaeology, brain evolution, cognitive archaeology
Procedia PDF Downloads 120
19267 DUSP16 Inhibition Rescues Neurogenic and Cognitive Deficits in Alzheimer's Disease Mice Models
Authors: Huimin Zhao, Xiaoquan Liu, Haochen Liu
Abstract:
The major challenge facing Alzheimer's disease (AD) drug development is how to effectively improve cognitive function in clinical practice. Growing evidence indicates that stimulating hippocampal neurogenesis is a strategy for restoring cognition in animal models of AD. The mitogen-activated protein kinase (MAPK) pathway is a crucial factor in neurogenesis and is negatively regulated by dual-specificity phosphatase 16 (DUSP16). Transcriptome analysis of post-mortem brain tissue revealed up-regulation of DUSP16 expression in AD patients. Additionally, DUSP16 is involved in regulating the proliferation and neural differentiation of neural progenitor cells (NPCs). Nevertheless, whether DUSP16 ameliorates cognitive disorders by influencing NPC differentiation in AD mice remains unclear. Our study demonstrates an association between DUSP16 SNPs and clinical progression in individuals with mild cognitive impairment (MCI). In addition, we found that increased DUSP16 expression in both the 3×Tg and SAMP8 models of AD led to impairments in NPC differentiation. Silencing DUSP16 produced cognitive benefits, together with the induction of adult hippocampal neurogenesis (AHN) and synaptic plasticity, in AD mice. Furthermore, we found that DUSP16 is involved in NPC differentiation by regulating c-Jun N-terminal kinase (JNK) phosphorylation. Moreover, the increase in DUSP16 may be regulated by the ETS transcription factor ELK1, which binds to the promoter region of DUSP16; loss of ELK1 resulted in decreased DUSP16 mRNA and protein levels. Our data uncover a potential regulatory role for DUSP16 in adult hippocampal neurogenesis and suggest a possible target for AD intervention.
Keywords: Alzheimer's disease, cognitive function, DUSP16, hippocampal neurogenesis
Procedia PDF Downloads 72
19266 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation in pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
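The abstract's core mechanism — a dynamic dictionary of concept definitions queried by nearest-neighbor comparison — can be sketched in a few lines. This is an illustration of the general idea only, not the authors' implementation; the class and method names are hypothetical, and real embeddings would come from a pre-trained encoder rather than hand-written vectors.

```python
import math

class DynamicDictionary:
    """Minimal sketch of a dynamic dictionary of concept definitions:
    each identity maps to a few example feature vectors, and queries are
    resolved by nearest-neighbor comparison, as the abstract describes."""

    def __init__(self):
        self.concepts = {}  # label -> list of example embeddings

    def add_example(self, label, embedding):
        # Identities are directly updatable as pedestrians enter the scene.
        self.concepts.setdefault(label, []).append(list(embedding))

    def remove(self, label):
        # ...and removable as they leave it.
        self.concepts.pop(label, None)

    def query(self, embedding):
        # Nearest-neighbor comparison against all stored definitions.
        best_label, best_dist = None, float("inf")
        for label, examples in self.concepts.items():
            for example in examples:
                d = math.dist(embedding, example)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label

# Usage: enroll two pedestrians from a frame each, then re-identify a query.
dictionary = DynamicDictionary()
dictionary.add_example("person_A", [1.0, 0.1])
dictionary.add_example("person_B", [0.0, 0.9])
print(dictionary.query([0.9, 0.2]))  # nearest to person_A's stored example
```

The same structure supports the few-shot setting: each label needs only a handful of example vectors, and the knowledge domain grows or shrinks without retraining.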
Procedia PDF Downloads 76
19265 A Design of the Infrastructure and Computer Network for Distance Education, Online Learning via New Media, E-Learning and Blended Learning
Authors: Sumitra Nuanmeesri
Abstract:
This research studies, analyzes, and designs a model of the infrastructure and computer networks for distance education, online learning via new media, e-learning, and blended learning. The information collected from the study and analysis process was evaluated with the index of item-objective congruence (IOC) by nine specialists in order to design the model. Evaluation of the model by the sample of nine specialists yielded a mean score of 3.85. The results showed that the designed infrastructure and computer networks are appropriate to a great extent.
Keywords: blended learning, new media, infrastructure and computer network, tele-education, online learning
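The IOC evaluation mentioned above has a simple, well-known form: each specialist rates an item +1 (clearly congruent with the objective), 0 (unsure), or -1 (not congruent), and the index is the mean rating. A sketch, with hypothetical ratings (the paper does not report the per-item scores):

```python
def ioc(ratings):
    """Index of item-objective congruence: the mean of expert ratings,
    where each specialist scores an item +1 (congruent), 0 (unsure), or
    -1 (not congruent). Items are conventionally retained when IOC >= 0.5."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings by the nine specialists for one design item.
ratings = [1, 1, 1, 0, 1, 1, -1, 1, 1]
print(round(ioc(ratings), 2), ioc(ratings) >= 0.5)
```

With these example ratings the item scores 0.67 and would be retained under the usual 0.5 cutoff.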
Procedia PDF Downloads 402
19264 Enhancement of Road Defect Detection Using First-Level Algorithm Based on Channel Shuffling and Multi-Scale Feature Fusion
Authors: Yifan Hou, Haibo Liu, Le Jiang, Wandong Su, Binqing Wang
Abstract:
Road defect detection is crucial for modern urban management and infrastructure maintenance. Traditional road defect detection methods mostly rely on manual labor, which is not only inefficient but also makes reliability difficult to ensure. Moreover, existing deep learning-based road defect detection models have poor detection performance in complex environments and lack robustness to multi-scale targets. To address this challenge, this paper proposes a distinct detection framework based on a one-stage network structure. We design a deep feature extraction network based on RCSDarknet, which applies channel shuffling to enhance information fusion between tensors. Through repeated stacking of RCS modules, the information flow between the channels of adjacent-layer features is enhanced, improving the model's ability to capture target spatial features. In addition, a multi-scale feature fusion mechanism with weighted dual flow paths is adopted to fuse spatial features of different scales, further improving the detection performance of the model across scales. To validate the performance of the proposed algorithm, we tested it on the RDD2022 dataset. The experimental results show that the enhanced algorithm achieved 84.14% mAP, which is 1.06% higher than the advanced YOLOv8 algorithm. Visualization of the results also shows that our proposed algorithm performs well in detecting targets of different scales in complex scenes. These experimental results demonstrate the effectiveness and superiority of the proposed algorithm, providing valuable insights for advancing real-time road defect detection methods.
Keywords: roads, defect detection, visualization, deep learning
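The channel shuffling the abstract relies on is, in its generic ShuffleNet-style form, a reshape-and-transpose over the channel axis. The sketch below shows that operation on a plain list of channel indices; the internals of the paper's RCS modules are not described in the abstract, so this illustrates only the standard shuffle they build on.

```python
def channel_shuffle(channels, groups):
    """ShuffleNet-style channel shuffle on a list of channel tensors: split
    the channels into groups, then interleave them so that subsequent grouped
    operations mix information across groups."""
    n = len(channels)
    assert n % groups == 0
    per_group = n // groups
    # Index c = g*per_group + k maps to k*groups + g: a reshape-and-transpose.
    return [channels[g * per_group + k]
            for k in range(per_group) for g in range(groups)]

# Usage: 6 channels in 2 groups are interleaved as [0, 3, 1, 4, 2, 5].
print(channel_shuffle([0, 1, 2, 3, 4, 5], 2))
```

Because the mapping is a pure permutation, it adds no parameters and no FLOPs beyond a memory rearrangement, which is why it is popular in lightweight detectors.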
Procedia PDF Downloads 9
19263 “Friction Surfaces” of Airport Emergency Plan
Authors: Jakub Kraus, Vladimír Plos, Peter Vittek
Abstract:
This article focuses on the issue of airport emergency plans, the documents describing reactions to events with an impact on aviation safety or aviation security. It specifically examines the use and creation of emergency plans, where a number of disagreements can be found among the different stakeholders to whom the airport emergency plan applies. These are the friction surfaces at the interfaces, which must be identified and managed smoothly to avoid dangerous situations or delays.
Keywords: airport emergency plan, aviation safety, aviation security, comprehensive management system, friction surfaces of airport emergency plan, interfaces of processes
Procedia PDF Downloads 519
19262 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. 
This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
Keywords: electric motor, fault detection, frequency features, temporal features
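To make the feature-extraction step concrete, here is a minimal sketch of two time-domain features (RMS and crest factor) and one frequency-domain feature (dominant frequency via a direct DFT) computed from a vibration signal. The exact feature set the study used is not listed in the abstract, so these are representative examples only.

```python
import math

def temporal_features(signal):
    """Simple time-domain features of a vibration signal: RMS and crest factor
    (peak amplitude over RMS, which rises with impulsive bearing faults)."""
    n = len(signal)
    rms = math.sqrt(sum(s * s for s in signal) / n)
    crest = max(abs(s) for s in signal) / rms
    return rms, crest

def dominant_frequency(signal, fs):
    """Frequency-domain feature: the frequency (Hz) of the largest DFT
    magnitude, computed with a direct (slow but dependency-free) DFT."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC component
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

# Usage: a 50 Hz sine sampled at 1 kHz should peak at 50 Hz.
fs = 1000
signal = [math.sin(2 * math.pi * 50 * i / fs) for i in range(200)]
print(dominant_frequency(signal, fs))  # 50.0
```

In a bearing-fault pipeline, features like these would be computed per window and fed to the ANOVA/t-test screening and the SVM or neural-network classifier the abstract describes.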
Procedia PDF Downloads 47
19261 Sensitivity, Specificity and Efficiency Real-Time PCR Using SYBR Green Method to Determine Porcine and Bovine DNA Using Specific Primer Cytochrome B Gene
Authors: Ahlam Inayatullah Badrul Munir, M. Husaini A. Rahman, Mohd Sukri Hassan
Abstract:
Real-time PCR is a molecular biology technique that is currently widely used in halal services to differentiate between porcine and bovine DNA. It is important for students and laboratory workers to learn how the technique can be run smoothly and reliably. As in conventional PCR, real-time PCR needs a DNA template, primers, a polymerase enzyme, dNTPs, and buffer; the difference is that real-time PCR has an additional component, a fluorescent dye, the most common of which is SYBR Green. The purpose of this study was to find out how sensitive, specific, and efficient the real-time PCR technique is when combined with the SYBR Green method and specific primers for the cytochrome b (CYT b) gene. The results showed that the real-time PCR technique using SYBR Green is capable of detecting porcine and bovine DNA at concentrations down to 0.0001 ng/µl. The efficiency for both types of DNA was 91% (within the accepted 90-110% range). Moreover, the CYT b primers were species-specific: the bovine primer detected only bovine DNA, and the porcine primer detected only porcine DNA. It can therefore be concluded that the real-time PCR technique combined with specific CYT b primers and the SYBR Green method is sensitive, specific, and efficient for detecting porcine and bovine DNA.
Keywords: sensitivity, specificity, efficiency, real-time PCR, SYBR Green, cytochrome b, porcine DNA, bovine DNA
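The efficiency figure quoted above is conventionally derived from a standard curve: Ct is regressed on log10 of template concentration, and efficiency E = 10^(-1/slope) - 1, with a slope of about -3.32 corresponding to 100% (perfect doubling per cycle). A sketch with hypothetical dilution-series data (the study's raw Ct values are not given in the abstract):

```python
def amplification_efficiency(log10_conc, ct_values):
    """PCR efficiency from a standard curve: least-squares fit of
    Ct = a*log10(conc) + b, then E = 10**(-1/slope) - 1."""
    n = len(log10_conc)
    mx = sum(log10_conc) / n
    my = sum(ct_values) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(log10_conc, ct_values)) / \
            sum((x - mx) ** 2 for x in log10_conc)
    return 10 ** (-1 / slope) - 1

# Hypothetical 10-fold dilution series; a slope near -3.32 gives ~100%.
log10_conc = [0, -1, -2, -3, -4]
ct = [18.0, 21.32, 24.64, 27.96, 31.28]
print(round(amplification_efficiency(log10_conc, ct) * 100, 1))  # ~100 (%)
```

A reported 91% efficiency would correspond to a somewhat steeper slope (around -3.56) on such a curve.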
Procedia PDF Downloads 315
19260 Characterization and Correlation of Neurodegeneration and Biological Markers of Model Mice with Traumatic Brain Injury and Alzheimer's Disease
Authors: J. DeBoard, R. Dietrich, J. Hughes, K. Yurko, G. Harms
Abstract:
Alzheimer’s disease (AD) is a predominant type of dementia and is likely a major cause of neural network impairment. The pathogenesis of this neurodegenerative disorder has yet to be fully elucidated. There are currently no known cures for the disease, and the best hope is to be able to detect it early enough to impede its progress. Beyond age and genetics, another prevalent risk factor for AD might be traumatic brain injury (TBI), which has similar neurodegenerative hallmarks. Our research focuses on obtaining information and methods to be able to predict when neurodegenerative effects might occur at a clinical level by observation of events at a cellular and molecular level in model mice. First, we wish to introduce our evidence that brain damage can be observed via brain imaging prior to the noticeable loss of neuromuscular control in model mice of AD. We then show our evidence that some blood biomarkers might be able to be early predictors of AD in the same model mice. Thus, we were interested to see if we might be able to predict which mice might show long-term neurodegenerative effects due to differing degrees of TBI and what level of TBI causes further damage and earlier death to the AD model mice. Upon application of TBIs via an apparatus to effectively induce extremely mild to mild TBIs, wild-type (WT) mice and AD mouse models were tested for cognition, neuromuscular control, olfactory ability, blood biomarkers, and brain imaging. Experiments are currently still in process, and more results are therefore forthcoming. Preliminary data suggest that neuromotor control diminishes as well as olfactory function for both AD and WT mice after the administration of five consecutive mild TBIs. Also, seizure activity increases significantly for both AD and WT after the administration of the five TBI treatment. 
If future data support these findings, important implications about the effect of TBI on those at risk for AD might emerge.
Keywords: Alzheimer's disease, blood biomarker, neurodegeneration, neuromuscular control, olfaction, traumatic brain injury
Procedia PDF Downloads 141
19259 How to Enhance Performance of Universities by Implementing Balanced Scorecard with Using FDM and ANP
Authors: Neda Jalaliyoon, Nooh Abu Bakar, Hamed Taherdoost
Abstract:
The present research recommends a balanced scorecard (BSC) framework to appraise the performance of universities. As the original balanced scorecard model has four perspectives, the same model, with the perspectives "financial", "customer", "internal process", and "learning and growth", is used here as well. By applying the fuzzy Delphi method (FDM) and a questionnaire, sixteen performance measures were identified. Moreover, the weights of the selected indicators were determined using the analytic network process (ANP). Results indicated that the most important BSC perspectives were internal process (0.3149), customer (0.2769), learning and growth (0.2049), and financial (0.2033), respectively. The proposed BSC framework can help universities to enhance their efficiency in a competitive environment.
Keywords: balanced scorecard, higher education, fuzzy Delphi method, analytic network process (ANP)
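The ANP step that produces weights like those above ends with a "limit supermatrix": a column-stochastic matrix of pairwise influences is raised to a high power, and its columns converge to the global priority vector. The sketch below uses a hypothetical 4×4 interdependence matrix over the four BSC perspectives, not the paper's actual judgments.

```python
def matmul(a, b):
    """Plain square-matrix product on nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def anp_limit_priorities(supermatrix, squarings=6):
    """ANP limit-supermatrix sketch: repeatedly square a column-stochastic
    supermatrix; every column converges to the global priority vector."""
    m = supermatrix
    for _ in range(squarings):
        m = matmul(m, m)
    return [row[0] for row in m]  # any column will do after convergence

# Hypothetical column-stochastic interdependence matrix over the four BSC
# perspectives: internal process, customer, learning & growth, financial.
s = [[0.40, 0.30, 0.25, 0.30],
     [0.30, 0.30, 0.25, 0.30],
     [0.15, 0.20, 0.30, 0.20],
     [0.15, 0.20, 0.20, 0.20]]
weights = anp_limit_priorities(s)
print([round(w, 3) for w in weights])
```

With this illustrative matrix the priorities come out ordered like the paper's result (internal process highest, financial lowest), which is purely a property of the example numbers chosen.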
Procedia PDF Downloads 426
19258 Overview of a Quantum Model for Decision Support in a Sensor Network
Authors: Shahram Payandeh
Abstract:
This paper presents an overview of a quantum model that can be used as part of a decision support system when fusing information from a multi-sensor environment. Data fusion has been widely studied over the past few decades, and numerous frameworks have been proposed to facilitate decision-making under uncertainty. Multi-sensor data fusion technology plays an increasingly significant role in people tracking and activity recognition. The paper presents the basic definitions and relationships associating the decision-making process with the quantum model formulation in the presence of uncertainties.
Keywords: quantum model, sensor space, sensor network, decision support
Procedia PDF Downloads 227
19257 GIS Based Public Transport Accessibility of Lahore using PTALs Model
Authors: Naveed Chughtai, Salman Atif, Azhar Ali Taj, Murtaza Asghar Bukhari
Abstract:
Accessible transport systems play a crucial role in infrastructure management and ease of access to destinations; thus, knowledge of service coverage and service-deprived areas is a prerequisite for devising policies. Integrating the PTALs model with GIS network analysis models (service area analysis, closest facility analysis) facilitates the analysis of deprived areas. The models presented in this research determine accessibility. The empirical evidence suggests that the current bus network system caters to only 18.5% of the population. Using the network analysis results as inputs to PTALs, it is seen that the bands with excellent accessibility indices cover a limited area, while 78.8% of the area is totally deprived of any service. To cover the unserved catchment, new route alignments are proposed, keeping in focus the socio-economic characteristics, land-use type, and net population density of the deprived area. With the proposed routes, accessibility shows a 10% increment in service delivery, and the enhancement in terms of served population is up to 20.4%. The PTALs result shows a decrease of 60 km² in the unserved band. The results of this study can be used for planning, transport infrastructure management, allocation of new route alignments in combination with future land-use development, and adequate spatial distribution of service access points.
Keywords: GIS, public transport accessibility, PTALs, accessibility index, service area analysis, closest facility analysis
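For readers unfamiliar with PTALs, the index for a point of interest is computed per route and then combined. A simplified sketch of the TfL-style calculation (the exact walk-speed and reliability parameters used for Lahore are not stated in the abstract, so the values here are the conventional defaults):

```python
def ptal_accessibility_index(walk_minutes, headways_minutes, reliability=2.0):
    """Sketch of the PTAL calculation: for each route, total access time =
    walk time + average wait (half the headway) + a reliability allowance;
    the equivalent doorstep frequency is EDF = 30 / total access time, and
    the accessibility index is the best EDF plus half of each remaining EDF."""
    edfs = sorted(
        (30.0 / (walk + 0.5 * h + reliability)
         for walk, h in zip(walk_minutes, headways_minutes)),
        reverse=True)
    return edfs[0] + 0.5 * sum(edfs[1:])

# Hypothetical point of interest served by two bus routes:
# 5- and 8-minute walks, 10- and 15-minute headways.
ai = ptal_accessibility_index([5, 8], [10, 15])
print(round(ai, 2))  # 3.36
```

The resulting index is then mapped to accessibility bands (poor to excellent), which is what the banded coverage maps in studies like this one display.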
Procedia PDF Downloads 438
19256 Applying Multiplicative Weight Update to Skin Cancer Classifiers
Authors: Animish Jain
Abstract:
This study deals with using multiplicative weight update within artificial intelligence and machine learning to create models that can diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models to try to obtain more accurate results. Logistic regression, convolutional neural network (CNN), and support vector machine classifier (SVMC) models are employed within the multiplicative weight update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns that label unseen scans as either benign or malignant. The models feed a multiplicative weight update algorithm which, through each successive guess, accounts for the precision and accuracy of each model and weights its guess accordingly. These guesses and weights are then combined to obtain the final predictions. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the multiplicative weight update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the logistic regression model 79.09%. Using multiplicative weight update, the algorithm reached an accuracy of 72.27%. The conclusion drawn was that there is a significant difference in accuracy between the three models and the multiplicative weight update system, and that a CNN model would be the better option for this problem. This may be because multiplicative weight update is not effective in a binary setting where there are only two possible classifications.
In a categorical setting with multiple classes and groupings, a multiplicative weight update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer
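The textbook multiplicative weight update scheme the study builds on is easy to state: keep one weight per expert (here, per model), take a weighted vote, and multiplicatively shrink the weight of every expert that was wrong. This sketch uses hypothetical predictions, not the study's actual model outputs or its exact weighting rule.

```python
def mwu_predict(expert_preds, weights):
    """Weighted vote over binary predictions (0 = benign, 1 = malignant)."""
    score = sum(w * (1 if p == 1 else -1) for w, p in zip(weights, expert_preds))
    return 1 if score > 0 else 0

def mwu_update(weights, expert_preds, truth, eta=0.5):
    """Multiplicative weight update: shrink the weight of every expert
    (model) whose guess was wrong on this sample, by a factor (1 - eta)."""
    return [w * (1 - eta) if p != truth else w
            for w, p in zip(weights, expert_preds)]

# Hypothetical stream: three models' guesses and the true label per sample.
weights = [1.0, 1.0, 1.0]  # SVM, CNN, logistic regression
stream = [([1, 1, 0], 1), ([0, 1, 0], 1), ([1, 1, 1], 1), ([0, 0, 1], 0)]
for preds, truth in stream:
    weights = mwu_update(weights, preds, truth, eta=0.5)
print(weights)  # [0.5, 1.0, 0.125]
```

Note how the always-correct second model keeps full weight while the often-wrong third model is rapidly discounted; the ensemble can still underperform its best member when, as the study observed, the experts' errors are correlated in a two-class setting.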
Procedia PDF Downloads 79
19255 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies able to use the large amount and variety of data generated during healthcare services every day. As we read the news, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly because the use of advanced technologies improves both the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for one or more specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological, or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers.
Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract provides an overview of the current regulatory framework, the evolution of international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510(k), PMA, IMDRF, cybersecurity, healthcare systems
Procedia PDF Downloads 89
19254 Communication in a Heterogeneous Ad Hoc Network
Authors: C. Benjbara, A. Habbani
Abstract:
Wireless networks are used more and more in every new technology or feature, especially those without infrastructure (ad hoc mode), which provide a low-cost alternative to infrastructure-mode wireless networks and great flexibility for application domains such as environmental monitoring, smart cities, precision agriculture, and so on. These application domains share a common characteristic: the need for coexistence and intercommunication between modules belonging to different types of ad hoc networks, such as wireless sensor networks, mesh networks, mobile ad hoc networks, and vehicular ad hoc networks. The vision of bringing such heterogeneous networks to life would make many human tasks easier, but its development path is full of challenges. One of these challenges is the communication complexity between components due to the lack of a common or compatible protocol standard. This article proposes a new patented routing protocol based on the OLSR standard in order to resolve the heterogeneous ad hoc network communication issue. The new protocol is applied to a specific network architecture composed of MANET, VANET, and FANET.
Keywords: ad hoc, heterogeneous, ID-Node, OLSR
Procedia PDF Downloads 215
19253 The Efficacy of Psychological Interventions for Psychosis: A Systematic Review and Network Meta-Analysis
Authors: Radu Soflau, Lia-Ecaterina Oltean
Abstract:
Background: Increasing evidence supports the efficacy of psychological interventions for psychosis. However, it is unclear which of these interventions is most likely to address negative psychotic symptoms and related outcomes. We aimed to determine the relative efficacy of psychological and psychosocial interventions for negative symptoms, overall psychotic symptoms, and related outcomes. Methods: To attain this goal, we conducted a systematic review and network meta-analysis. We searched for potentially eligible trials in the PubMed, EMBASE, PsycInfo, Cochrane Central Register of Controlled Trials, and ClinicalTrials.gov databases up until February 08, 2022. We included randomized controlled trials that investigated the efficacy of psychological interventions for adults with psychosis. We excluded interventions for prodromal or "at risk" individuals, as well as patients with serious co-morbid medical or psychiatric conditions (other than depressive and/or anxiety disorders). Two researchers conducted study selection and performed data extraction independently. Analyses were run using the STATA network and mvmeta packages, applying a random-effects model under a frequentist framework in order to compute standardized mean differences or risk ratios. Findings: We identified 47,844 records and screened 29,466 records for eligibility. The majority of eligible interventions were delivered in addition to pharmacological treatment. Treatment as usual (TAU) was the most frequent common comparator. Theoretically driven psychological interventions generally outperformed TAU at post-test and follow-up, displaying small and small-to-medium effect sizes. A similar pattern of results emerged in sensitivity analyses focused on studies that employed an inclusion criterion for relevant negative symptom severity.
Conclusion: While the efficacy of some psychological interventions is promising, there is a need for more high-quality studies, as well as more trials directly comparing psychological treatments for negative psychotic symptoms.
Keywords: psychosis, network meta-analysis, psychological interventions, efficacy, negative symptoms
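The standardized mean differences this meta-analysis pools have a standard form: Cohen's d with a pooled standard deviation, usually with Hedges' small-sample correction. A sketch with hypothetical trial-arm summaries (the review's actual data are in its supplementary material, not the abstract):

```python
import math

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with a pooled SD, plus Hedges' small-sample correction —
    the per-comparison effect size typically fed into a network meta-analysis."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' g factor
    return d * correction

# Hypothetical arms: negative-symptom scores after therapy vs. TAU
# (lower is better, so a negative g favours the intervention).
g = standardized_mean_difference(18.0, 6.0, 50, 21.0, 6.0, 50)
print(round(g, 3))  # -0.496
```

A g around -0.5 sits at the boundary of the "small-to-medium" range the abstract reports for the better-performing interventions.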
Procedia PDF Downloads 103
19252 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory to Irrigation Scheduling Analysis
Authors: Elham Koohikerade, Silvio Jose Gumiere
Abstract:
In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices like Surface Soil Moisture (SSM), Normalized Vegetation Difference Index (NDVI), Enhanced Vegetation Index (EVI), and Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectrometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. 
By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within volumetric soil moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding both under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results: while the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring
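The three vegetation/drought indices used to calibrate the model have standard closed forms, computable directly from surface reflectances. The sketch below uses the common definitions (with the MODIS EVI coefficients); the band values are hypothetical, not taken from the study's imagery.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    """Enhanced Vegetation Index with the standard MODIS coefficients."""
    return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)

def nmdi(nir, swir1, swir2):
    """Normalized Multi-band Drought Index, sensitive to soil and
    vegetation water content via the two SWIR bands."""
    return (nir - (swir1 - swir2)) / (nir + (swir1 - swir2))

# Hypothetical surface reflectances for a healthy vegetated pixel.
print(round(ndvi(0.5, 0.1), 3),
      round(evi(0.5, 0.1, 0.05), 3),
      round(nmdi(0.5, 0.3, 0.2), 3))
```

Per-pixel time series of these indices, alongside the 0-7 cm volumetric soil moisture, are the kind of inputs a sequence model like the LSTM described above would consume.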
Procedia PDF Downloads 41
19251 The Development of Space-Time and Space-Number Associations: The Role of Non-Symbolic vs. Symbolic Representations
Authors: Letizia Maria Drammis, Maria Antonella Brandimonte
Abstract:
The idea that people use space representations to think about time and number has received support from several lines of research. However, how these representations develop in children and then shape space-time and space-number mappings is still a debated issue. In the present study, 40 children (20 pre-schoolers and 20 elementary-school children) performed 4 main tasks, which required the use of more concrete (non-symbolic) or more abstract (symbolic) space-time and space-number associations. In the non-symbolic conditions, children were required to order pictures of everyday-life events occurring in a specific temporal order (Temporal sequences) and of quantities varying in numerosity (Numerical sequences). In the symbolic conditions, they were asked to perform the typical time-to-position and number-to-position tasks by mapping time-related words and numbers onto lines. Results showed that children performed reliably better in the non-symbolic Time conditions than in the symbolic Time conditions, independently of age, whereas only pre-schoolers performed worse in the Number-to-position task (symbolic) as compared to the Numerical sequence (non-symbolic) task. In addition, only older children mapped time-related words onto space following the typical left-right orientation, pre-schoolers’ performance being somewhat mixed. In contrast, mapping numbers onto space showed a clear left-right orientation, independently of age. Overall, these results indicate a cross-domain difference in the way younger and older children process time and number, with time-related tasks being more difficult than number-related tasks only when space-time tasks require symbolic representations.Keywords: space-time associations, space-number associations, orientation, children
Procedia PDF Downloads 336
19250 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk
Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda
Abstract:
Some artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk intended for raw-milk cheese production), the alkaline phosphatase (ALP) assay is currently applied only for pasteurisation, although it is known to have notable limitations for the validation of the ALP enzymatic state in non-bovine milk. It is known that frauds considerably impact customers and certifying institutions, sometimes resulting in damage to the product image and potential economic losses for cheesemaking producers. Robust, validated, and univocal analytical methods are therefore needed to allow food control and security bodies to recognise a potential fraud. In an attempt to develop a new reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy has been applied in the described work. Daily fresh milk was analysed raw (680.00 µL in each 10-mm NMR glass tube) at least in triplicate. Thermally treated samples were also produced by putting each NMR tube of fresh raw milk in water pre-heated at temperatures from 68°C up to 72°C for up to 3 min, with continuous agitation, and quench-cooling it to 25°C in a water and ice bath. Raw and thermally treated samples were analysed in terms of ¹H T2 transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T2 relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T2 of the predominant ¹H population was detected in heat-treated milk as compared to raw milk. The decrease of the T2 parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e. whey protein and casein) arrangement promoted by heat treatment. Furthermore, the experimental data suggest that molecular alterations are strictly dependent on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of extending the TD-NMR technique directly to cheese to develop a method for assessing a fraud related to the use of a milk thermal treatment in PDO raw-milk cheese. Results suggest that TD-NMR assays might pave a new way to the detailed characterisation of heat treatments of milk.Keywords: cheese fraud, milk, pasteurisation, TD-NMR
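The CPMG measurement above amounts to recording an exponential echo-train decay and extracting T2. A minimal single-component sketch (the real analysis uses CONTIN to recover a whole T2 distribution; the T2 value and the noiseless decay here are assumed for illustration only):

```python
import numpy as np

# Synthetic CPMG echo train: S(t) = exp(-t / T2), echo spacing 0.05 ms,
# 8000 points, matching the acquisition parameters in the abstract.
tau = 0.05e-3                      # s, interpulse (echo) spacing
t = tau * np.arange(1, 8001)       # echo times
T2_true = 0.12                     # s, assumed single-component T2
signal = np.exp(-t / T2_true)

# For a mono-exponential decay, log(S) is linear in t with slope -1/T2,
# so an ordinary least-squares fit recovers T2.
slope, _ = np.polyfit(t, np.log(signal), 1)
T2_est = -1.0 / slope
print(round(T2_est, 3))  # → 0.12
```

A shorter recovered T2 in heat-treated milk would show up directly as a steeper slope of this log-decay.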
Procedia PDF Downloads 243
19249 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques
Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian
Abstract:
Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones represent a very high investment of time and money, and their results are often not current. Nowadays, however, in many countries crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and the urban congestion they cause. In this article we identified the zones, roads and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, applying data mining techniques with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as a grouping method. Finally, the results were visualized with the Geographic Information System QGIS.Keywords: data mining, k-means, road traffic accidents, Waze, Weka
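As a sketch of the grouping step, here is k-means applied to geolocated accident reports (Weka is swapped out for a minimal NumPy implementation of Lloyd's algorithm, and the coordinates are purely synthetic, loosely resembling Mexico City latitude/longitude):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical crowdsourced accident reports around two hotspots
pts = np.vstack([rng.normal([19.43, -99.13], 0.01, (100, 2)),
                 rng.normal([19.36, -99.18], 0.01, (100, 2))])

def kmeans(X, k=2, iters=50):
    """Plain k-means: assign each report to the nearest center,
    then move each center to the mean of its assigned reports."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

centers, labels = kmeans(pts)
print(centers.round(2))
```

In the paper's pipeline, EM would first pick the number of clusters k; the recovered centers are the accident hotspots later mapped in QGIS.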
Procedia PDF Downloads 417
19248 An Approach to Secure Mobile Agent Communication in Multi-Agent Systems
Authors: Olumide Simeon Ogunnusi, Shukor Abd Razak, Michael Kolade Adu
Abstract:
Inter-agent communication manager facilitates communication among mobile agents via a message passing mechanism. Until now, all Foundation for Intelligent Physical Agents (FIPA)-compliant agent systems are capable of exchanging messages following the standard format for sending and receiving messages. Previous works tend to secure messages to be exchanged among a community of collaborative agents commissioned to perform specific tasks using cryptosystems. However, that approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The proposed approach to secure agent communication allows only agents that are created by the host agent server to communicate via the agent communication channel provided by the host agent platform. These agents are assumed to be harmless. Therefore, to secure the communication of legitimate agents from intrusion by external agents, a 2-phase policy enforcement system was developed. The first phase constrains the external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent was charged with the task of screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents are running. On arrival of the external agent at the host network server, an introspector agent was charged to monitor and restrain its activities. This approach secures legitimate agent communication from Man-in-the-Middle and replay attacks.Keywords: agent communication, introspective agent, isolation of agent, policy enforcement system
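The two-phase policy can be sketched as plain functions (all names are hypothetical; in the real system these decisions are made by the controller and introspector agents running on the network server, not by standalone code):

```python
# Phase 1: only agents created by the host agent server may migrate to
# the execution host; everything external is held on the network server.
# Phase 2: external agents are confined to their own execution environment.
LOCAL_ORIGIN = "host-agent-server"   # hypothetical identifier

def controller_screen(agent):
    """Controller-agent check at LAN entry: route locally created agents
    to the execution host, confine the rest to the network server."""
    return "execution-host" if agent["origin"] == LOCAL_ORIGIN else "network-server"

def introspector_allow(agent, action):
    """Introspector-agent check: external agents may not message the
    legitimate agents or migrate; they may only run locally or terminate."""
    if agent["origin"] == LOCAL_ORIGIN:
        return True
    return action in {"run-local", "terminate"}

ext = {"name": "visitor", "origin": "remote-host"}
print(controller_screen(ext))                   # → network-server
print(introspector_allow(ext, "send-message"))  # → False
```

Because external agents never reach the agent communication channel, legitimate agents need no per-message cryptography, which is the computational saving the abstract claims.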
Procedia PDF Downloads 297
19247 Application of Smplify-X Algorithm with Enhanced Gender Classifier in 3D Human Pose Estimation
Authors: Jiahe Liu, Hongyang Yu, Miao Luo, Feng Qian
Abstract:
The widespread application of 3D human body reconstruction spans various fields. Smplify-X, an algorithm reliant on single-image input, employs three distinct body parameter templates, necessitating gender classification of individuals within the input image. Researchers employed a ResNet18 network to train a gender classifier within the Smplify-X framework, setting the threshold at 0.9, designating images falling below this threshold as having neutral gender. This model achieved 62.38% accurate predictions and 7.54% incorrect predictions. Our improvement involved refining the MobileNet network, resulting in a raised threshold of 0.97. Consequently, we attained 78.89% accurate predictions and a mere 0.2% incorrect predictions, markedly enhancing prediction precision and enabling more precise 3D human body reconstruction.Keywords: SMPLX, mobileNet, gender classification, 3D human reconstruction
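The confidence-thresholding logic can be sketched as follows (a minimal stand-in for the classifier's output head; the label order and the softmax form are assumptions for illustration, not the paper's code):

```python
import numpy as np

def classify_gender(logits, threshold=0.97):
    """Map 2-class logits to 'female'/'male', falling back to the neutral
    SMPL-X body template when the top softmax probability is below the
    threshold (0.97 per the improved MobileNet classifier)."""
    probs = np.exp(logits) / np.exp(logits).sum()
    if probs.max() < threshold:
        return "neutral"                       # use neutral body template
    return ["female", "male"][int(probs.argmax())]

print(classify_gender(np.array([4.0, 0.0])))   # confident → female
print(classify_gender(np.array([0.6, 0.4])))   # ambiguous → neutral
```

Raising the threshold from 0.9 to 0.97 trades some confident predictions for far fewer wrong ones, which is exactly the shift reported (7.54% → 0.2% incorrect).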
Procedia PDF Downloads 99
19246 Considering the Reliability of Measurements Issue in Distributed Adaptive Estimation Algorithms
Authors: Wael M. Bazzi, Amir Rastegarnia, Azam Khalili
Abstract:
In this paper, we consider the issue of the reliability of measurements in the distributed adaptive estimation problem. To this aim, we assume a sensor network with different observation noise variances among the sensors and propose a new estimation method based on the incremental distributed least mean-square (IDLMS) algorithm. The proposed method contains two phases: I) estimation of each sensor's observation noise variance, and II) estimation of the desired parameter using the estimated observation variances. To deal with the reliability of measurements, in the second phase of the proposed algorithm the step-size parameter is adjusted for each sensor according to its observation noise variance. As our simulation results show, the proposed algorithm considerably improves the performance of the IDLMS algorithm under the same conditions.Keywords: adaptive filter, distributed estimation, sensor network, IDLMS algorithm
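A minimal simulation of the idea, assuming the phase-I noise variances are already known; the weighting rule used here (step size inversely proportional to noise variance) is one plausible reading of the abstract, not the authors' exact formula:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = 2.0                                   # scalar parameter to estimate
noise_var = np.array([0.01, 0.04, 0.25])       # per-sensor noise (phase-I estimate)
mu = 0.1 * noise_var.min() / noise_var         # reliability-weighted step sizes

w = 0.0                                        # estimate passed around the ring
for _ in range(2000):
    for k in range(3):                         # incremental pass over the sensors
        x = rng.normal()                       # regressor sample at sensor k
        d = w_true * x + rng.normal(0.0, np.sqrt(noise_var[k]))
        w += mu[k] * x * (d - w * x)           # local LMS update
print(round(w, 2))
```

The noisiest sensor (variance 0.25) gets a step size 25 times smaller than the cleanest one, so it barely perturbs the shared estimate, which converges close to 2.0.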
Procedia PDF Downloads 634
19245 Overview of Wireless Body Area Networks
Authors: Rashi Jain
Abstract:
Wireless Body Area Networks (WBANs) are an emerging interdisciplinary area in which small sensors are placed on or within the human body. These sensors monitor the physiological activities and vital statistics of the body. The data from these sensors is aggregated and communicated to a remote doctor for immediate attention or to a database for records. On 6 Feb 2012, the IEEE 802.15.6 task group approved the standard for Body Area Network (BAN) technologies. The standard specifies the physical and MAC layers for WBANs. This work provides an introduction to WBANs and an overview of the physical and MAC layers of the standard. The physical layer specifications are covered, and a comparison of different protocols used at the MAC layer is drawn. An introduction to the network layer and the security aspects of WBANs is also given. WBANs suffer certain limitations, such as the regulation of frequency bands, minimizing the effect of transmission and reception of electromagnetic signals on the human body, and maintaining energy efficiency, among others, which have slowed down their implementation.Keywords: vehicular networks, sensors, MicroController 8085, LTE
Procedia PDF Downloads 259
19244 Validation of Solar PV Inverter Harmonics Behaviour at Different Power Levels in a Test Network
Authors: Wilfred Fritz
Abstract:
Grid-connected solar PV inverters need to be compliant with standard regulations regarding unwanted harmonic generation. This paper gives an introduction to harmonics and to solar PV inverter voltage regulation and balancing through compensation, and investigates the behaviour of harmonic generation at different power levels. Practical measurements of harmonics and power levels were made with a power quality data logger on a test network at a university in Germany. The test setup and test results are discussed. The major finding was that between the morning and afternoon load peak windows, when the PV inverters operate under low solar insolation and low power levels, more unwanted harmonics are generated. This has a huge impact on the power quality of the grid as well as on capital and maintenance costs. The design of a single-tuned harmonic filter for harmonic mitigation is presented.Keywords: harmonics, power quality, pulse width modulation, total harmonic distortion
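The quantity being logged, total harmonic distortion, can be computed from a sampled waveform as follows (synthetic inverter current with assumed 5th- and 7th-harmonic levels; a real logger applies the same ratio of harmonic RMS to fundamental RMS):

```python
import numpy as np

fs, f0 = 10_000, 50                    # sample rate (Hz), fundamental (Hz)
t = np.arange(fs) / fs                 # exactly 1 s → 1 Hz FFT bins, no leakage
# Hypothetical inverter current: fundamental + 8% 5th + 5% 7th harmonic
i = (np.sin(2 * np.pi * f0 * t)
     + 0.08 * np.sin(2 * np.pi * 5 * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 7 * f0 * t))

spec = np.abs(np.fft.rfft(i)) / (fs / 2)      # single-sided amplitudes
fund = spec[f0]                               # amplitude at 50 Hz
h = [spec[n * f0] for n in range(2, 11)]      # harmonics 2..10
thd = np.sqrt(sum(a ** 2 for a in h)) / fund
print(round(100 * thd, 1))  # → 9.4  (percent THD)
```

At low power levels the fundamental shrinks while harmonic components persist, which is why this ratio worsens in the morning and afternoon windows the abstract describes.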
Procedia PDF Downloads 239
19243 A One Dimensional Cdᴵᴵ Coordination Polymer: Synthesis, Structure and Properties
Authors: Z. Derikvand, M. Dusek, V. Eigner
Abstract:
A one-dimensional coordination polymer of Cdᴵᴵ based on pyrazine (pz) and 3-nitrophthalic acid (3-nphaH₂), namely poly[[diaqua bis(3-nitro-2-carboxylato-1-carboxylic acid)(µ₂-pyrazine)cadmium(II)] dihydrate], {[Cd(3-nphaH)₂(pz)(H₂O)₂]·2H₂O}ₙ, was prepared and characterized. The asymmetric unit consists of one Cdᴵᴵ center, two (3-nphaH)⁻ anions, two halves of two crystallographically distinct pz ligands, two coordinated and two uncoordinated water molecules. The Cdᴵᴵ cation is surrounded by four oxygen atoms from two (3-nphaH)⁻ anions and two water molecules, as well as two nitrogen atoms from two pz ligands, in a distorted octahedral geometry. A complicated hydrogen-bonding network, accompanied by N–O···π and C–O···π stacking interactions, leads to the formation of a 3D supramolecular network. Commonly, this kind of C–O···π and N–O···π interaction is detected between the electron-rich CO/NO groups of the (3-nphaH)⁻ ligand and the electron-deficient π-system of pyrazine.Keywords: supramolecular chemistry, Cd coordination polymer, crystal structure, 3-nitrophthalic acid
Procedia PDF Downloads 401
19242 A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce
Authors: Jangje Park, So Young Kim
Abstract:
The market demand for audio quality in mobile devices continues to increase, and the audible buzz noise generated in time-division communication is a chronic problem that goes against this demand. In time-division communication, the RF Power Amplifier (RF PA) is driven at the audio-frequency cycle, which affects the audio signal in various ways. In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network of the RF PA at the audio frequency; it was confirmed that this noise is the cause of the audible buzz noise during a call. In addition, a grounding method for the microphone device that can improve the buzz noise is proposed. Considering that the level of the audio signal generated by the microphone device is -38 dBV at 94 dB Sound Pressure Level (SPL), even a ground bounce noise of several hundred µV will fall within the range of audible noise if it is coupled into the audio amplifier. With the grounding method of the microphone device proposed in this paper, it was confirmed that the audible buzz noise power density at the RF PA driving frequency was improved by more than 5 dB under the conditions of the Printed Circuit Board (PCB) used in the experiment. A fundamental improvement method is thus presented for the buzz noise during a mobile phone call.Keywords: audio frequency, buzz noise, ground bounce, microphone grounding
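A back-of-the-envelope check of why a few hundred µV of ground bounce matters (the -38 dBV sensitivity at 94 dB SPL is from the abstract; the 300 µV bounce amplitude is an assumed example):

```python
import math

mic_dbv = -38.0                       # mic output level at 94 dB SPL (abstract)
mic_v = 10 ** (mic_dbv / 20)          # dBV → volts
bounce_v = 300e-6                     # hypothetical ground-bounce amplitude

# Bounce level relative to the speech signal at the amplifier input
rel_db = 20 * math.log10(bounce_v / mic_v)
print(round(mic_v * 1e3, 1), round(rel_db, 1))  # → 12.6 -32.5
```

A disturbance only ~32 dB below normal speech is well above typical audibility margins, so even sub-millivolt bounce produces a clearly audible buzz unless the microphone ground path keeps it out of the amplifier.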
Procedia PDF Downloads 136
19241 Anti-Corruption, an Important Challenge for the Construction Industry!
Authors: Ahmed Stifi, Sascha Gentes, Fritz Gehbauer
Abstract:
The construction industry is perhaps one of the oldest industries in the world. Ancient monuments like the Egyptian pyramids, the Greek and Roman temples such as the Parthenon and the Pantheon, robust bridges, old Roman theatres, citadels and many more are the best testament to that. The industry also has a symbiotic relationship with other industries: the heavy engineering industry provides construction machinery, the chemical industry develops innovative construction materials, the finance sector provides funding solutions for complex construction projects, and many more. The construction industry is not only mammoth but also very complex in nature. Because of this complexity, the construction industry is prone to various tribulations which may hamper its growth. A comparative study of this industry with others shows that it is associated with a state of tardiness and delay, especially when we focus on the managerial aspects and the study of the triple constraint (time, cost and scope). While some institutes cite the complexity associated with it as a major reason, others, like the lean construction community, refer to the wastes produced across the construction process as the prime reason. This paper introduces corruption as one of the prime factors for such delays. To support this, many international reports and studies are available depicting that the construction industry is one of the most corrupt sectors worldwide, and that corruption can take place throughout the project cycle, comprising project selection, planning, design, funding, pre-qualification, tendering, execution, operation and maintenance, and even the reconstruction phase. It also happens in many forms, such as bribery, fraud, extortion, collusion, embezzlement and conflict of interest. As a solution to cope with corruption in the construction industry, the paper introduces integrity as a key factor and builds a new integrity framework to develop and implement an integrity management system for construction companies and construction projects.Keywords: corruption, construction industry, integrity, lean construction
Procedia PDF Downloads 377
19240 Omni: Data Science Platform for Evaluate Performance of a LoRaWAN Network
Authors: Emanuele A. Solagna, Ricardo S. Tozetto, Roberto dos S. Rabello
Abstract:
Nowadays, physical processes are becoming digitized through the evolution of communication, sensing and storage technologies, which promotes the development of smart cities. The evolution of this technology has generated multiple challenges related to the generation of big data and the active participation of electronic devices in society. Thus, devices can send information that is captured and processed over large areas, but there is no guarantee that all the obtained data will be effectively stored and correctly persisted, because, depending on the technology used, there are parameters that have a huge influence on the full delivery of information. This article aims to characterize the project, currently under development, of a platform that, based on data science, will evaluate the performance and effectiveness of an industrial network implementing LoRaWAN technology, considering its main configuration parameters and relating these parameters to information loss.Keywords: Internet of Things, LoRa, LoRaWAN, smart cities
Procedia PDF Downloads 148
19239 The Network Relative Model Accuracy (NeRMA) Score: A Method to Quantify the Accuracy of Prediction Models in a Concurrent External Validation
Authors: Carl van Walraven, Meltem Tuna
Abstract:
Background: Network meta-analysis (NMA) quantifies the relative efficacy of 3 or more interventions from studies containing a subgroup of interventions. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct inclusion criteria that are evaluated on a common population (‘concurrent external validation’). Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that accounted for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which ranges from -∞ through 0 (the accuracy of random guessing) to 1 (the accuracy of the most accurate model in the concurrent external validation). Results: The unbiased prediction model had the highest SBS. The NeRMA score correctly ranked all simulated prediction models by the extent of their bias from the known risk function. A SAS macro and an R function were created to implement the NeRMA Score. Conclusions: The NeRMA Score makes it possible to quantify the accuracy of binomial prediction models having distinct inclusion criteria in a concurrent external validation.Keywords: prediction model accuracy, scaled brier score, fixed effects methods, concurrent external validation
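The Scaled Brier Score used as the accuracy measure can be sketched as follows (assuming the common scaling against the prevalence-only Brier score; the event vector and predictions below are hypothetical):

```python
import numpy as np

def scaled_brier(y, p):
    """Scaled Brier Score: 1 - Brier/Brier_ref, where Brier_ref is the
    score of always predicting the event prevalence. 1 = perfect,
    0 = no better than the prevalence, negative = worse."""
    brier = np.mean((p - y) ** 2)
    prev = y.mean()
    return 1.0 - brier / (prev * (1.0 - prev))

y = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # observed events
good = np.array([0.1, 0.2, 0.8, 0.9, 0.7, 0.3, 0.8, 0.2]) # a well-calibrated model
print(round(scaled_brier(y, good), 2))  # → 0.82
```

The NeRMA Score then rescales such per-model SBS values across the network of models so that 1 marks the most accurate model in the concurrent external validation.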
Procedia PDF Downloads 236