Search results for: least mean square algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5022


582 A Context Aware Mobile Learning System with a Cognitive Recommendation Engine

Authors: Jalal Maqbool, Gyu Myoung Lee

Abstract:

Using smart devices for context aware mobile learning is becoming increasingly popular, and mobile learning technology has become an indispensable part of today's learning environments and platforms. However, some fundamental issues remain: mobile learning still lacks the ability to truly understand human reaction and user behaviour. This is because current mobile learning systems are passive, unaware of learners' changing contextual situations, and reliant on static information about mobile learners. In addition, current mobile learning platforms lack the capability to incorporate dynamic contextual situations into learners' preferences. This thesis therefore aims to address these issues by designing a context aware framework which can sense a learner's contextual situation, handle data dynamically, and use contextual information to suggest bespoke learning content according to the learner's preferences. This is underpinned by a robust recommendation system capable of performing these functions, providing learners with a truly context-aware mobile learning experience that delivers learning content on smart devices and adapts to learning preferences as and when required. The design of the recommendation engine's algorithm must be based on learner and application needs, personal characteristics and circumstances, and an understanding of human cognitive processes, enabling the technology to interact effectively and deliver mobile learning content that is relevant to the learner's contextual situation. The concept of the proposed project is to provide a new method of smart learning, based on a capable recommendation engine that supplies an intuitive mobile learning model driven by learner actions.
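
As a concrete (and entirely hypothetical) illustration of the kind of scoring such an engine might perform, the sketch below ranks content by blending a learner's static preferences with a dynamic context vector; all field names, tags, and weights are invented, not taken from the proposed system.

```python
# Hypothetical sketch: rank learning content by combining static learner
# preferences with a dynamic context vector. Names and weights are invented.

def recommend(contents, preferences, context, context_weight=0.5):
    """Score each content item by preference match plus context match."""
    def score(item):
        pref = sum(preferences.get(tag, 0.0) for tag in item["tags"])
        ctx = sum(context.get(req, 0.0) for req in item["context_fit"])
        return (1 - context_weight) * pref + context_weight * ctx
    return sorted(contents, key=score, reverse=True)

contents = [
    {"id": "video_lecture", "tags": ["video"], "context_fit": ["quiet", "wifi"]},
    {"id": "audio_podcast", "tags": ["audio"], "context_fit": ["commuting"]},
]
preferences = {"video": 0.9, "audio": 0.4}   # static profile
context = {"commuting": 1.0}                 # sensed: learner is on the move
ranked = recommend(contents, preferences, context)
```

Even though the learner statically prefers video, the sensed commuting context pushes the audio item to the top, which is the adaptive behaviour the abstract describes.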

Keywords: aware, context, learning, mobile

Procedia PDF Downloads 236
581 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance, which has two gains that are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MaRV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near optimal terminal guidance solution. The study finds that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, so the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent, the other being related to it through a simple straight-line expression. To reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is presented in this paper.
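
The parametric study described above can be sketched as a grid sweep over the two gains. The cost surrogate below is a toy stand-in for the 3DOF trajectory simulation (its minimum is placed on the line k2 = 2·k1 to mimic the reported straight-line relationship between the gains); only the sweep-and-filter mechanics reflect the paper.

```python
# Sketch of a parametric study: sweep two guidance gains on a grid, evaluate
# miss distance and impact-angle error, and collect the permissible region
# where both constraints are met. toy_metrics is NOT the real simulation.
import itertools

def toy_metrics(k1, k2):
    # placeholder for a 3DOF trajectory run; minimum lies near k2 = 2*k1
    miss = abs(k2 - 2.0 * k1) + 0.1
    angle_err = abs(k1 - 3.0) * 0.5
    return miss, angle_err

permissible = []
for k1, k2 in itertools.product([2.0, 3.0, 4.0], [4.0, 6.0, 8.0]):
    miss, angle_err = toy_metrics(k1, k2)
    if miss < 0.5 and angle_err < 0.6:          # both constraints satisfied
        permissible.append((k1, k2))
```

With this surrogate, every admissible pair lands on the straight line k2 = 2·k1, illustrating how one gain can be treated as dependent on the other.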

Keywords: Marv guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 412
580 Understanding the Diversity of Antimicrobial Resistance among Wild Animals, Livestock and Associated Environment in a Rural Ecosystem in Sri Lanka

Authors: B. M. Y. I. Basnayake, G. G. T. Nisansala, P. I. J. B. Wijewickrama, U. S. Weerathunga, K. W. M. Y. D. Gunasekara, N. K. Jayasekera, A. W. Kalupahana, R. S. Kalupahana, A. Silva- Fletcher, K. S. A. Kottawatta

Abstract:

Antimicrobial resistance (AMR) has attracted significant attention worldwide as an emerging threat to public health. Understanding the role of livestock and wildlife, together with the shared environment, in the maintenance and transmission of AMR is of utmost importance for combating the issue in a one health approach, given their interactions with humans. This study aims to investigate the extent of AMR distribution among wild animals, livestock, and the environment cohabiting in a rural ecosystem in Sri Lanka: Hambegamuwa. A one square km area at Hambegamuwa was mapped using GPS as the sampling area. The study was conducted over a period of five months from November 2020. Voided fecal samples were collected from 130 wild animals and 123 livestock (buffalo, cattle, chicken, and turkey), along with 36 soil and 30 water samples associated with livestock and wildlife. From the samples, Escherichia coli (E. coli) was isolated, and the AMR profiles were investigated for 12 antimicrobials using the disk diffusion method following the CLSI standard. Seventy percent (91/130) of wild animal, 93% (115/123) of livestock, 89% (32/36) of soil, and 63% (19/30) of water samples were positive for E. coli. A maximum of two E. coli isolates from each sample, 467 in total, were tested for sensitivity, of which 157, 208, 62, and 40 were from wild animals, livestock, soil, and water, respectively. The highest resistance in E. coli from livestock (13.9%) and wild animals (13.3%) was to ampicillin, followed by streptomycin. Apart from that, E. coli from livestock and wild animals revealed resistance mainly against tetracycline, cefotaxime, trimethoprim/sulfamethoxazole, and nalidixic acid at levels below 10%. Ten cefotaxime-resistant E. coli were recovered from wild animals, including four elephants, two land monitors, a pigeon, a spotted dove, and a monkey, which was a significant finding. E. coli from soil samples showed resistance primarily against ampicillin, streptomycin, and tetracycline, at levels lower than in livestock and wildlife. Two water samples had cefotaxime-resistant E. coli as the only resistant isolates out of the 30 water samples tested. Of the total E. coli isolates, 6.4% (30/467) were multi-drug resistant (MDR), comprising 18, 9, and 3 isolates from livestock, wild animals, and soil, respectively. Among the 18 livestock MDRs, the largest share (13/18) was from poultry. The nine wild animal MDRs were from spotted dove, pigeon, land monitor, and elephant. Based on CLSI criteria, 60 E. coli isolates, of which 40, 16, and 4 were from livestock, wild animals, and the environment, respectively, were screened for Extended Spectrum β-Lactamase (ESBL) production. Despite being a rural ecosystem, AMR and MDR are prevalent, even if at low levels. E. coli from livestock, wild animals, and the environment reflected a similar spectrum of AMR, with ampicillin, streptomycin, tetracycline, and cefotaxime being the predominant antimicrobials of resistance. Wild animals may have acquired AMR via direct contact with livestock or via the environment, as antimicrobials are rarely used in wild animals. A source attribution study, including the effects of the natural environment, can be proposed, as the presence of AMR even in this less contaminated rural ecosystem is alarming.

Keywords: AMR, Escherichia coli, livestock, wildlife

Procedia PDF Downloads 198
579 Development of a Geomechanical Risk Assessment Model for Underground Openings

Authors: Ali Mortazavi

Abstract:

The main objective of this research project is to examine the multitude of geomechanical risks associated with the various mining methods employed in the underground mining industry. Controlling geotechnical design parameters and operational factors affecting the selection of a suitable mining technique for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of rock masses in-situ, the complex boundary conditions, and the operational complexities associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is a constant threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major component of all underground mine designs and essentially governs the safety of an underground mine. Given the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the rock mass in-situ conditions. The focus of this research is on developing a methodology that enables geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., rock in-situ properties, design method, and governing boundary conditions such as in-situ stress and groundwater).

Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering

Procedia PDF Downloads 134
578 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks

Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin

Abstract:

Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many current techniques for mammogram analysis focus on a single view (mediolateral or craniocaudal), while in clinical practice radiologists consider multiple views of mammograms from both breasts to reach a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes three mammographic views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of attending to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted on the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the method can provide a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help to alleviate the burden on radiologists and serve as an additional layer of verification.
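
Conceptually, the Siamese arrangement means one shared embedding is applied to all three views before pooling. The NumPy sketch below illustrates only that weight-sharing structure, with random weights standing in for a trained CNN backbone; the pooling choice and dimensions are assumptions, not the authors' architecture.

```python
# Weight-sharing sketch of a tri-view Siamese scorer: the SAME embedding is
# applied to the main, ipsilateral, and contralateral views, the embeddings
# are mean-pooled, and a logistic head emits a score. Weights are random.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))        # shared embedding weights (all branches)
w_out = rng.normal(size=8)          # scoring head

def embed(view):
    return np.tanh(W @ view)        # identical weights => Siamese branches

def score(main, ipsilateral, contralateral):
    pooled = np.mean([embed(main), embed(ipsilateral), embed(contralateral)], axis=0)
    return 1.0 / (1.0 + np.exp(-w_out @ pooled))   # score in (0, 1)

views = [rng.normal(size=16) for _ in range(3)]    # stand-ins for images
p = score(*views)
```

Because the branches share weights and the pooling is symmetric, the score is unchanged if the two auxiliary views are swapped, which is the structural property that lets one network compare views.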

Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network

Procedia PDF Downloads 121
577 Vibration Analysis of Stepped Nanoarches with Defects

Authors: Jaan Lellep, Shahid Mubasshar

Abstract:

A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modeled by Eringen's non-local theory of elasticity. Physical and thermal properties are sensitive to changes of dimensions at the nano level, and the classical theory of elasticity is unable to describe such changes in material properties, because it was developed without consideration of the molecular structure of matter. Therefore, the non-local theory of elasticity, which has been accepted by many researchers, is applied to study the vibration of nanostructures. In the non-local theory of elasticity, the stress state of the body at a given point is assumed to depend on the stress state at every point of the structure, whereas in the classical theory it depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations and constitutive equations, with boundary and intermediate conditions. The system is solved using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations, which has a nontrivial solution only if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is described with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software, and the effects of various physical and geometrical parameters are recorded and presented graphically.
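
The vanishing-determinant condition turns eigenfrequency determination into a root-finding problem. A minimal sketch, using a toy coefficient matrix whose determinant reduces to sin(k) (so the roots fall at multiples of π, as for a simply supported member), scans for sign changes and refines each bracket by bisection; the real algorithm would assemble the full matrix from the boundary and intermediate conditions.

```python
# Sketch of the eigenfrequency search: evaluate the determinant of the
# coefficient matrix over a frequency grid, bracket each sign change, and
# refine by bisection. det_coeff_matrix is a toy stand-in with det = sin(k).
import math

def det_coeff_matrix(k):
    # toy 2x2 matrix [[sin k, 0], [c, 1]] -> determinant is sin(k)
    return math.sin(k) * 1.0 - 0.0 * 1.0

def find_roots(lo, hi, steps=2000, tol=1e-10):
    roots, prev_k, prev_d = [], lo, det_coeff_matrix(lo)
    for i in range(1, steps + 1):
        k = lo + (hi - lo) * i / steps
        d = det_coeff_matrix(k)
        if prev_d * d < 0.0:                     # sign change: root bracketed
            a, b = prev_k, k
            while b - a > tol:                   # bisection refinement
                m = 0.5 * (a + b)
                if det_coeff_matrix(a) * det_coeff_matrix(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        prev_k, prev_d = k, d
    return roots

roots = find_roots(0.1, 10.0)   # expect roots near pi, 2*pi, 3*pi
```
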

Keywords: crack, nanoarches, natural frequency, step

Procedia PDF Downloads 120
576 Fatigue Life Prediction under Variable Loading Based on a Non-Linear Energy Model

Authors: Aid Abdelkrim

Abstract:

A method of fatigue damage accumulation based upon energy parameters of the fatigue process is proposed in the paper. The model is simple to use: it has no parameters to be determined and requires only knowledge of the W–N curve (W: strain energy density; N: number of cycles at failure) determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear models in estimating the fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy has been studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model suited in particular to random loading. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the 'damaged stress interaction damage rule' proposed here allows better fatigue damage prediction than the widely used Palmgren–Miner rule, and that the formula derived for random fatigue can be used to predict fatigue damage and fatigue lifetime very easily. The results obtained by the model are compared with the experimental results and with those calculated by the most widely used fatigue damage model (Miner's model). The comparison shows that the proposed model gives a good estimate of the experimental results, with a smaller error than Miner's model.
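
For reference, the Palmgren–Miner baseline the paper benchmarks against fits in a few lines: damage from each loading block is n_i/N_i, with N_i read from a toy Basquin-style curve (the constants below are illustrative, not the 6082-T6 values). Note that the Miner sum is independent of block order, which is precisely the sequence effect the proposed nonlinear model is designed to capture.

```python
# Palmgren-Miner linear damage accumulation. cycles_to_failure uses a toy
# Basquin-style life curve N = C / load^m; constants are illustrative only.

def cycles_to_failure(load):
    C, m = 1e9, 3.0
    return C / load ** m

def miner_damage(blocks):
    """blocks: list of (load_level, applied_cycles); failure when sum >= 1."""
    return sum(n / cycles_to_failure(s) for s, n in blocks)

# two-level high-low loading sequence
blocks = [(100.0, 500.0), (50.0, 4000.0)]
D = miner_damage(blocks)
failed = D >= 1.0
```

Here each block consumes half the life (500/1000 and 4000/8000), so D = 1.0 exactly at the predicted failure point; reversing the block order leaves D unchanged, which is why sequence-sensitive data deviates from this rule.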

Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading

Procedia PDF Downloads 384
575 Perils' Environment of an Energetic Infrastructure Complex System: Modelling by Crisis Situation Algorithms

Authors: Jiří F. Urbánek, Alena Oulehlová, Hana Malachová, Jiří J. Urbánek Jr.

Abstract:

Crisis situation investigation and modelling are introduced and carried out within the complex system of energetic critical infrastructure operating in perils' environments. Every crisis situation and peril has its origin in the occurrence of an emergency or crisis event, and both require critical/crisis interface assessment. The emergency event may be expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities for coping with it; or it may be unexpected, without a pre-prepared scenario. Both, however, need operational coping by means of crisis management. The operation, forms, characteristics, behaviour and utilization of crisis management have various qualities, depending on the actual perils of the critical infrastructure organization and on its prevention and training processes. The aim is always better security and continuity of the organization, and achieving it requires finding and investigating critical/crisis zones and functions in models of the critical infrastructure organization operating in the pertinent perils environment. Our DYVELOP (Dynamic Vector Logistics of Processes) method is available for this purpose. It is necessary to derive and create an identification algorithm for critical/crisis interfaces, whose locations flag crisis situations in models of the critical infrastructure organization. The model of a crisis situation is then displayed for a real organization of a Czech energetic critical infrastructure subject in a real peril environment. These measures are necessary for infrastructure protection and will be derived for peril mitigation, crisis situation coping, and environmentally friendly organizational survival, continuity, and advanced possibilities of sustainable development.

Keywords: algorithms, energetic infrastructure complex system, modelling, perils' environment

Procedia PDF Downloads 391
574 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm

Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy

Abstract:

IoT networks today solve various consumer problems, from home automation systems to aiding the driving of autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required by certain applications. Data management in fog nodes is strenuous because fog networks are dynamic in terms of availability and hardware capability, and it becomes more challenging when the nodes in the network are also short-lived, detaching and joining frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data stored in the fog devices, as a conventional static way of managing the data does not work in fog networks. The proposed solution is a protocol that defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog nodes through the application of reinforcement learning, so that access to the data is determined dynamically based on the requests.
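
As a purely hypothetical illustration of the sensitivity-level idea (the paper's protocol sets such policies dynamically via reinforcement learning rather than with fixed rules like these), a fog node might gate access and choose a replication factor as follows:

```python
# Hypothetical static policy sketch: each record carries a sensitivity level,
# a node grants access only when the requester's clearance covers it, and
# replication across fog nodes shrinks as sensitivity grows.

SENSITIVITY_LEVELS = {"public": 1, "internal": 2, "restricted": 3}

def can_access(record_level, requester_clearance):
    return requester_clearance >= record_level

def replica_count(record_level, n_nodes):
    # replicate widely when public, narrowly when restricted
    return max(1, n_nodes // record_level)

level = SENSITIVITY_LEVELS["restricted"]
granted = can_access(level, requester_clearance=2)   # internal-only requester
replicas = replica_count(level, n_nodes=9)
```

A learned policy would replace these fixed thresholds with decisions conditioned on the observed request stream and node churn.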

Keywords: IoT, fog networks, data stewardship, dynamic access policy

Procedia PDF Downloads 47
573 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. It is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel; this reduces the uncertainties of the propagation model used in the inverse problem. To reduce the boundary layer noise, a cleaning algorithm is presented that takes advantage of the low-rank and sparse structure of the cross-spectral matrices of the acoustic signal and the boundary layer noise. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from the beamforming and DAMAS algorithms.
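
The low-rank-plus-sparse cleaning step can be sketched with a crude alternating scheme, a simplified stand-in for robust PCA rather than the authors' algorithm: the low-rank "acoustic" part is re-estimated by truncated SVD and the sparse boundary-layer part by hard thresholding of the residual. The matrices below are synthetic.

```python
# Toy low-rank + sparse separation: M = L_true (rank-1 "acoustic" part)
# + S_true (one large "boundary layer" entry). Alternate a rank-1 SVD step
# with a hard-thresholding step until the two parts separate.
import numpy as np

L_true = np.ones((20, 20))            # rank-1 acoustic cross-spectral matrix
S_true = np.zeros((20, 20))
S_true[3, 7] = 8.0                    # sparse boundary-layer contamination
M = L_true + S_true

L = np.zeros_like(M)
S = np.zeros_like(M)
for _ in range(20):
    # low-rank step: best rank-1 approximation of the sparse-corrected matrix
    U, sv, Vt = np.linalg.svd(M - S)
    L = sv[0] * np.outer(U[:, 0], Vt[0])
    # sparse step: keep only residual entries that stick out
    R = M - L
    S = np.where(np.abs(R) > 2.0, R, 0.0)

err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
```

After a few iterations the sparse estimate concentrates on the contaminated entry and the low-rank estimate recovers the acoustic part; proper robust PCA replaces the fixed threshold with a principled penalty.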

Keywords: acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation

Procedia PDF Downloads 320
572 Clustering for Detection of the Population at Risk of Anticholinergic Medication

Authors: A. Shirazibeheshti, T. Radwan, A. Ettefaghian, G. Wilson, C. Luca, Farbod Khanizadeh

Abstract:

Anticholinergic medication has been associated with adverse events such as falls, delirium, and cognitive impairment in older patients. To assess this, anticholinergic burden scores have been developed to quantify the risk. A risk model based on clustering was deployed in a healthcare management system to group patients into multiple risk groups according to the anticholinergic burden scores of the medicines prescribed to them, in order to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients' prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. The study was conducted on over 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. To further evaluate the performance of the model, associations between the average risk score within each group and other factors, such as socioeconomic status (i.e., the Index of Multiple Deprivation) and an index of health and disability, were investigated. The clustering identified a group of 15 patients at the highest risk from multiple anticholinergic medications. Our findings also show that this group of patients is located within more deprived areas of London compared with the population of the other risk groups. Furthermore, the prescription of anticholinergic medicines is skewed more toward female than male patients, indicating that females are at greater risk from this kind of multiple medication. The risk may be monitored and controlled in suitably artificial-intelligence-equipped healthcare management systems.
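
A minimal one-dimensional mean-shift sketch of the risk-grouping step: each patient's weighted burden score is repeatedly shifted to the local mean of scores within a bandwidth until it settles on a mode, and patients sharing a mode form a risk group. The scores below are invented for illustration.

```python
# 1-D mean-shift with a flat (top-hat) kernel: shift each point to the mean
# of its neighbours within `bandwidth` until convergence; converged modes
# define the clusters (risk groups).

def mean_shift_1d(scores, bandwidth=1.0, iters=50):
    modes = []
    for x in scores:
        for _ in range(iters):
            window = [s for s in scores if abs(s - x) <= bandwidth]
            x = sum(window) / len(window)          # shift to local mean
        modes.append(round(x, 3))
    centers = sorted(set(modes))
    labels = [centers.index(m) for m in modes]     # group label per patient
    return centers, labels

# a low-risk cluster around 1 and a high-risk cluster around 9
scores = [0.5, 1.0, 1.5, 8.5, 9.0, 9.5]
centers, labels = mean_shift_1d(scores)
```

Unlike k-means, the number of groups is not fixed in advance; it emerges from the bandwidth, which is why mean-shift suits exploratory risk stratification.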

Keywords: anticholinergic medicines, clustering, deprivation, socioeconomic status

Procedia PDF Downloads 194
571 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review

Authors: Faisal Muhibuddin, Ani Dijah Rahajoe

Abstract:

This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research investigates various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and their evolutionary dynamics, the study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, and K-nearest neighbour methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research: a detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications of crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
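
For concreteness, here is a toy Naïve Bayes classifier of the kind the reviewed studies apply to crime records: categorical features are assumed conditionally independent given the crime category. The records and feature names below are invented purely to show the mechanics.

```python
# Categorical Naive Bayes with Laplace smoothing on invented crime records.
from collections import Counter, defaultdict

records = [
    ({"time": "night", "area": "downtown"}, "theft"),
    ({"time": "night", "area": "downtown"}, "theft"),
    ({"time": "day", "area": "suburb"}, "fraud"),
    ({"time": "day", "area": "downtown"}, "fraud"),
]

priors = Counter(label for _, label in records)
cond = defaultdict(Counter)                  # (label, feature) -> value counts
for feats, label in records:
    for f, v in feats.items():
        cond[(label, f)][v] += 1

def classify(feats):
    def score(label):
        p = priors[label] / len(records)
        for f, v in feats.items():
            counts = cond[(label, f)]
            # Laplace smoothing over the observed value set plus v
            p *= (counts[v] + 1) / (sum(counts.values()) + len(set(counts) | {v}))
        return p
    return max(priors, key=score)

pred = classify({"time": "night", "area": "downtown"})
```
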

Keywords: data mining, classification algorithm, naïve bayes, k-means clustering, k-nearest neighbor, crime, data analysis, systematic literature review

Procedia PDF Downloads 53
570 Potential Impacts of Climate Change on Hydrological Droughts in the Limpopo River Basin

Authors: Nokwethaba Makhanya, Babatunde J. Abiodun, Piotr Wolski

Abstract:

Climate change may intensify hydrological droughts and reduce water availability in river basins. Despite this, most research on climate change effects in southern Africa has focused exclusively on meteorological droughts. This study projects the potential impact of climate change on the future characteristics of hydrological droughts in the Limpopo River Basin (LRB). The study uses regional climate model (RCM) simulations (from the Coordinated Regional Climate Downscaling Experiment, CORDEX) and a set of hydrological simulations (using the Soil and Water Assessment Tool Plus model, SWAT+) to project the impacts at four global warming levels (GWLs: 1.5℃, 2.0℃, 2.5℃, and 3.0℃) under the RCP8.5 future climate scenario. The SWAT+ model was calibrated and validated with a streamflow dataset observed over the basin, and the sensitivity of the model parameters was investigated. The performance of the SWAT+ LRB model was verified using the Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS), root mean square error (RMSE), and coefficient of determination (R²). The Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI) were used to detect meteorological droughts. The Soil Water Index (SSI) was used to define agricultural drought, while the Water Yield Drought Index (WYLDI), the Surface Run-off Index (SRI), and the Streamflow Index (SFI) were used to characterise hydrological drought. The performance of the SWAT+ model simulations over the LRB is sensitive to the parameters CN2 (initial SCS runoff curve number for moisture condition II) and ESCO (soil evaporation compensation factor). The best simulation generally performed better during the calibration period than the validation period: in the calibration and validation periods, NSE is ≤ 0.8, while PBIAS is ≥ −80.3%, RMSE ≥ 11.2 m³/s, and R² ≤ 0.9. The simulations project a future increase in temperature and potential evapotranspiration over the basin, but no significant future trend in precipitation and hydrological variables. However, the spatial distribution of precipitation reveals a projected increase in precipitation in the southern part of the basin and a decline in the northern part, with the region of reduced precipitation projected to grow with increasing GWLs. A decrease in all hydrological variables is projected over most parts of the basin, especially the eastern part. The simulations predict that meteorological droughts (i.e., SPEI and SPI), agricultural droughts (i.e., SSI), and hydrological droughts (i.e., WYLDI and SRI) will become more intense and severe across the basin. SPEI-drought has a greater magnitude of increase than SPI-drought, with agricultural and hydrological droughts lying between the two. As a result, this research suggests that future hydrological droughts over the LRB could be more severe than the SPI-drought projection predicts but less severe than the SPEI-drought projection. This research can be used to mitigate the effects of potential climate change on hydrological drought in the basin.
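
The goodness-of-fit measures named above follow standard definitions, shown here on a toy observed/simulated streamflow pair. Note that sign conventions for PBIAS vary between references; obs − sim is used here.

```python
# Standard hydrological goodness-of-fit metrics on a toy streamflow series.
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias of the simulation (obs - sim convention)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

obs = [10.0, 20.0, 30.0, 40.0]   # observed streamflow (e.g., m^3/s)
sim = [12.0, 18.0, 33.0, 39.0]   # simulated streamflow
```

For this pair NSE = 0.964, PBIAS = −2% (a slight over-simulation under this convention), and RMSE ≈ 2.12 m³/s.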

Keywords: climate change, CORDEX, drought, hydrological modelling, Limpopo River Basin

Procedia PDF Downloads 118
569 Improved Food Security and Alleviation of Cyanide Intoxication through Commercialization and Utilization of Cassava Starch by Tanzania Industries

Authors: Mariam Mtunguja, Henry Laswai, Yasinta Muzanilla, Joseph Ndunguru

Abstract:

Starchy tuberous roots of cassava provide food for people and also find application in various industries. Recently, concentrated research efforts have sought to fully exploit its potential as a sustainable multipurpose crop. High starch yield is the key trait for commercial cassava production for the starch industries, but the cyanide present in cassava roots poses a health challenge to the use of cassava for food. Farming communities where cassava is a staple food prefer bitter (highly cyanogenic) varieties as protection from predators and thieves; as a result, food-insecure farmers prefer growing bitter cassava, and this has led to cyanide intoxication in these communities. Cassava farmers could benefit from marketing cassava to starch producers, thereby improving their income and food security: increased income would decrease their dependency on cassava as a staple food and enable them to afford other food sources. To achieve this, adequate information is required on the right cassava cultivars and the appropriate harvesting period so as to maximize cassava production and profitability. This study aimed at identifying suitable cassava cultivars and the optimum time of harvest to maximize starch production. Six commonly grown cultivars were identified and planted in a randomized complete block design, and further analysis was done to assess variation in physicochemical characteristics, starch yield and cyanogenic potential across three environments. The analysis showed that there are differences in physicochemical characteristics between landraces (p ≤ 0.05), which can therefore be targeted to different industrial applications. Among landraces, dry matter (30-39%), amylose (11-19%), starch (74-80%) and reducing sugar content (1-3%) varied when expressed on a dry weight basis (p ≤ 0.05); however, only one of the six genotypes differed in crystallinity and mean starch granule particle size, while glucan chain distribution and granule morphology were the same. In contrast, the starch functionality features measured, namely swelling power, solubility, syneresis, and digestibility, differed (p ≤ 0.05). This was supported by partial least squares discriminant analysis (PLS-DA), which highlighted the divergence among the cassavas based on starch functionality, permitting suggestions for the targeted uses of these starches in diverse industries. The study also illustrated genotypic differences in starch yield and cyanogenic potential. Among landraces, Kiroba showed the maximum starch yield (12.8 t ha-1), followed by Msenene (12.3 t ha-1) and Kilusungu (10.2 t ha-1). The cyanide content of the cassava landraces was between 15 and 800 ppm across all trial sites. GGE biplot analysis further confirmed that Kiroba was a superior cultivar in terms of starch yield. Kilusungu had the highest cyanide content and an average starch yield, and can therefore also be suitable for starch production.

Keywords: cyanogen, cassava starch, food security, starch yield

Procedia PDF Downloads 207
568 Determinants of Walking among Middle-Aged and Older Overweight and Obese Adults: Demographic, Health, and Socio-Environmental Factors

Authors: Samuel N. Forjuoh, Marcia G. Ory, Jaewoong Won, Samuel D. Towne, Suojin Wang, Chanam Lee

Abstract:

The public health burden of obesity is well established, as is the influence of physical activity (PA) on the health and wellness of individuals who are obese. This study examined the influence of selected demographic, health, and socio-environmental factors on the walking behaviors of middle-aged and older overweight and obese adults. Online and paper surveys were administered to community-dwelling overweight and obese adults aged ≥ 50 years residing in four cities in central Texas and seen by a family physician in the primary care clinic from October 2013 to June 2014. Descriptive statistics were used to characterize participants’ anthropometric and demographic data as well as their health conditions and walking, socio-environmental, and more broadly defined PA behaviors. Pearson chi-square tests were then used to assess differences between participants who reported walking the recommended ≥ 150 minutes for any purpose in a typical week as a proxy for meeting the U.S. Centers for Disease Control and Prevention’s PA guidelines and those who did not. Finally, logistic regression was used to predict walking the recommended ≥ 150 minutes for any purpose, controlling for covariates. The analysis was conducted in 2016. Of the total sample (n=253, survey response rate of 6.8%), the majority were non-Hispanic white (81.7%), married (74.5%), male (53.5%), and reported an annual household income of ≥ $50,000 (65.7%). Approximately half were employed (49.6%) or had at least a college degree (51.8%). Slightly more than 1 in 5 (n=57, 22.5%) reported walking the recommended ≥150 minutes for any purpose in a typical week. The strongest predictors of walking the recommended ≥ 150 minutes for any purpose in a typical week in adjusted analysis were related to education and a highly favorable perception of the neighborhood environment. 
Compared to those with a high school diploma or some college, participants with at least a college degree were five times as likely to walk the recommended ≥ 150 minutes for any purpose (OR=5.55, 95% CI=1.79-17.25). Walking the recommended ≥ 150 minutes for any purpose was significantly associated with participants who disagreed that there were many distracted drivers (e.g., on the cell phone while driving) in their neighborhood (OR=4.08, 95% CI=1.47-11.36) and those who agreed that there are sidewalks or protected walkways (e.g., walking trails) in their neighborhood (OR=3.55, 95% CI=1.10-11.49). Those employed were less likely to walk the recommended ≥ 150 minutes for any purpose compared to those unemployed (OR=0.31, 95% CI=0.11-0.85) as were those who reported some difficulty walking for a quarter of a mile (OR=0.19, 95% CI=0.05-0.77). Other socio-environmental factors such as having care-giver responsibilities for elders, someone to walk with, or a dog in the household as well as Walk Score™ were not significantly associated with walking the recommended ≥ 150 minutes for any purpose in a typical week. Neighborhood perception appears to be an important factor associated with the walking behaviors of middle-aged and older overweight and obese individuals. Enhancing the neighborhood environment (e.g., providing walking trails) may promote walking among these individuals.
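The adjusted odds ratios and 95% confidence intervals above come from logistic regression. As a minimal illustration of how a single odds ratio and its Wald 95% confidence interval are computed, the sketch below uses hypothetical 2x2 counts, not the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    exposed:   a events, b non-events
    unexposed: c events, d non-events."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20 of 60 college graduates walked >= 150 min/week,
# versus 8 of 80 participants without a college degree.
or_, lo, hi = odds_ratio_ci(20, 40, 8, 72)
```

If the interval excludes 1, the association is significant at the 5% level, which is how results such as OR=5.55 (95% CI 1.79-17.25) are read.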

Keywords: determinants of walking, obesity, older adults, physical activity

Procedia PDF Downloads 245
567 Problem Solving in Mathematics Education: A Case Study of Nigerian Secondary School Mathematics Teachers’ Conceptions in Relation to Classroom Instruction

Authors: Carol Okigbo

Abstract:

Mathematical problem solving has long been accorded an important place in mathematics curricula at every education level in both advanced and emerging economies. Its classroom approaches have varied, such as teaching for problem-solving, teaching about problem-solving, and teaching mathematics through problem-solving. It requires engaging in tasks for which the solution methods are not immediately evident, making sense of problems and persevering in solving them by exhibiting processes, strategies, an appropriate attitude, and adequate exposure. Teachers play important roles in helping students acquire competency in problem-solving; thus, they are expected to be good problem-solvers and have proper conceptions of problem-solving. Studies show that teachers’ conceptions influence their decisions about what to teach and how to teach. Therefore, how teachers view their roles in teaching problem-solving will depend on their pedagogical conceptions of problem-solving. If teaching problem-solving is a major component of secondary school mathematics instruction, as recommended by researchers and mathematics educators, then it is necessary to establish teachers’ conceptions, what they do, and how they approach problem-solving. This study is designed to determine secondary school teachers’ conceptions regarding mathematical problem solving, its current situation, how teachers’ conceptions relate to their demographics, as well as the interaction patterns in the mathematics classroom. There have been many studies of mathematics problem solving, some of which addressed teachers’ conceptions using single-method approaches, thereby presenting only limited views of this important phenomenon. To address the problem more holistically, this study adopted an integrated mixed methods approach which involved a quantitative survey, qualitative analysis of open-ended responses, and ethnographic observations of teachers in class. 
Data for the analysis came from a random sample of 327 secondary school mathematics teachers in two Nigerian states - Anambra State and Enugu State - who completed a 45-item questionnaire. Ten of the items elicited demographic information, 11 items were open-ended questions, and 25 items were Likert-type questions. Of the 327 teachers who responded to the questionnaires, 37 were randomly selected and observed in their classes. Data analysis using ANOVA, t-tests, chi-square tests, and open coding showed that the teachers had different conceptions about problem-solving, which fall into three main themes: practice on exercises and word application problems, a process of solving mathematical problems, and a way of teaching mathematics. Teachers reported that no period is set aside for problem-solving; typically, teachers solve problems on the board, teach problem-solving strategies, and allow students time to struggle with problems on their own. The results show a significant difference between male and female teachers’ conceptions of problem-solving, a significant relationship between teachers’ conceptions and academic qualifications, and that teachers who had spent ten years or more teaching mathematics differed significantly from the group with seven to nine years of experience in terms of their conceptions of problem-solving.

Keywords: conceptions, education, mathematics, problem solving, teacher

Procedia PDF Downloads 60
566 ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based on Li-Ion Battery and Solar Energy Supply

Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan

Abstract:

Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may make a sensor network useless before its purpose is complete. At present, there are two strategies for long node and network lifetime. The first strategy is to save as much energy as possible, minimizing energy consumption by switching the node from active mode to sleep mode and by using a routing protocol with ultra-low energy consumption. The second strategy is to evaluate the energy consumption of sensor applications as accurately as possible, since an erroneous energy model may render a ZigBee sensor network useless before batteries are changed. In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses a CC2530 for controlling the sensor, carrying out the routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution for extending node lifetime is implemented. Finally, a long-term sensor network test is used to exhibit the functionality of the solar-powered system.
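The constant-current/constant-voltage (CC/CV) charging algorithm named above can be sketched as a simple simulation. The cell model and every parameter below are hypothetical simplifications for illustration, not the node's actual charger firmware:

```python
def cc_cv_charge(capacity_mah=1200, i_cc_ma=600, v_max=4.2,
                 i_term_ma=60, dt_h=0.01):
    """Simulate CC/CV charging of a Li-ion cell with a crude linear
    voltage model. Returns (hours, final_voltage, final_current)."""
    v_cell, charge_mah, i_ma, t_h = 3.0, 0.0, i_cc_ma, 0.0
    while i_ma > i_term_ma:            # stop at the termination current
        if v_cell < v_max:             # CC phase: hold constant current
            i_ma = i_cc_ma
        else:                          # CV phase: current tapers off
            i_ma *= 0.95
        charge_mah += i_ma * dt_h
        # toy model: cell voltage rises linearly with state of charge
        v_cell = min(v_max, 3.0 + 1.2 * charge_mah / capacity_mah)
        t_h += dt_h
    return t_h, v_cell, i_ma

t, v, i = cc_cv_charge()
```

The two phases are visible in the trace: current stays at the CC setpoint until the cell reaches the maximum voltage, then decays toward the termination threshold while the voltage is held constant.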

Keywords: ZigBee, Li-ion battery, solar panel, CC2530

Procedia PDF Downloads 367
565 Fiberoptic Intubation Skills Training Improves Emergency Medicine Resident Comfort Using Modality

Authors: Nicholus M. Warstadt, Andres D. Mallipudi, Oluwadamilola Idowu, Joshua Rodriguez, Madison M. Hunt, Soma Pathak, Laura P. Weber

Abstract:

Endotracheal intubation is a core procedure performed by emergency physicians. It is a high-risk procedure, and failure results in substantial morbidity and mortality. Fiberoptic intubation (FOI) is the standard of care in difficult airway protocols, yet no widespread practice exists for training emergency medicine (EM) residents in the technical acquisition of FOI skills. Simulation on mannequins is commonly utilized to teach advanced airway techniques. As part of a program to introduce FOI into our ED, residents received hands-on training in FOI as part of our weekly resident education conference. We hypothesized that prior to the hands-on training, residents had little experience with FOI and were uncomfortable using fiberoptic as a modality. We further hypothesized that resident comfort with FOI would increase following the training. The education intervention consisted of two hours of focused airway teaching and skills acquisition for PGY 1-4 residents. One hour was dedicated to four case-based learning stations focusing on standard, pediatric, facial trauma, and burn airways. Direct, video, and fiberoptic airway equipment were available to use at the residents’ discretion to intubate mannequins at each station. The second hour involved direct instructor supervision and immediate feedback during deliberate practice for FOI of a mannequin. Prior to the hands-on training, a pre-survey was sent via email to all EM residents at NYU Grossman School of Medicine. The pre-survey asked how many FOIs residents had performed in the ED, OR, and on a mannequin. The pre-survey and a post-survey asked residents to rate their comfort with FOI on a 5-point Likert scale ("extremely uncomfortable", "somewhat uncomfortable", "neither comfortable nor uncomfortable", "somewhat comfortable", and "extremely comfortable"). The post-survey was administered on site immediately following the training. 
A chi-square test of independence was calculated comparing self-reported resident comfort on the pre- and post-survey (α ≤ 0.05). Thirty-six of a total of 70 residents (51.4%) completed the pre-survey. Of pre-survey respondents, 34 residents (94.4%) had performed 0, 1 resident (2.8%) had performed 1, and 1 resident (2.8%) had performed 2 FOIs in the ED. Twenty-five residents (69.4%) had performed 0, 6 residents (16.7%) had performed 1, 2 residents (5.6%) had performed 2, 1 resident (2.8%) had performed 3, and 2 residents (5.6%) had performed 4 FOIs in the OR. Seven residents (19.4%) had performed 0, and 16 residents (44.4%) had performed 5 or more FOIs on a mannequin. Twenty-nine residents (41.4%) attended the hands-on training, and 27 of the 29 residents (93.1%) completed the post-survey. Self-reported resident comfort with FOI significantly increased in post-survey compared to pre-survey questionnaire responses (p = 0.00034). Twenty-one of 27 residents (77.8%) reported being “somewhat comfortable” or “extremely comfortable” with FOI on the post-survey, compared to 9 of 35 residents (25.8%) on the pre-survey. We show that dedicated FOI training is associated with increased learner comfort with such techniques. Further directions include studying technical competency, skill retention, translation to direct patient care, and the optimal frequency and methodology of future FOI education.
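The chi-square test of independence used here can be reproduced from first principles for a 2x2 table. The counts below are hypothetical stand-ins for the pre/post comfort tabulation, not the study's exact data:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value for a 2x2 contingency
    table [[a, b], [c, d]] (1 df, no continuity correction)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # survival function of chi-square with 1 df: p = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical table: rows = pre/post survey, columns = comfortable / not.
chi2, p = chi_square_2x2(9, 26, 21, 6)
```

A p-value below the chosen α (here 0.05) indicates that comfort ratings and survey timing are not independent, i.e. comfort changed after training.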

Keywords: airway, emergency medicine, fiberoptic intubation, medical simulation, skill acquisition

Procedia PDF Downloads 172
564 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in some industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves, such as Lamb waves, as an information carrier due to their capability of propagating over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as threshold choice, number of cycles per bit and bit rate are optimized. Experimental results are compared based on the average Bit Error Rate. Results have shown high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
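The coherent demodulation principle for the Binary Phase Shift Keying case can be sketched in a few lines: the receiver correlates each bit interval with a local replica of the carrier and decides the bit from the sign of the correlation. This is a noiseless pure-Python illustration; the carrier and sampling parameters are arbitrary, not the paper's experimental values:

```python
import math

def bpsk_roundtrip(bits, fc=5.0, fs=100.0, cycles_per_bit=5):
    """Modulate bits as BPSK on a sinusoidal carrier, then coherently
    demodulate by correlation with the carrier replica."""
    spb = int(fs * cycles_per_bit / fc)        # samples per bit
    carrier = [math.sin(2 * math.pi * fc * n / fs) for n in range(spb)]
    # bit 1 -> phase 0, bit 0 -> phase pi (sign inversion of the carrier)
    tx = [c * (1 if b else -1) for b in bits for c in carrier]
    # coherent receiver: sign of the per-bit correlation decides the bit
    return [1 if sum(tx[i * spb + n] * carrier[n] for n in range(spb)) > 0
            else 0 for i in range(len(bits))]

recovered = bpsk_roundtrip([1, 0, 1, 1, 0, 0, 1])
```

Because the decision depends only on the sign of the correlation rather than its magnitude, no amplitude threshold is needed, which is consistent with the threshold sensitivity reported for the ASK and OOK schemes but not for BPSK.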

Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate

Procedia PDF Downloads 235
563 HLA-DPB1 Matching on the Outcome of Unrelated Donor Hematopoietic Stem Cell Transplantation

Authors: Shi-xia Xu, Zai-wen Zhang, Ru-xue Chen, Shan Zhou, Xiang-feng Tang

Abstract:

Objective: The clinical influence of HLA-DPB1 mismatches on the outcome of HSCT is less clear. This is the first meta-analysis to study the effect of HLA-DPB1 matching status on clinical outcomes after unrelated donor HSCT. Methods: We searched the CIBMTR, Cochrane Central Register of Controlled Trials (CENTRAL) and related databases (1995.01–2017.06) for all relevant articles. Comparative studies were used to investigate the effect of HLA-DPB1 locus mismatches on clinical outcomes after unrelated donor HSCT, such as disease-free survival (DFS), overall survival (OS), graft-versus-host disease (GVHD), relapse, and transplant-related mortality (TRM). We performed the meta-analysis using Review Manager 5.2 software and funnel plots to assess bias. Results: Initially, 1246 articles were retrieved, and 18 studies totaling 26368 patients were analyzed. Pooled comparisons of studies found that the HLA-DPB1 mismatched group had a lower rate of DFS than the DPB1-matched group, and lower OS in non-T-cell-depleted transplantation. The DPB1 mismatched group had a higher incidence of aGVHD and more severe ( ≥ grade III) aGVHD, a lower rate of relapse, and higher TRM. Moreover, compared with 1-antigen mismatches, 2-antigen mismatches led to a higher risk of TRM and a lower relapse rate. Conclusions: This meta-analysis indicated that HLA-DPB1 has an important influence on survival and transplant-related complications during unrelated donor HSCT, and HLA-DPB1 donor selection strategies have been proposed based on a personalized algorithm.

Keywords: human leukocyte antigen, DPB1, transplant, meta-analysis, outcome

Procedia PDF Downloads 284
562 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modelling and Solving

Authors: Yasin Tadayonrad

Abstract:

Most supply chain networks have many nodes, starting from the suppliers’ side up to the customers’ side, where each node sends/receives raw materials/products from/to other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network such that all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity of each source/destination node at each time slot (e.g., per week/day/hour). Because of the different characteristics of different products/groups of products, the capacity of each node might differ for each group of products. In most supply chain networks (especially in the fast-moving consumer goods industry), there are different planners/planning teams working separately in different nodes to determine the loading/unloading timeslots in source/destination nodes to send/receive the shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering the lead times), and the working days of each node. This model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm is proposed as a way of improving the solution method, helping to apply the model to larger data sets in real business cases, including more nodes and shipments.
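The paper solves the full model with Python-MIP; the brute-force sketch below illustrates the same decision on a toy instance with hypothetical data: choose a loading timeslot for each shipment so that per-slot capacity at its source node is respected and the total delay against the required delivery date (slot + lead time) is minimal.

```python
from itertools import product

shipments = [  # (source node, lead time in slots, required delivery slot)
    ("plant_A", 2, 3),
    ("plant_A", 1, 2),
    ("plant_B", 1, 1),
]
slots = range(4)     # candidate loading slots, e.g. days 0..3
capacity = 1         # at most one loading per node per slot

def total_delay(assignment):
    """Sum of positive lateness over all shipments."""
    return sum(max(0, slot + lead - due)
               for slot, (_, lead, due) in zip(assignment, shipments))

def feasible(assignment):
    """Check the per-(node, slot) loading capacity."""
    used = {}
    for slot, (src, _, _) in zip(assignment, shipments):
        used[(src, slot)] = used.get((src, slot), 0) + 1
        if used[(src, slot)] > capacity:
            return False
    return True

best = min((a for a in product(slots, repeat=len(shipments)) if feasible(a)),
           key=total_delay)
```

Exhaustive search is only viable for a handful of shipments; the mixed integer formulation in the paper scales this same objective and constraint structure to realistic instances.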

Keywords: supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming

Procedia PDF Downloads 85
561 A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System

Authors: Mulugeta K. Tefera, Xiaolong Yang, Jian Liu

Abstract:

Detection and tracking of moving objects are very important in many application contexts, such as detection and recognition of people, visual surveillance, and automatic generation of video effects. However, the task of detecting the real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, the presence of shadows, and illumination variations due to light switching. Once the moving object is detected, tracking is also a crucial step for applications used in military defense, video surveillance, human-computer interaction, and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using an adaptive mixture-of-Gaussians analysis of the video sequences. The detected moving object is then tracked using region-based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target through enhancement and morphological post-processing operations. Secondly, the extracted object is tracked using region-based moving object tracking and inter-frame differencing to improve the tracking speed of real-time moving objects across video frames. Finally, a plotting method is applied to detect the moving objects effectively and describe the motion of the tracked object. The experiments were performed on image sequences acquired in both indoor and outdoor environments, using both a stationary camera and a web camera.
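A single running Gaussian per pixel is a common simplification of the adaptive mixture-of-Gaussians background model described above: each pixel keeps a mean and variance, flags large deviations as foreground, and updates its statistics only from background pixels. The sketch below shows the idea on tiny synthetic one-row frames; all values are hypothetical:

```python
class RunningGaussianBG:
    """Per-pixel running Gaussian background model (one mode per pixel,
    a simplification of the mixture-of-Gaussians approach)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [float(p) for p in first_frame]
        self.var = [25.0] * len(first_frame)   # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        mask = []
        for i, p in enumerate(frame):
            d = p - self.mean[i]
            fg = d * d > (self.k ** 2) * self.var[i]  # Mahalanobis test
            mask.append(1 if fg else 0)
            if not fg:  # adapt the model only with background pixels
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return mask

bg = RunningGaussianBG([10, 10, 10, 10])
for _ in range(20):                   # static scene: the model settles
    bg.apply([10, 11, 9, 10])
mask = bg.apply([10, 11, 200, 10])    # an object appears at pixel 2
```

The full mixture model extends this by keeping several weighted Gaussians per pixel, which lets it absorb repetitive background motion (e.g. swaying foliage) that a single mode would misclassify.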

Keywords: background modeling, Gaussian mixture model, inter-frame difference, object detection and tracking, video surveillance

Procedia PDF Downloads 463
560 Palliative Care Referral Behavior Among Nurse Practitioners in Hospital Medicine

Authors: Sharon Jackson White

Abstract:

Purpose: Nurse practitioners (NPs) practicing within hospital medicine play a significant role in caring for patients who might benefit from palliative care (PC) services. Using the Theory of Planned Behavior, the purpose of this study was to examine the relationships among facilitators to referral, barriers to referral, self-efficacy with end-of-life discussions, history of referral, and referring to PC among NPs in hospital medicine. Hypotheses: 1) Perceived facilitators to referral will be associated with a higher history of referral and a higher number of referrals to PC. 2) Perceived barriers to referral will be associated with a lower history of referral and a lower number of referrals to PC. 3) Increased self-efficacy with end-of-life discussions will be associated with a higher history of referral and a higher number of referrals to PC. 4) Perceived facilitators to referral, perceived barriers to referral, and self–efficacy with end-of-life discussions will contribute to a significant variance in the history of referral to PC. 5) Perceived facilitators to referral, perceived barriers to referral, and self–efficacy with end-of-life discussions will contribute to a significant variance in the number of referrals to PC. Significance: Previous studies of referring patients to PC within the hospital setting have focused on physician practices. Identifying factors that influence NPs referring hospitalized patients to PC is essential to ensure that patients have access to these important services. This study incorporates the SNRS mission of advancing nursing research through the dissemination of research findings and the promotion of nursing science. Methods: A cross-sectional, predictive correlational study was conducted. 
History of referral to PC, facilitators to referring to PC, barriers to referring to PC, self-efficacy in end-of-life discussions, and referral to PC were measured using the PC referral case study survey, facilitators and barriers to PC referral survey, and self-assessment with end-of-life discussions survey. Data were analyzed descriptively and with Pearson’s Correlation, Spearman’s Rho, point-biserial correlation, multiple regression, logistic regression, Chi-Square test, and the Mann-Whitney U test. Results: Only one facilitator (PC team being helpful with establishing goals of care) was significantly associated with referral to PC. Three variables were statistically significant in relation to the history of referring to PC: “Inclined to refer: PC can help decrease the length of stay in hospital”, “Most inclined to refer: Patients with serious illnesses and/or poor prognoses”, and “Giving bad news to a patient or family member”. No predictor variables contributed a significant variance in the number of referrals to PC for all three case studies. There were no statistically significant results showing a relationship between the history of referral and referral to PC. All five hypotheses were partially supported. Discussion: Findings from this study emphasize the need for further research on NPs who work in hospital settings and what factors influence their behaviors of referring to PC. Since there is an increase in NPs practicing within hospital settings, future studies should use a larger sample size and incorporate hospital medicine NPs and other types of NPs that work in hospitals.

Keywords: palliative care, nurse practitioners, hospital medicine, referral

Procedia PDF Downloads 59
559 Modelling of Groundwater Resources for Al-Najaf City, Iraq

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

Groundwater is a vital water resource in many areas of the world, particularly in the Middle East region where water resources are becoming scarce and depleted. Sustainable management and planning of groundwater resources have become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict the flow pattern and assess water resource security, as well as groundwater quality affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred at Al-Najaf City, which is located in the mid-west of Iraq, adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with the Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computer model is also implemented with the distribution of 69 wells in the area, with pre-defined steady hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from the analysis of field data for the study area for the period 1980-2014. The hydraulic conductivity measured at the well locations is interpolated for model use. The model is calibrated with the measured hydraulic heads at 50 of the 69 wells in the domain, and the results show good agreement. The standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE and correlation coefficient are 0.297 m, 2.087 m, 6.899% and 0.971, respectively. Sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year. 
Hydraulic conductivity is found to be another parameter which can affect the results significantly; therefore, it requires high-quality field data. The results show a general flow pattern from the west to the east of the study area, which agrees well with the observations and the gradient of the ground surface. It is found that with the current operational pumping rates of the wells in the area, a dry area results in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance with the current operational pumping quantity shows that the Euphrates River supplies approximately 11759 m3/day to the groundwater, instead of gaining approximately 11178 m3/day from the groundwater, as would occur with no pumping from the wells. It is expected that the results obtained from the study can provide important information for the sustainable and effective planning and management of the regional groundwater resources for Al-Najaf City.
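The calibration statistics quoted above (RMSE, normalized RMSE, correlation coefficient) can be computed directly from paired observed and simulated heads. The head values below are hypothetical illustrations, not the study's well data:

```python
import math

# Hypothetical observed vs. computed hydraulic heads (m) at a few wells.
observed = [18.2, 19.5, 21.1, 22.8, 24.0, 25.6]
computed = [18.0, 19.9, 20.8, 23.1, 24.3, 25.2]

n = len(observed)
residuals = [o - c for o, c in zip(observed, computed)]
rmse = math.sqrt(sum(r * r for r in residuals) / n)
nrmse = 100 * rmse / (max(observed) - min(observed))   # percent of range
mo = sum(observed) / n
mc = sum(computed) / n
corr = (sum((o - mo) * (c - mc) for o, c in zip(observed, computed))
        / math.sqrt(sum((o - mo) ** 2 for o in observed)
                    * sum((c - mc) ** 2 for c in computed)))
```

A low normalized RMSE together with a correlation coefficient close to 1, as reported for the Al-Najaf model, indicates that the simulated heads reproduce both the magnitude and the spatial pattern of the observations.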

Keywords: Al-Najaf city, conceptual modelling, groundwater, unconfined aquifer, visual MODFLOW

Procedia PDF Downloads 201
558 Predicting Emerging Agricultural Investment Opportunities: The Potential of Structural Evolution Index

Authors: Kwaku Damoah

Abstract:

The agricultural sector is characterized by continuous transformation, driven by factors such as demographic shifts, evolving consumer preferences, climate change, and migration trends. This dynamic environment presents complex challenges for key stakeholders including farmers, governments, and investors, who must navigate these changes to achieve optimal investment returns. To effectively predict market trends and uncover promising investment opportunities, a systematic, data-driven approach is essential. This paper introduces the Structural Evolution Index (SEI), a machine learning-based methodology. SEI is specifically designed to analyse long-term trends and forecast the potential of emerging agricultural products for investment. Versatile in application, it evaluates various agricultural metrics such as production, yield, trade, land use, and consumption, providing a comprehensive view of the evolution within agricultural markets. By harnessing data from the UN Food and Agricultural Organisation (FAOSTAT), this study demonstrates the SEI's capabilities through Comparative Exploratory Analysis and evaluation of international trade in agricultural products, focusing on Malaysia and Singapore. The SEI methodology reveals intricate patterns and transitions within the agricultural sector, enabling stakeholders to strategically identify and capitalize on emerging markets. This predictive framework is a powerful tool for decision-makers, offering crucial insights that help anticipate market shifts and align investments with anticipated returns.

Keywords: agricultural investment, algorithm, comparative exploratory analytics, machine learning, market trends, predictive analytics, structural evolution index

Procedia PDF Downloads 51
557 Anti-Forensic Countermeasure: An Examination and Analysis Extended Procedure for Information Hiding of Android SMS Encryption Applications

Authors: Ariq Bani Hardi

Abstract:

Smartphone technology is growing very rapidly and empowering various fields of science. One of the mobile operating systems that dominate the smartphone market today is Android by Google. Unfortunately, the expansion of mobile technology is misused by criminals to hide the information that they store or exchange with each other, making it more difficult for law enforcement to prove crimes in the judicial process (anti-forensics). One technique used to hide information is encryption, such as the use of SMS encryption applications. A mobile forensic examiner or an investigator should prepare a countermeasure technique for such findings during the investigation process. This paper discusses an extended procedure for cases where the investigator finds unreadable SMS messages in Android evidence because of encryption. To define the extended procedure, we created and analyzed a dataset of Android SMS encryption applications. The dataset was grouped by application characteristics related to communication permissions, as well as the availability of source code and documentation of the encryption scheme. Permissions indicate how applications may exchange data and keys. Availability of the source code and the encryption scheme documentation can show which cryptographic algorithm specification is used, the key length, how key generation, key exchange, and encryption/decryption are performed, and other related information. The output of this paper is an extended, alternative procedure for the examination and analysis process in Android digital forensics. It can help investigators who encounter encrypted SMS messages during examination and analysis: what steps should the investigator take so that they still have a chance to discover the content of the encrypted SMS messages in Android evidence?

Keywords: anti-forensic countermeasure, SMS encryption android, examination and analysis, digital forensic

Procedia PDF Downloads 121
556 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and results in leaving all or part of the sediments carried by the water in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and in the long term can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir, which threatens sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their efficient lifetime, but recently mathematical and computational models have become widely used as a suitable tool in reservoir sedimentation studies. These models usually solve the equations using the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in the prediction of the sedimentation amount in the Dez dam, southern Iran. The model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation) is based on a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was used to calibrate a period of 47 years and forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125000 hectares of downstream lands, and plays a major role in flood control in the region. The input data, including geometry, hydraulic and sedimentary data, cover 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat after 47 years. 
Finally, the results obtained in the delta region were very satisfactory: the output of GSTARS4 was almost identical to the hydrographic profile measured in 2003. However, because the Dez reservoir is long (65 km) and large, vertical currents dominate, making the calculations of the above-mentioned method inaccurate there. To solve this problem, we used the empirical reduction method to calculate sedimentation in the downstream area, which yielded very good results. Thus, we demonstrated that by combining these two methods, a very suitable model of sedimentation in the Dez Dam for the study period can be obtained. The present study also demonstrated that the outputs of both software packages are essentially the same.
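The core bed-change computation in such one-dimensional models is the sediment continuity (Exner) equation. The sketch below is a generic explicit finite-difference illustration of that equation, not the GSTARS4 or HEC-6 implementation; the grid spacing, time step, porosity, and transport-rate values are all hypothetical.

```python
import numpy as np

def exner_update(z, qs, dx, dt, porosity=0.4):
    """One explicit step of the 1-D sediment continuity (Exner) equation:
    (1 - p) * dz/dt = -dqs/dx, discretized with an upwind difference
    (flow assumed left to right).
    z  : bed elevation at each node (m)
    qs : volumetric sediment transport rate per unit width (m^2/s)
    """
    dqs_dx = np.zeros_like(z)
    dqs_dx[1:] = (qs[1:] - qs[:-1]) / dx
    # Transport capacity decreasing downstream (dqs/dx < 0) -> deposition
    return z - dt * dqs_dx / (1.0 - porosity)

# Hypothetical reach: initially flat bed, transport rate dropping downstream,
# as would happen where a river enters the backwater of a reservoir
z0 = np.zeros(5)
qs = np.array([1e-4, 8e-5, 6e-5, 4e-5, 2e-5])
z1 = exner_update(z0, qs, dx=100.0, dt=3600.0)
```

Because the transport rate decreases along the reach, the bed aggrades at every interior node, which is the delta-building mechanism the abstract describes.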

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 305
555 A Hybrid Energy Storage Module for the Emergency Energy System of the Community Shelter in Yucatán, México

Authors: María Reveles-Miranda, Daniella Pacheco-Catalán

Abstract:

Sierra Papacal commissary is located north of Merida, Yucatan, México, where the indigenous Maya population predominates. Due to its location, the region has an elevation of less than 4.5 meters above sea level, with a high risk of flooding associated with storms and hurricanes and highly vulnerable infrastructure and housing in the presence of strong gusts of wind. During environmental contingencies, the challenge is to provide an autonomous electrical supply from renewable energy sources that covers the health, food, and water-pumping needs of vulnerable populations. To address this challenge, a hybrid energy storage module is proposed for the emergency photovoltaic (PV) system of the community shelter in Sierra Papacal, Yucatán, which combines high-energy-density batteries and high-power-density supercapacitors (SC) in a single module, providing a quick response to energy demand, reducing thermal stress on the batteries, and extending their useful life. Incorporating SCs in energy storage modules provides fast response to power variations and balanced energy extraction, ensuring a longer period of electrical supply to vulnerable populations during contingencies. The implemented control strategy increases the module's overall performance by ensuring optimal use of both devices and balanced energy extraction. The operation of the module with the control algorithm is validated in MATLAB/Simulink® and through experimental tests.
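The abstract does not specify the control law, so the sketch below shows only one common way such a battery/SC power split is realized: a first-order low-pass filter that routes the slowly varying component of demand to the battery and the fast transients to the supercapacitor. The time constant, sample period, and load profile are all assumptions for illustration.

```python
def split_power(demand, dt=0.1, tau=5.0):
    """Split a power-demand series (W) between battery and supercapacitor
    using a first-order low-pass filter with time constant tau (s).
    Returns (battery_power, sc_power); the two sum to the demand at
    every sample, so the load is always fully served."""
    alpha = dt / (tau + dt)
    battery, sc = [], []
    p_slow = demand[0]                  # filter state, initialized to the load
    for p in demand:
        p_slow += alpha * (p - p_slow)  # low-frequency component -> battery
        battery.append(p_slow)
        sc.append(p - p_slow)           # high-frequency residual -> SC
    return battery, sc

# Hypothetical profile: steady 200 W shelter load with a 1 kW pump start-up
profile = [200.0] * 20 + [1000.0] * 5 + [200.0] * 20
pb, psc = split_power(profile)
```

With this split, the supercapacitor absorbs most of the pump start-up spike while the battery current ramps gently, which is the thermal-stress reduction the module is designed for.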

Keywords: batteries, community shelter, environmental contingencies, hybrid energy storage, isolated photovoltaic system, supercapacitors

Procedia PDF Downloads 71
554 Iris Recognition Based on the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body, and iris recognition has become a hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient, and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against variations in the contrast or brightness of iris image samples; such variations mostly arise from lighting differences and camera changes. First, the iris region is located and then remapped to a rectangular area of 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark eyelash and eyelid pixels as noise points. To account for variations in feature localization, the rectangular iris image is partitioned into N overlapping sub-images (blocks); from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical, and diagonal directions, and the low-order norms of the gradient components are used to build the feature vector. A Euclidean-distance-based classifier is used as the matching metric to determine the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database, and the attained recognition accuracy reached 99.92%.
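The block-wise gradient features and Euclidean matching described above can be sketched as follows. This is only an illustrative approximation: the block count, overlap, and the exact low-order norms used in the paper are not given in the abstract, so the parameters here are hypothetical.

```python
import numpy as np

def block_gradient_features(iris, n_blocks=8, overlap=0.5):
    """Average absolute first-order gradient densities from overlapping
    blocks of a normalized iris strip (e.g. 60x360), taken along the
    horizontal, vertical, and two diagonal directions."""
    h, w = iris.shape
    win = w // n_blocks
    step = int(win * (1 - overlap)) or 1
    feats = []
    for x in range(0, w - win + 1, step):
        b = iris[:, x:x + win].astype(float)
        gh = np.abs(np.diff(b, axis=1)).mean()        # horizontal gradient
        gv = np.abs(np.diff(b, axis=0)).mean()        # vertical gradient
        gd1 = np.abs(b[1:, 1:] - b[:-1, :-1]).mean()  # main diagonal
        gd2 = np.abs(b[1:, :-1] - b[:-1, 1:]).mean()  # anti-diagonal
        feats.extend([gh, gv, gd1, gd2])
    return np.array(feats)

def match(feat, templates):
    """Index of the enrolled template nearest in Euclidean distance."""
    dists = [np.linalg.norm(feat - t) for t in templates]
    return int(np.argmin(dists))

# Hypothetical example: match a probe strip against two enrolled templates
rng = np.random.default_rng(0)
probe = rng.random((60, 360))
other = rng.random((60, 360))
templates = [block_gradient_features(probe), block_gradient_features(other)]
best = match(block_gradient_features(probe), templates)  # -> 0 (itself)
```

The overlapping blocks give the descriptor some tolerance to small shifts of iris texture along the angular axis, which is the localization-variation issue the abstract mentions.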

Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric

Procedia PDF Downloads 322
553 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm, and its accurate prediction is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature; for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks: MOS, for example, is based on multiple linear regression and requires a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that produces an improved forecast by combining several instances of Bayesian Model Averaging (BMA), while ensemble dressing methods identify the best member forecast and use it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to do so, we must identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models in the literature. We then investigate whether developing a complex ensemble model is indeed needed; to this end, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the observed peak and its timing. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee began just after Irene at the beginning of September 2011, we treat them as a single contiguous hurricane event. The data set used for this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally outperform the simple average ensemble technique.
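The correlation-weighted ensemble and the simple-average benchmark described above can be sketched as follows. This is a toy illustration on synthetic data, not the NYHOPS members or the paper's exact weighting scheme (the abstract also mentions standard-deviation weights); the member forecasts and noise levels are invented.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between a forecast and observations."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def correlation_weights(forecasts, obs_train):
    """Weights proportional to each member's correlation with observations
    over a training window (one simple weighting scheme among several)."""
    w = np.array([np.corrcoef(f, obs_train)[0, 1] for f in forecasts])
    w = np.clip(w, 0, None)   # drop anti-correlated members
    return w / w.sum()

# Hypothetical members: a good, a mediocre, and an uninformative forecast
rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0, 3, 50))               # toy surge signal (m)
members = [obs + 0.05 * rng.standard_normal(50),  # small errors
           obs + 0.30 * rng.standard_normal(50),  # larger errors
           0.80 * rng.standard_normal(50)]        # pure noise
w = correlation_weights(members, obs)
ens = sum(wi * m for wi, m in zip(w, members))    # weighted ensemble
avg = np.mean(members, axis=0)                    # simple-average benchmark
```

In this toy setting the correlation weights down-weight the noise member, so the weighted ensemble achieves a lower RMSE than the unweighted average, which is the kind of comparison the statistical platform is built to make.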

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 301