Search results for: dynamic Bayesian networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6555


3915 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study

Authors: Mohamed H. Khalil

Abstract:

Water is one of the most crucial elements influencing human lives and development. Over the last few years, GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced ‘GIS Web-Based System’. This system is designed to assist and optimize the integration of data between the Call Center, Operation and Maintenance, and Laboratory departments. The core of this system is a unified ‘Data Model’ for all the spatial and tabular data of the corresponding departments. The system provides advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing, enhanced data retrieval, integrated workflow, different access levels, and correlated information records/tracking. This cost-effective system contributes significantly not only to the completeness of the base map (93%) and the water network (87%) in a highly detailed GIS format and to enhanced customer-service performance, but also to a reduction in operating costs and day-to-day operations (~5-10%). In addition, the proposed system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and Laboratory), which allows a better understanding and analysis of complex situations. Furthermore, this system contributed tangibly to: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.), (ii) improved effectiveness of the different water departments, (iii) efficient advanced analysis, (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annual), (v) tangible planning synthesizing spatial and tabular data; and finally, (vi) a scalable decision support system. It is worth highlighting that the proposed future plan (second phase) of this system extends to integration with the Billing and SCADA departments. This scalability will add advanced functionalities to the existing ones to allow further sustainable contributions.

Keywords: GIS Web-Based, base-map, water network, decision support system

Procedia PDF Downloads 69
3914 Comparative Effect of Self-Myofascial Release as a Warm-Up Exercise on Functional Fitness of Young Adults

Authors: Gopal Chandra Saha, Sumanta Daw

Abstract:

Warm-up is an essential component for optimizing performance in various sports before a physical fitness training session. This study investigated the immediate comparative effect of self-myofascial release through vibration rolling (VR), non-vibration rolling (NVR), and static stretching as part of a warm-up treatment on the functional fitness of young adults. Functional fitness is a classification of training that prepares the body for real-life movements and activities. For the present study, 20 male physical education students were selected as subjects. The age of the subjects ranged from 20 to 25 years. The functional fitness variables undertaken in the present study were flexibility, muscle strength, agility, and static and dynamic balance of the lower extremity. Each of the three warm-up protocols was administered on consecutive days, i.e., with a 24 hr time gap, and all tests were administered in the morning. The mean and SD were used as descriptive statistics. The significance of statistical differences among the groups was measured by applying the F-test, and to find out the exact location of the difference, a post hoc test (Least Significant Difference) was applied. It was found from the study that only flexibility showed a significant difference among the three types of warm-up exercise. The observed results depicted that VR has more impact on myofascial release in flexibility in comparison with NVR and stretching as part of a warm-up exercise, as the ‘p’ value was less than 0.05. In the present study, among the three means of warm-up exercise, the vibration roller showed a better mean difference compared with NVR and static stretching on the functional fitness of young physical education practitioners, although the results were found insignificant in the case of muscle strength, agility, and static and dynamic balance of the lower extremity. These findings suggest that sports professionals and coaches may take VR into account when designing more efficient and effective pre-performance routines in the long term to improve exercise performance. VR has high potential to translate into a practical on-field application.
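
The statistical workflow described above (descriptive statistics, an F-test across the three warm-up conditions, and an LSD post hoc comparison) can be reproduced with standard scientific Python tools. The sketch below uses hypothetical flexibility scores, not the study's data.

```python
# Minimal sketch of the analysis described above: one-way ANOVA (F-test) across the
# three warm-up conditions followed by Fisher's LSD post hoc comparisons.
# The score values are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

flexibility = {
    "vibration_rolling": np.array([28.1, 30.4, 27.9, 31.2, 29.5]),
    "non_vibration_rolling": np.array([26.0, 27.3, 25.8, 28.1, 26.9]),
    "static_stretching": np.array([24.7, 25.9, 24.1, 26.4, 25.2]),
}

groups = list(flexibility.values())
f_stat, p_value = stats.f_oneway(*groups)   # overall F-test
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Fisher's LSD: pairwise t-tests using the pooled within-group error variance.
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    ms_within = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / (n_total - k)
    df_within = n_total - k
    names = list(flexibility.keys())
    for i in range(k):
        for j in range(i + 1, k):
            diff = groups[i].mean() - groups[j].mean()
            se = np.sqrt(ms_within * (1 / len(groups[i]) + 1 / len(groups[j])))
            t = diff / se
            p = 2 * stats.t.sf(abs(t), df_within)
            print(f"{names[i]} vs {names[j]}: diff = {diff:.2f}, p = {p:.4f}")
```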

Keywords: self-myofascial release, functional fitness, foam roller, physical education

Procedia PDF Downloads 117
3913 Video Summarization: Techniques and Applications

Authors: Zaynab El Khattabi, Youness Tabii, Abdelhamid Benkaddour

Abstract:

Nowadays, the huge amount of multimedia repositories makes the browsing, retrieval, and delivery of video content very slow and even difficult. Video summarization has been proposed to enable faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. Video summaries can be generated in many different forms; however, the two fundamental ways to generate summaries are static and dynamic. We present different techniques for each mode from the literature and describe some features used for generating video summaries. We conclude with perspectives for further research.

Keywords: video summarization, static summarization, video skimming, semantic features

Procedia PDF Downloads 378
3912 A Semantic and Concise Structure to Represent Human Actions

Authors: Tobias Strübing, Fatemeh Ziaeetabar

Abstract:

Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need to use a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which considers the touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information on static (e.g., top, bottom) and dynamic spatial relations (e.g., moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows and a large set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used in the category of manipulation actions, which ultimately involve two hands. Here, we would like to extend this approach to a whole-body action descriptor and build a joint activity representation structure. For this purpose, we need to perform a statistical analysis to modify the current eSEC by summarizing it while preserving its features, and introduce a new version called Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve this, we computed the importance of each matrix row in a statistical way, to see if it is possible to remove a particular row while all manipulations remain distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the unity of the predefined manipulation actions. By performing the above analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a tremendous impact on the recognition and prediction of complex actions, as well as on the interactions between humans and robots. It also creates a comprehensive platform to integrate with body limb descriptors and dramatically increases system performance, especially in complex real-time applications such as human-robot interaction prediction.
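
The row-reduction step described above can be illustrated with a simple distinguishability check: a row is a candidate for removal only if, after deleting it from every manipulation's descriptor, all manipulations remain pairwise distinct. The sketch below uses small random matrices as stand-ins for real eSEC descriptors, so the numbers are purely illustrative.

```python
# Sketch of the statistical row-reduction idea behind e2SEC: drop a row only if every
# pair of manipulation descriptors is still distinguishable without it.
# The descriptors here are random placeholders, not real eSEC matrices.
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols, n_manipulations = 30, 12, 8
# Each manipulation is encoded as a matrix of categorical relation codes (0..4).
descriptors = [rng.integers(0, 5, size=(n_rows, n_cols)) for _ in range(n_manipulations)]

def all_distinguishable(mats):
    """True if no two descriptor matrices are identical."""
    for i in range(len(mats)):
        for j in range(i + 1, len(mats)):
            if np.array_equal(mats[i], mats[j]):
                return False
    return True

removable = []
for row in range(n_rows):
    reduced = [np.delete(m, row, axis=0) for m in descriptors]
    if all_distinguishable(reduced):
        removable.append(row)

print(f"{len(removable)} of {n_rows} rows could each be removed "
      f"without losing distinguishability: {removable}")
```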

Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis

Procedia PDF Downloads 98
3911 A Survey of Attacks and Security Requirements in Wireless Sensor Networks

Authors: Vishnu Pratap Singh Kirar

Abstract:

A wireless sensor network (WSN) is a network of many interconnected sensor nodes, equipped with limited energy resources and used to detect physical characteristics of the environment. Much research on WSNs has been performed over the past decades. WSNs are applicable in many security systems governed by the military and in many civilian applications. Thus, the security of WSNs attracts the attention of researchers and offers opportunities for many future developments. Still, there are many other issues related to deployment and overall coverage, scalability, size, energy efficiency, quality of service (QoS), computational power, and more. In this paper, we discuss various applications of WSNs and their security-related issues and requirements.

Keywords: wireless sensor network (WSN), wireless network attacks, wireless network security, security requirements

Procedia PDF Downloads 463
3910 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Experience in past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals, power plants, etc., as they have to remain functional and operational. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a similar fashion to the target spectrum for structural components. This paper presents a methodology developed to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS) (i.e., the design spectrum for structural components). The methodology is based on experimental and numerical analysis of a database of 27 real Reinforced Concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties have been extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, mostly designated as post-disaster/emergency shelters by the city of Montreal. The buildings are subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) are developed for every floor in two horizontal directions, considering four different damping ratios of NSCs (i.e., 2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) are statistically studied, and a methodology is proposed to generate the FDS directly from the corresponding UHS. The approach is capable of generating the FDS for any selection of floor level and damping ratio of NSCs. It captures the effects of dynamic interaction between the primary (structural) and secondary (NSC) systems, as well as higher and torsional modes of the primary structure. These are important improvements of this approach compared to conventional methods and code recommendations. Application of the proposed approach is demonstrated here through two real case-study buildings: one low-rise and one medium-rise. The proposed approach can be used as a practical and robust tool for the seismic assessment and design of NSCs, especially in existing post-disaster structures.
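
The FRS generation step described above can be sketched numerically: a floor acceleration history is applied as base excitation to a family of single-degree-of-freedom oscillators representing NSCs, and the peak absolute acceleration is recorded per period and damping ratio. The input motion, period grid, and integration scheme below are illustrative assumptions; the study's own FRS were derived from UHS-compatible records and measured building properties.

```python
# Minimal sketch of Floor Response Spectrum (FRS) generation: peak absolute
# acceleration of SDOF oscillators (Newmark, constant average acceleration) driven by
# a floor acceleration history. The motion below is a synthetic placeholder.
import numpy as np

def sdof_peak_accel(ag, dt, period, zeta):
    """Peak absolute acceleration of a unit-mass SDOF oscillator under base motion ag."""
    wn = 2.0 * np.pi / period
    m, c, k = 1.0, 2.0 * zeta * wn, wn**2
    u, v = 0.0, 0.0
    a = -ag[0] - c * v - k * u
    peak = abs(a + ag[0])
    k_hat = k + 2.0 * c / dt + 4.0 * m / dt**2
    for i in range(len(ag) - 1):
        dp = -m * (ag[i + 1] - ag[i])
        dp_hat = dp + (4.0 * m / dt + 2.0 * c) * v + 2.0 * m * a
        du = dp_hat / k_hat
        dv = 2.0 * du / dt - 2.0 * v
        da = 4.0 * du / dt**2 - 4.0 * v / dt - 2.0 * a
        u, v, a = u + du, v + dv, a + da
        peak = max(peak, abs(a + ag[i + 1]))   # absolute acceleration of the NSC
    return peak

dt = 0.01
t = np.arange(0.0, 20.0, dt)
floor_acc = 0.2 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.15 * t)  # placeholder

periods = np.linspace(0.05, 3.0, 60)
for zeta in (0.02, 0.05, 0.10, 0.20):          # the NSC damping ratios used in the study
    frs = [sdof_peak_accel(floor_acc, dt, T, zeta) for T in periods]
    print(f"zeta = {zeta:.2f}: peak FRS ordinate = {max(frs):.2f} m/s^2")
```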

Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design

Procedia PDF Downloads 198
3909 A Survey on Positive Real and Strictly Positive Real Scalar Transfer Functions

Authors: Mojtaba Hakimi-Moghaddam

Abstract:

Positive real and strictly positive real transfer functions are important concepts in control theory. In this paper, the results of research in these areas are summarized. Definitions, together with their graphical interpretations, are presented. The equivalent conditions in the frequency domain and in state-space representations are reviewed. Their equivalent electrical networks are explained. Also, a comprehensive discussion of the difference between the behavior of the real part of positive real and strictly positive real transfer functions at high frequencies is presented. Furthermore, several illustrative examples are given.
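
A rough numerical check of the frequency-domain condition discussed above can be scripted as follows: verify that a rational H(s) has no open right-half-plane poles and that Re{H(jω)} stays non-negative over a frequency grid. The example transfer function is an assumed test case, not one from the paper, and a grid check is of course illustrative rather than a complete PR/SPR proof (poles on the imaginary axis and behavior at infinity need separate treatment).

```python
# Illustrative frequency-domain check of positive realness for a rational H(s).
# Example system is an assumption; a grid check is not a rigorous proof.
import numpy as np
from scipy import signal

num, den = [1.0, 2.0], [1.0, 2.0, 5.0]        # H(s) = (s + 2) / (s^2 + 2s + 5)
H = signal.TransferFunction(num, den)

poles = np.roots(den)
print("poles:", poles, "-> all in open LHP:", bool(np.all(poles.real < 0)))

w = np.logspace(-3, 3, 2000)
_, Hjw = signal.freqresp(H, w)
min_re = Hjw.real.min()
print(f"min Re(H(jw)) over the grid = {min_re:.4f}")
print("looks positive real on this grid:", min_re >= 0.0)
# For the *strict* property, one also examines Re(H(jw)) as w -> infinity,
# which is the high-frequency difference highlighted in the abstract.
```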

Keywords: real rational transfer functions, positive realness property, strictly positive realness property, equivalent conditions

Procedia PDF Downloads 366
3908 Optimization Process for Ride Quality of a Nonlinear Suspension Model Based on Newton-Euler’ Augmented Formulation

Authors: Mohamed Belhorma, Aboubakar S. Bouchikhi, Belkacem Bounab

Abstract:

This paper addresses the modeling of a double A-arm suspension; a three-dimensional nonlinear model has been developed using the multibody systems formalism. A dynamical study of the responses of the different components was carried out, particularly for the wheel assembly. To validate these results, the system was constructed and simulated in RecurDyn, a professional multibody dynamics simulation software. The model has been used as the objective function in an optimization algorithm for ride quality improvement.

Keywords: double A-Arm suspension, multibody systems, ride quality optimization, dynamic simulation

Procedia PDF Downloads 118
3907 Resourcing for Post-Disaster Housing Reconstruction: The Case of Cyclone Sidr and Aila in Bangladesh

Authors: Zahidul Islam

Abstract:

This study investigates the effectiveness of resourcing in post-disaster housing reconstruction with reference to Cyclones Sidr and Aila in Bangladesh. Through an evaluation of three key theories - the Build Back Better approach, the Balanced Scorecard approach, and dynamic competency theories - a synthesis of the literature, and empirical fieldwork, this research develops a dynamic theoretical framework that moves the trajectory of post-disaster housing reconstruction towards the reconstruction of more resilient houses. The ultimate goal of any post-disaster housing reconstruction project is to provide quality houses and to achieve high levels of satisfaction for beneficiaries. However, post-disaster reconstruction projects often fail in their stated objectives; only 10-20% of housing needs are met, with most houses constructed on a temporary rather than permanent basis. A number of scholars have argued that access to resources can significantly increase the capacity and capability of disaster victims to rebuild their lives, including the construction of new homes. This study draws on structured interviews with 285 villagers affected by cyclones to investigate the effectiveness of resourcing in rebuilding houses after Cyclone Sidr in 2007 and Cyclone Aila in 2009. Furthermore, semi-structured interviews were conducted with 20 key stakeholders from UNDP, Oxfam, government offices, and national and international NGOs. The results of this study show that the recovery rate of cyclone-resilient houses is very low and the majority of the population is still vulnerable. Furthermore, hierarchical regression of the survey data and thematic analyses of the qualitative data indicate that access to resources, level of education, quality of building materials, and income-generating activities of the respondents are critical for effective post-disaster recovery. Conversely, limited resource availability, lack of coordination among participating organisations, corruption, and lack of access to appropriate land constituted significant obstacles to livelihood recovery. Finally, this study makes significant theoretical contributions to theories of post-disaster recovery by introducing new variables and measures for evaluating the quality and effectiveness of post-disaster housing.

Keywords: disaster, resourcing, housing, resilience

Procedia PDF Downloads 126
3906 A Conceptual Model of Sex Trafficking Dynamics in the Context of Pandemics and Provisioning Systems

Authors: Brian J. Biroscak

Abstract:

In the United States (US), “sex trafficking” is defined at the federal level in the Trafficking Victims Protection Act of 2000 as encompassing a number of processes such as recruitment, transportation, and provision of a person for the purpose of a commercial sex act. It involves the use of force, fraud, or coercion, or in which the person induced to perform such act has not attained 18 years of age. Accumulating evidence suggests that sex trafficking is exacerbated by social and environmental stressors (e.g., pandemics). Given that “provision” is a key part of the definition, “provisioning systems” may offer a useful lens through which to study sex trafficking dynamics. Provisioning systems are the social systems connecting individuals, small groups, entities, and embedded communities as they seek to satisfy their needs and wants for goods, services, experiences and ideas through value-based exchange in communities. This project presents a conceptual framework for understanding sex trafficking dynamics in the context of the COVID pandemic. The framework is developed as a system dynamics simulation model based on published evidence, social and behavioral science theory, and key informant interviews with stakeholders from the Protection, Prevention, Prosecution, and Partnership sectors in one US state. This “4 P Paradigm” has been described as fundamental to the US government’s anti-trafficking strategy. The present research question is: “How do sex trafficking systems (e.g., supply, demand and price) interact with other provisioning systems (e.g., networks of organizations that help sexually exploited persons) to influence trafficking over time vis-à-vis the COVID pandemic?” Semi-structured interviews with stakeholders (n = 19) were analyzed based on grounded theory and combined for computer simulation. The first step (Problem Definition) was completed by open coding video-recorded interviews, supplemented by a literature review. The model depicts provision of sex trafficking services for victims and survivors as declining in March 2020, coincidental with COVID, but eventually rebounding. The second modeling step (Dynamic Hypothesis Formulation) was completed by open- and axial coding of interview segments, as well as consulting peer-reviewed literature. Part of the hypothesized explanation for changes over time is that the sex trafficking system behaves somewhat like a commodities market, with each of the other subsystems exhibiting delayed responses but collectively keeping trafficking levels below what they would be otherwise. Next steps (Model Building & Testing) led to a ‘proof of concept’ model that can be used to conduct simulation experiments and test various action ideas, by taking model users outside the entire system and seeing it whole. If sex trafficking dynamics unfold as hypothesized, e.g., oscillated post-COVID, then one potential leverage point is to address the lack of information feedback loops between the actual occurrence and consequences of sex trafficking and those who seek to prevent its occurrence, prosecute the traffickers, protect the victims and survivors, and partner with the other anti-trafficking advocates. Implications for researchers, administrators, and other stakeholders are discussed.
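
A stock-and-flow structure of the kind described above can be prototyped with a few lines of Euler integration before moving to dedicated system dynamics software. The sketch below is a deliberately simplified illustration of the modeling style only: the stocks, flows, parameter values, and pandemic-shock window are assumptions for demonstration and are not the author's model.

```python
# Highly simplified stock-and-flow sketch of a system dynamics model with a pandemic
# shock. Structure and numbers are illustrative assumptions, not the author's model.
import numpy as np

dt, months = 0.25, 48                     # quarter-month steps, four simulated years
steps = int(months / dt)

victims = 100.0                           # stock: persons currently being trafficked
service_capacity = 50.0                   # stock: protection/provisioning capacity

history = []
for i in range(steps):
    t = i * dt
    pandemic_shock = 1.5 if 14 <= t < 20 else 1.0   # assumed stressor window

    recruitment = 8.0 * pandemic_shock              # inflow rises under social stress
    exits = 0.08 * min(victims, service_capacity)   # outflow limited by service capacity
    capacity_adjustment = 0.05 * (victims - service_capacity)  # delayed provisioning response

    victims += dt * (recruitment - exits)
    service_capacity += dt * capacity_adjustment
    history.append((t, victims, service_capacity))

for t, v, c in history[:: int(6 / dt)]:             # print every six "months"
    print(f"t={t:5.1f}  victims={v:7.1f}  capacity={c:6.1f}")
```

The delayed capacity adjustment is what produces the rebound-and-oscillation behavior the abstract hypothesizes; experimenting with the adjustment rate is one way to explore the information feedback leverage point mentioned above.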

Keywords: pandemics, provisioning systems, sex trafficking, system dynamics modeling

Procedia PDF Downloads 59
3905 A Clustering-Based Approach for Weblog Data Cleaning

Authors: Amine Ganibardi, Cherif Arab Ali

Abstract:

This paper addresses the data cleaning issue as part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers within log files reflect usage activity, i.e., end-users' clicks and underlying user-agents' hits. As Web Usage Mining is interested in end-users' behavior, user-agents' hits are referred to as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: a server records requests interlaced in sequential order regardless of their source or type, and website resources may be set up as requestable interchangeably by end-users and user-agents. The current methods are content-centric, based on filtering heuristics of relevant/irrelevant items in terms of some cleaning attributes, i.e., website resources' filetype extensions, website resources pointed to by hyperlinks/URIs, HTTP methods, user-agents, etc. These methods need exhaustive extra-weblog data and prior knowledge of the relevant and/or irrelevant items to be assumed as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web for three reasons: resources may be set up as clickable by end-users regardless of their type, website resources are indexed by frame names without filetype extensions, and web contents are generated and cancelled differently from one end-user to another. In order to overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the level of the requested and referring resources attributes. It is insensitive to logging content and does not need extra-weblog data. The statistical property used takes on the structure of the logging feature generated by webpage requests in terms of clicks and hits. Since a webpage consists of a single URI and several components, this feature results in a single-click-to-multiple-hits ratio in terms of the requested and referring resources. Thus, the clustering-based method is meant to identify two clusters based on the application of an appropriate distance to the frequency matrix of the requested and referring resources. As the ratio of clicks to hits is single to multiple, the clicks cluster is the smallest one in number of requests. Hierarchical agglomerative clustering based on a pairwise distance (Gower) and average linkage has been applied to four logfiles of dynamic/responsive websites whose click-to-hits ratios range from 1/2 to 1/15. The optimal clustering, on the basis of average linkage and maximum inter-cluster inertia, always results in two clusters. The evaluation of the smallest cluster, referred to as the clicks cluster, in terms of confusion matrix indicators results in a 97% true positive rate. The content-centric cleaning methods, i.e., conventional and advanced cleaning, resulted in a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods for dynamic and responsive web design without the need for any extra-weblog data. Such an improvement in cleaning quality is likely to refine dependent analyses.
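
The clustering step can be sketched in a few lines: describe each log entry by frequencies derived from its requested and referring resources, compute a Gower-style distance (for purely numeric features this reduces to range-normalised Manhattan distance), run average-linkage hierarchical clustering, cut into two clusters, and take the smaller cluster as the clicks. The toy log and the exact feature choice below are illustrative assumptions; a log this small will not reproduce the clean separation reported for real logfiles.

```python
# Sketch of the clustering-based cleaning idea: Gower-style distance on a small
# frequency matrix, average-linkage HAC, two clusters, smaller cluster ~ clicks.
# The toy log and features are assumptions for illustration only.
import numpy as np
from collections import Counter
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

log = [  # (requested resource, referring resource) -- placeholder entries
    ("/home", "-"), ("/style.css", "/home"), ("/logo.png", "/home"), ("/app.js", "/home"),
    ("/products", "/home"), ("/style.css", "/products"), ("/banner.png", "/products"),
    ("/app.js", "/products"), ("/contact", "/products"), ("/style.css", "/contact"),
]

req_freq = Counter(r for r, _ in log)            # how often a resource is requested
ref_freq = Counter(f for _, f in log)            # how often a resource acts as referrer
# Features per entry: (requested resource's frequency as referrer, as requested)
X = np.array([[ref_freq[r], req_freq[r]] for r, _ in log], dtype=float)

# Gower distance for numeric features: mean of range-normalised absolute differences
ranges = X.max(axis=0) - X.min(axis=0)
ranges[ranges == 0] = 1.0
n = len(X)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = np.mean(np.abs(X[i] - X[j]) / ranges)

labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
sizes = Counter(labels)
clicks_cluster = min(sizes, key=sizes.get)       # smaller cluster taken as end-user clicks
for (req, ref), lab in zip(log, labels):
    print(f"{req:12s} (ref {ref:10s}) -> {'click' if lab == clicks_cluster else 'hit'}")
```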

Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data

Procedia PDF Downloads 160
3904 Aerodynamic Interaction between Two Speed Skaters Measured in a Closed Wind Tunnel

Authors: Ola Elfmark, Lars M. Bardal, Luca Oggiano, Håvard Myklebust

Abstract:

Team pursuit is a relatively new event in international long track speed skating. For a single speed skater, aerodynamic drag accounts for up to 80% of the braking force; thus, reducing the drag can greatly improve performance. In a team pursuit the interactions between athletes in near proximity are also essential, but this is not well studied. In this study, systematic measurements of the aerodynamic drag, body posture, and relative positioning of speed skaters have been performed in the low-speed wind tunnel at the Norwegian University of Science and Technology, in order to investigate the aerodynamic interaction between two speed skaters. Drag measurements of static speed skaters drafting, leading, and side-by-side, and dynamic drag measurements in synchronized and unsynchronized movement at different distances, were performed. The projected frontal area was measured for all postures and movements, and a blockage correction was performed, as the blockage ratio ranged from 5-15% in the different setups. The static drag measurements were performed on two test subjects in two different postures, a low posture and a high posture, and at two different distances between the test subjects, 1.5T and 3T, where T is the length of the torso (T = 0.63 m). A drag reduction was observed for all distances and configurations, from 39% to 11.4%, for the drafting test subject. The drag of the leading test subject was only influenced at -1.5T, with the biggest drag reduction being 5.6%. An increase in drag was seen for all side-by-side measurements; the biggest increase observed was 25.7%, at the closest distance between the test subjects, and the lowest was 2.7%, with ~0.7 m between the test subjects. A clear aerodynamic interaction between the test subjects and their postures was observed for most static measurements, with results corresponding well to recent studies. For the dynamic measurements, the leading test subject had a drag reduction of 3% even at -3T. The drafting test subject showed a drag reduction of 15% when in a synchronized motion with the leading test subject at 4.5T. The maximal drag reductions for both the leading and the drafting test subjects were observed when being as close as possible and in sync, with drag reductions of 8.5% and 25.7%, respectively. This study emphasizes the importance of keeping a synchronized movement by showing that the maximal gains for the leading and drafting test subjects dropped to 3.2% and 3.3%, respectively, when the skaters were in opposite phase. Individual differences in technique also appear to influence the drag of the other test subject.
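
For readers unfamiliar with the data reduction involved, the sketch below shows how a measured drag force is turned into a drag coefficient and drag area, with a crude continuity-based estimate of the blockage speed-up. All numbers are placeholders, and this is not the specific blockage correction applied in the study.

```python
# Back-of-the-envelope reduction of a wind-tunnel drag measurement.
# Numbers are placeholders; the continuity-based speed-up is a crude estimate only,
# not the blockage correction actually used in the study.
rho = 1.20            # air density [kg/m^3]
U = 12.0              # nominal tunnel speed [m/s]
F_drag = 18.5         # measured drag force [N] (placeholder)
A_frontal = 0.45      # projected frontal area of the skater [m^2] (placeholder)
A_test_section = 4.5  # tunnel test-section area [m^2] (placeholder)

blockage_ratio = A_frontal / A_test_section        # 5-15% in the setups described above
U_eff = U / (1.0 - blockage_ratio)                  # crude continuity estimate of speed-up
q_eff = 0.5 * rho * U_eff**2                        # corrected dynamic pressure
Cd = F_drag / (q_eff * A_frontal)
CdA = F_drag / q_eff                                # drag area, often reported directly

print(f"blockage ratio = {blockage_ratio:.1%}")
print(f"Cd = {Cd:.3f}, CdA = {CdA:.3f} m^2")
```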

Keywords: aerodynamic interaction, drag force, frontal area, speed skating

Procedia PDF Downloads 117
3903 Lotus Mechanism: Validation of Deployment Mechanism Using Structural and Dynamic Analysis

Authors: Parth Prajapati, A. R. Srinivas

Abstract:

The purpose of this paper is to validate the concept of the Lotus Mechanism using Computer Aided Engineering (CAE) tools considering the statics and dynamics through actual time dependence involving inertial forces acting on the mechanism joints. For a 1.2 m mirror made of hexagonal segments, with simple harnesses and three-point supports, the maximum diameter is 400 mm, minimum segment base thickness is 1.5 mm, and maximum rib height is considered as 12 mm. Manufacturing challenges are explored for the segments using manufacturing research and development approaches to enable use of large lightweight mirrors required for the future space system.

Keywords: dynamics, manufacturing, reflectors, segmentation, statics

Procedia PDF Downloads 353
3902 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems

Authors: Joachim F. Sartor

Abstract:

According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious areas, which possess a surface drainage system and empty into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because the impact may be significant. Since there is a lack of sufficient measurements of storm-runoff events for such catchments, and hence of verified simulation methods to analyze their design flows, German standards give only general advice and demand special considerations in such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics. With increasing area size, their hydrological behavior approximates that of rural catchments, e.g., sub-surface flow may prevail and lag times are comparably long. There are few observed peak flow values and only simple (mostly empirical) approaches offered by the literature for Central Europe. Most of them are at least helpful to cross-check results achieved by simulation lacking calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany, with catchment areas between 0.33 and 1.07 km², the author investigated, by multiple-event simulation, three different approaches to determine the rainfall excess. These are the modified SCS variable runoff coefficient methods by Lutz and Zaiß as well as the soil moisture model by Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from the recommendations of M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two runoff coefficient methods showed results that are of sufficient accuracy for most practical purposes. The soil moisture model showed no significantly better results, at least not to such a degree that it would justify the additional data collection that its parameter determination requires. In particular, typical convective summer events after long dry periods, which are often decisive for sewer networks (not so much for rivers), showed discrepancies between simulated and measured flow hydrographs.
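
To illustrate the general rainfall-excess plus runoff-concentration workflow discussed above, the sketch below combines the classical SCS curve-number relation (not the modified Lutz/Zaiß coefficient methods themselves) with a single linear reservoir in place of the parallel reservoir cascades. The hyetograph, curve number, storage constant, and catchment area are assumed placeholder values.

```python
# Illustrative rainfall-excess and runoff-concentration sketch: classical SCS
# curve-number excess routed through one linear reservoir. Parameters are placeholders.
import numpy as np

def scs_rainfall_excess(p_cum_mm, cn):
    """Cumulative rainfall excess [mm] from cumulative precipitation [mm]."""
    s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
    ia = 0.2 * s                      # initial abstraction
    return np.where(p_cum_mm > ia, (p_cum_mm - ia) ** 2 / (p_cum_mm - ia + s), 0.0)

dt_h = 0.25                                     # 15-minute time step [h]
rain = np.array([0, 2, 6, 10, 8, 4, 2, 1, 0, 0], dtype=float)   # hyetograph [mm/step]
p_cum = np.cumsum(rain)
pe_cum = scs_rainfall_excess(p_cum, cn=72.0)    # CN = 72 is an assumed value
excess = np.diff(pe_cum, prepend=0.0)           # incremental excess per step [mm]

# Runoff concentration: single linear reservoir, dQ/dt = (I - Q) / K
K_h, area_km2 = 1.5, 0.8
q = 0.0
for i, ex in enumerate(excess):
    inflow = ex / 1000.0 * area_km2 * 1e6 / (dt_h * 3600.0)   # excess as inflow in m^3/s
    q += dt_h / K_h * (inflow - q)                            # explicit Euler step
    print(f"t = {i * dt_h:4.2f} h   excess = {ex:4.1f} mm   Q = {q:6.2f} m^3/s")
```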

Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage

Procedia PDF Downloads 131
3901 Optics Meets Microfluidics for Highly Sensitive Force Sensing

Authors: Iliya Dimitrov Stoev, Benjamin Seelbinder, Elena Erben, Nicola Maghelli, Moritz Kreysing

Abstract:

Despite the revolutionizing impact of optical tweezers in materials science and cell biology up to the present date, trapping has so far extensively relied on specific material properties of the probe and local heating has limited applications related to investigating dynamic processes within living systems. To overcome these limitations while maintaining high sensitivity, here we present a new optofluidic approach that can be used to gently trap microscopic particles and measure femtoNewton forces in a contact-free manner and with thermally limited precision.

Keywords: optofluidics, force measurements, microrheology, FLUCS, thermoviscous flows

Procedia PDF Downloads 149
3900 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramón y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable, representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Timing-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same thing through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
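
The core mechanism, a LIF neuron whose effective synaptic weight is modulated by the recent activity of the other synapses through a pairwise relation matrix, can be illustrated with a hand-rolled NumPy sketch. This is a simplified stand-in for the authors' Bindsnet implementation; the time constants, random relation values, and the one-spike-per-step input protocol are assumptions.

```python
# Minimal sketch of a LIF neuron whose post-synaptic impact is modulated by the recent
# activity of other synapses via a pairwise "relation" matrix (short-term plasticity).
# Hand-rolled illustration with assumed parameters, not the authors' Bindsnet code.
import numpy as np

rng = np.random.default_rng(1)
n_in, dt, tau_m, tau_trace = 5, 1.0, 20.0, 10.0     # ms
v_rest = 0.0

weights = np.full(n_in, 0.35)
relation = rng.uniform(-0.5, 0.5, size=(n_in, n_in))   # pairwise synaptic relations
np.fill_diagonal(relation, 0.0)

def run(spike_order):
    """Drive the neuron with one input spike per step, in the given synapse order."""
    v, trace = v_rest, np.zeros(n_in)
    for syn in spike_order:
        # Effective weight depends on the recent activity of the *other* synapses.
        w_eff = weights[syn] * (1.0 + relation[syn] @ trace)
        v += w_eff
        trace *= np.exp(-dt / tau_trace)        # decay of presynaptic activity traces
        trace[syn] += 1.0
        v += (v_rest - v) * dt / tau_m          # membrane leak
    return v

order = [0, 1, 2, 3, 4]
print(f"final membrane potential, forward order : {run(order):.3f}")
print(f"final membrane potential, reversed order: {run(order[::-1]):.3f}")
# With the relation matrix set to zero (a plain LIF neuron and equal weights), both
# orders give identical values; the pairwise relations make the response order-sensitive.
```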

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 111
3899 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation, and their accuracy, precision, recall, f1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
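
A minimal transfer-learning sketch of the classification side of this pipeline is shown below in Keras, using a frozen pre-trained DenseNet201 backbone and a small dense head for the three classes. It is a simplified stand-in only: the paper's full model also includes an autoencoder stage, image cropping, and an improved U-Net for lung segmentation, which are omitted here, and the layer sizes are assumptions.

```python
# Simplified DenseNet201 transfer-learning classifier for 3-class chest X-rays
# (COVID-19 / pneumonia / normal). Head sizes are assumptions; the autoencoder and
# U-Net segmentation stages from the paper are not reproduced here.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3
IMG_SHAPE = (224, 224, 3)

backbone = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
backbone.trainable = False                      # freeze the pre-trained features

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)   # datasets not defined here
```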

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 109
3898 The Physical and Physiological Profile of Professional Muay Thai Boxers

Authors: Lucy Horrobin, Rebecca Fores

Abstract:

Background: Muay Thai is an increasingly popular combat sport worldwide. Further academic research in the sport will contribute to its professional development. This research sought to produce normative data on the physical and physiological characteristics of professional Muay Thai boxers, as currently no such data exist. The ultimate aim is to inform appropriate training programs and to facilitate coaching. Methods: N = 9 professional, adult, male Muay Thai boxers were assessed for the following anthropometric, physical and physiological characteristics, using validated methods of assessment: body fat, hamstring flexibility, maximal dynamic upper body strength, lower limb peak power, upper body muscular endurance and aerobic capacity. Raw data were analysed for mean, range and SD and, where applicable, were expressed relative to body mass (BM). Results: The results showed characteristics similar to those found in other combat sports. Low percentages of body fat (mean ± SD: 8.54 ± 1.16%) allow for optimal power-to-weight ratios. A highly developed aerobic capacity (mean ± SD: 61.56 ± 5.13 ml·kg⁻¹·min⁻¹) facilitates recovery and power maintenance throughout bouts. Lower limb peak power output values of (mean ± SD) 12.60 ± 2.09 W/kg indicate that Muay Thai boxers are amongst the most powerful combat sport athletes. However, maximal dynamic upper body strength scores of (mean ± SD) 1.14 ± 0.18 kg/kg were only in the 60th percentile of normative data for the general population, and muscular endurance scores (mean ± SD: 31.55 ± 11.95) and flexibility scores (mean ± SD: 19.55 ± 11.89 cm) showed wide standard deviations. These results might suggest that these characteristics are insignificant in Muay Thai or under-developed, perhaps due to deficient training programs. Implications: This research provides the first normative data on the physical and physiological characteristics of Muay Thai boxers. The findings of this study will aid trainers and coaches when designing effective evidence-based training programs. Furthermore, it provides a foundation for further research relating to physiology in Muay Thai. Areas of further study could be determining the physiological demands of a full-rules bout and the effects of evidence-based training programs on performance.

Keywords: fitness testing, Muay Thai, physiology, strength and conditioning

Procedia PDF Downloads 203
3897 The Effect of Annual Weather and Sowing Date on Different Genotype of Maize (Zea mays L.) in Germination and Yield

Authors: Ákos Tótin

Abstract:

In crop production, the most modern hybrids are available to us; therefore, yield and yield stability are determined by the agro-technology. The purpose of the experiment is to adapt modern agro-technology to the new types of hybrids. The long-term experiment was set up in 2015-2016 on chernozem soil in the Hajdúság region (eastern Hungary). The plots were set up at a plant density of 75 thousand plants ha⁻¹. We examined some commonly used hybrids in Hungary. The studies conducted were: germination dynamics, growing dynamics, and the effect of the annual weather on yield. We used three different sowing dates, early, average, and late, and measured how many plants germinated during the germination process. In the experiment, we observed the germination dynamics of 6 hybrids in 4 replications. In each replication, we counted the germinated plants in a 2 m long, 2-row-wide area. Data are shown as the average of the 6 hybrids and 4 replications. Growing dynamics were measured from 10 cm (4-6 leaf) plant height. We measured the height of 10 plants at two-week intervals. The yield was measured by a special plot harvester, the Sampo Rosenlew 2010, which measured the weight of the harvested plot and also took a sample from it. We determined the water content of the samples for the water release dynamics. After that, we calculated the yield (t/ha) of each plot at 14% moisture content to compare them. We evaluated the data using Microsoft Excel 2015. The annual weather in each crop year defines the maize germination dynamics because the amount of heat is determinative for the plants. In a cooler crop year, the weather prolongs germination. In the 2015 crop year the weather was cold at the beginning, which prolonged the germination of the first sowing, but the second and third sowings germinated faster. In the 2016 crop year the weather was much more favorable for the plants, so the first sowing germinated faster than in the previous year. After that, the weather cooled down; therefore, the second and third sowings germinated more slowly than in the previous year. The statistical data analysis determined that there is a significant difference between the growing dynamics of the early and late sowing dates. In 2015, the first sowing date had the highest yield. The second biggest yield was obtained at the average sowing time. The late sowing date had the lowest yield.

Keywords: germination, maize, sowing date, yield

Procedia PDF Downloads 215
3896 Revolutionary Solutions for Modeling and Visualization of Complex Software Systems

Authors: Jay Xiong, Li Lin

Abstract:

Existing software modeling and visualization approaches using UML are outdated; they are outcomes of reductionism and the superposition principle, which holds that the whole of a system is the sum of its parts, so that with them all tasks of software modeling and visualization are performed linearly, partially, and locally. This paper introduces revolutionary solutions for the modeling and visualization of complex software systems, which make complex software systems much easier to understand, test, and maintain. The solutions are based on complexity science, offering holistic, automatic, dynamic, virtual, and executable approaches that are about a thousand times more efficient than the traditional ones.

Keywords: complex systems, software maintenance, software modeling, software visualization

Procedia PDF Downloads 381
3895 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure

Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek

Abstract:

Background: A number of approaches to assessing the performance of health systems have been proposed so far. Nonetheless, they lack consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but these are not free of controversies and have not produced a commonly accepted consensus. The aim of the study: On the basis of the WHO and OECD approaches, we decided to develop our own methodology to assess the performance of health systems in Central and Eastern European countries. We have applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of changes in terms of health system outcomes. Methods: Data were collected over a 25-year period after the fall of communism, subsetted into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and the Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of changes over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary and Slovenia, while the lowest values were found in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in terms of the range of SMO value changes throughout the analyzed period. The dynamics of change are high in the case of Estonia and Latvia, moderate in the case of Poland, Hungary, the Czech Republic, Croatia, Russia and Moldova, and small when it comes to Belarus, Ukraine, Macedonia, Lithuania, and Georgia. This information reveals the fluctuation dynamics of the measured value over time, yet it does not necessarily mean that within such a dynamic range an improvement appears in a given country. In reality, countries moved along the scale with different effects. Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost distance to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving their outcomes. Conclusions: Countries that decided to implement comprehensive health reform achieved a positive result in terms of further improvements in health system efficiency levels. Besides, a higher level of efficiency during the initial transition period generally positively determined the subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcome improvement are highly diverse between countries. The instrument we propose constitutes a useful tool to evaluate the effectiveness of reform processes in post-communist countries, but more studies are needed to identify the factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
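
The zeroed unitarization step behind the SMO is essentially direction-aware min-max scaling followed by aggregation. The sketch below shows the mechanics on placeholder indicators and country values; the study's actual indicator set, weighting, and data are not reproduced.

```python
# Sketch of a synthetic measure built by zeroed unitarization (min-max scaling with
# stimulant/destimulant handling) and simple averaging. Indicators and values are
# placeholders, not the study's dataset.
import pandas as pd

data = pd.DataFrame({
    "life_expectancy":    [74.1, 77.8, 71.2, 75.9],   # stimulant (higher is better)
    "infant_mortality":   [6.2, 3.1, 11.4, 4.8],      # destimulant (lower is better)
    "amenable_mortality": [180, 120, 260, 150],       # destimulant
}, index=["Country A", "Country B", "Country C", "Country D"])

destimulants = {"infant_mortality", "amenable_mortality"}

normalized = pd.DataFrame(index=data.index)
for col in data.columns:
    span = data[col].max() - data[col].min()
    if col in destimulants:
        normalized[col] = (data[col].max() - data[col]) / span
    else:
        normalized[col] = (data[col] - data[col].min()) / span

smo = normalized.mean(axis=1)        # unweighted synthetic measure in [0, 1]
print(smo.sort_values(ascending=False))
```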

Keywords: health system outcomes, health reforms, health system assessment, health system evaluation

Procedia PDF Downloads 270
3894 Democracy as a Curve: A Study on How Democratization Impacts Economic Growth

Authors: Henrique Alpalhão

Abstract:

This paper attempts to model the widely studied relationship between a country's economic growth and its level of democracy, with an emphasis on possible non-linearities. We adopt the concept of 'political capital' as a measure of democracy, which is extremely uncommon in the literature and brings considerable advantages both in terms of dynamic considerations and plausibility. While the literature has not reached consensus on this matter, we obtain, via panel Arellano-Bond regression analysis on a database of more than 60 countries over 50 years, significant and robust results indicating that the impact of democratization on economic growth varies according to the stage of democratic development each country is in.

Keywords: democracy, economic growth, political capital, political economy

Procedia PDF Downloads 301
3893 Serious Digital Video Game for Solving Algebraic Equations

Authors: Liliana O. Martínez, Juan E González, Manuel Ramírez-Aranda, Ana Cervantes-Herrera

Abstract:

A mobile application in the serious game category, called Math Dominoes, is presented. The main objective of this application is to strengthen the teaching-learning process of solving algebraic equations; it is based on the board game "Double 6" dominoes. Math Dominoes allows the practice of solving first-, second-, and third-degree algebraic equations. This application is aimed at students who seek to strengthen their skills in solving algebraic equations in a dynamic, interactive, and fun way, in order to reduce the risk of failure in subsequent courses that require mastery of this algebraic tool.
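
For context, the solver logic such an app needs for first-, second-, and third-degree equations is straightforward with a computer algebra library. The sketch below uses SymPy with arbitrary sample equations; it illustrates the idea only and is not the application's actual implementation.

```python
# Sketch of solver logic for first-, second-, and third-degree equations using SymPy.
# Sample equations are arbitrary; this is not the app's actual code.
import sympy as sp

x = sp.symbols("x")
equations = [
    sp.Eq(3 * x - 7, 2),                     # first degree
    sp.Eq(x**2 - 5 * x + 6, 0),              # second degree
    sp.Eq(x**3 - 6 * x**2 + 11 * x - 6, 0),  # third degree
]

for eq in equations:
    degree = sp.degree(eq.lhs - eq.rhs, gen=x)
    roots = sp.solve(eq, x)
    print(f"degree {degree}: {eq}  ->  x = {roots}")
```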

Keywords: algebra, equations, dominoes, serious games

Procedia PDF Downloads 109
3892 Exploring Barriers to Social Innovation: Swedish Experiences from Nine Research Circles

Authors: Claes Gunnarsson, Karin Fröding, Nina Hasche

Abstract:

Innovation is a necessity for the evolution of societies, and it is also a driving force in human life that leverages value creation among cross-sector participants in various network arrangements. Social innovations can be characterized as the creation and implementation of a new solution to a social problem, which is more effective and sustainable than existing solutions in terms of improving society's conditions and, in particular, social inclusion processes. However, barriers exist which may restrict the potential of social innovations to live up to their promise as a driving force promoting societal welfare. The literature points at difficulties in tackling social problems primarily related to problem complexity, access to networks, and lack of financial muscle. Further research is warranted to clarify these barriers in detail, also in connection with the interplay of institutional logics in the development of cross-sector collaborations in networks, and the organizing processes needed to achieve breakthroughs against innovation barriers. There is also a need to further elaborate how obstacles that create a gap between the actual and desired state of innovative value-creating service systems can be overcome. The purpose of this paper is to illustrate barriers to social innovation, based on qualitative content analysis of 36 dialogue-based seminars (i.e., research circles) with nine Swedish focus groups including more than 90 individuals representing civil society organizations, private business, municipal offices, and politicians, and to analyze patterns that reveal the constituents of barriers to social innovation. The paper draws on central aspects of innovation barriers as discussed in the literature and analyzes barriers basically related to internal/external and tangible/intangible characteristics. The findings of this study are that existing institutional structures highly influence the transformative potential of social innovations, as well as networking conditions in terms of building a competence-propelled strategy, which serves as a springboard for overcoming barriers of competence extension. Both theoretical and practical knowledge will contribute to how policy-makers and SI practitioners can facilitate and support social innovation processes to be contextually adapted and implemented across areas and sectors.

Keywords: barriers, research circles, social innovation, service systems

Procedia PDF Downloads 237
3891 Walking in a Weather rather than a Climate: Critique on the Meta-Narrative of Buddhism in Early India

Authors: Yongjun Kim

Abstract:

Since agreement was reached on the historicity of the historical Buddha in eastern India, the beginning, heyday, and decline of Buddhism in Early India have been discussed in the context of urbanization, commercialism, and state formation, in short, within a Weberian socio-political frame. Recent scholarship, notably in archaeology and anthropology, has proposed a 're-materialization of Buddhism in Early India' based on what Buddhists actually did rather than what they should have done according to canonical teachings or philosophies. But its historical narrations still remain within a domain of socio-political meta-narrative, which tends to unjustifiably dismiss the naturally existing heterogeneity and often chaotic dynamics of diverse agencies, landscape perceptions, localized traditions, etc. The author will argue for a multiplicity of theoretical standpoints in reconstructing Buddhism in Early India. For this, the diverse agencies, localized traditions, and landscape patterns of Buddhist communities and monasteries in Trans-Himalayan regions, focusing on the Zanskar Valley and Spiti Valley in India, will first be illustrated based on the author's fieldwork. The author will then discuss how this anthropological landscape analysis is better combined with textual and archaeological evidence on the tension between urban monastic and forest Buddhism, the phenomena of sacred landscapes, cemeteries, gardens, and natural caves along with socio-economic landscapes, and the demographic heterogeneity in Early India. Finally, a comparison will be attempted between the anthropological landscape of the present Trans-Himalaya and the archaeological one of ancient Western India. The study of Buddhism in Early India has hardly been discussed through multivalent theoretical archaeology and the anthropology of religion; thus traditional and recent scholarship have produced historical meta-narratives, though heterogeneous among themselves. Multidisciplinary approaches combining textual criticism, archaeology, and anthropology will surely help to deconstruct the grand and all-encompassing historical description of Buddhism in Early India and then to reconstruct localized, behavioral, and multivalent narratives. This paper expects to highlight the importance of lesser-studied Buddhist archaeological sites and dynamic views on the religious landscape in Early India with the help of a critical anthropology of religion.

Keywords: analogy by living traditions, Buddhism in Early India, landscape analysis, meta-narrative

Procedia PDF Downloads 317
3890 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted. This can therefore be reduced by compressing the frames before sending. Using standard compression algorithms like JPEG can result in only minor size reduction. Since the images to be compressed are consecutive camera frames, there will not be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but the implementation of WebGL limits the precision of floating-point numbers to 16 bits on most devices. This can introduce noise in the image due to rounding errors, which will add up eventually. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame. This eliminates the need for floating-point subtraction, thereby cutting down on noise. The change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can cause a hit on bandwidth and server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed so that it can be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to effectively spread the load and thereby scale horizontally. This is achieved by isolating client connections into different processes.
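
The change-detection idea in part 1) can be sketched on the CPU with NumPy/SciPy: compare consecutive frames via a weighted-average (kernelled) difference, transmit only the changed pixels, and let the receiver patch them into its copy of the previous frame. The production scheme runs in WebGL shaders; the kernel weights, threshold, and synthetic frames below are assumptions for illustration, and sub-threshold changes are simply dropped (the scheme is lossy by design).

```python
# CPU sketch of weighted-average-difference change detection between consecutive
# frames. Kernel, threshold, and synthetic frames are assumptions for illustration.
import numpy as np
from scipy.ndimage import convolve

def changed_mask(prev, curr, threshold=8.0):
    """Boolean mask of pixels whose weighted neighbourhood difference exceeds threshold."""
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float)
    kernel /= kernel.sum()
    diff = np.abs(curr.astype(float) - prev.astype(float))
    weighted = convolve(diff, kernel, mode="nearest")   # weighted average difference
    return weighted > threshold

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(120, 160)).astype(np.uint8)
curr = prev.copy()
curr[40:60, 70:100] = rng.integers(0, 256, size=(20, 30)).astype(np.uint8)  # moving region

mask = changed_mask(prev, curr)
payload = curr[mask]                         # only the changed pixels are sent
print(f"changed pixels: {mask.sum()} of {mask.size} "
      f"({100 * mask.sum() / mask.size:.1f}% of the frame)")

# Receiver side: reuse the previous frame and patch in the transmitted pixels.
reconstructed = prev.copy()
reconstructed[mask] = payload
```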

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 58
3889 Monitoring and Prediction of Intra-Crosstalk in All-Optical Network

Authors: Ahmed Jedidi, Mesfer Mohammed Alshamrani, Alwi Mohammad A. Bamhdi

Abstract:

Optical performance monitoring and optical network management are essential in building a reliable, high-capacity, service-differentiation-enabled all-optical network (AON). One of the serious problems in such a network is the fact that optical crosstalk is additive, and thus the aggregate effect of crosstalk over a whole AON may be more harmful than a single point of crosstalk, resulting in a severe degradation of the Quality of Service (QoS) of the network. It is therefore necessary to identify and monitor these impairments across the whole network. To this end, this paper presents a new system to identify and monitor crosstalk in AONs in real time. In particular, it proposes a new technique to manage intra-crosstalk with the objective of maintaining the QoS of the network.
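To illustrate the additive nature of crosstalk that the abstract refers to, the sketch below aggregates per-node crosstalk ratios along a lightpath by summing them in the linear power domain. This is a generic illustration of why the aggregate effect is worse than any single contribution, not the monitoring technique proposed by the authors.

```python
import math

def aggregate_crosstalk_db(node_crosstalk_db):
    """Aggregate per-node crosstalk ratios (dB, negative values) over a lightpath.

    Crosstalk contributions add in the linear power domain, so the aggregate
    level over the whole path is higher (worse) than any single contribution."""
    linear_sum = sum(10 ** (xt / 10.0) for xt in node_crosstalk_db)
    return 10.0 * math.log10(linear_sum)

# Example: four cross-connects each leaking -30 dB of intra-band crosstalk
print(aggregate_crosstalk_db([-30.0, -30.0, -30.0, -30.0]))  # about -24 dB
```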

Keywords: all-optical networks, optical crosstalk, optical cross-connect, monitoring crosstalk

Procedia PDF Downloads 434
3888 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g. light, flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the usage of small-scale devices such as guide-vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization as the vorticity is strongly localized, an implicit treatment of the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in certain regions of interest. In this paper different strategies are presented to extend the conventional VPM so as to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control both the global and the local number of particles. Finally, these methods are applied to a test case, and the improvements in efficiency as well as accuracy of the proposed extensions are presented, along with their relevant applications.
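As a point of reference for the O(Np²) cost mentioned above, the following sketch shows a direct 2D vortex-particle velocity evaluation and a convection step with optional temporal sub-stepping. It is a schematic illustration under simple assumptions (regularized point vortices, forward Euler time stepping), not the authors' solver; all names and parameters are illustrative.

```python
import numpy as np

def induced_velocity(pos, circ, eps=1e-3):
    """Direct O(N^2) Biot-Savart evaluation for 2D regularized point vortices.

    pos : (N, 2) particle positions, circ : (N,) circulations.
    eps smooths the kernel near particle centres (vortex-blob regularization)."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]   # x_i - x_j
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]   # y_i - y_j
    r2 = dx * dx + dy * dy + eps * eps
    k = circ[None, :] / (2.0 * np.pi * r2)         # Gamma_j / (2 pi r^2)
    u = np.sum(-k * dy, axis=1)
    v = np.sum(k * dx, axis=1)
    return np.stack([u, v], axis=1)

def convect(pos, circ, u_inf, dt, substeps=1):
    """Convect particles with free-stream plus induced velocity.

    'substeps' illustrates temporal sub-stepping: particles in regions needing
    higher accuracy can be advanced with a smaller local time step."""
    h = dt / substeps
    for _ in range(substeps):
        vel = induced_velocity(pos, circ) + u_inf
        pos = pos + h * vel                        # forward Euler per sub-step
    return pos

# Example: two counter-rotating vortices in a unit free stream
pos = np.array([[0.0, 0.5], [0.0, -0.5]])
circ = np.array([1.0, -1.0])
print(convect(pos, circ, np.array([1.0, 0.0]), dt=0.1, substeps=4))
```

The doubly nested pairwise evaluation is where the quadratic cost arises, which is what the adaptation strategies in the paper aim to mitigate.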

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 243
3887 Emerging Technologies for Learning: In Need of a Pro-Active Educational Strategy

Authors: Pieter De Vries, Renate Klaassen, Maria Ioannides

Abstract:

This paper is about explorative research into the use of emerging technologies for teaching and learning in higher engineering education. The assumption is that these technologies and applications, which are not yet widely adopted, will help to improve education and thereby help address the skills mismatch troubling our industries. Technologies such as 3D printing, the Internet of Things, Virtual Reality, and others are in a dynamic state of development, which makes it difficult to grasp their value for education. Also, the instruments in current educational research seem inappropriate for assessing the value of such technologies. This explorative research aims to foster an approach to better deal with this new complexity. The need to find out is urgent, because these technologies will be dominantly present in the near future in all aspects of life, including education. The methodology used in this research comprised an inventory of emerging technologies and tools that potentially give way to innovation and are used, or about to be used, in technical universities. The inventory was based on both a literature review and a review of reports and web resources such as blogs, and included a series of interviews with stakeholders in engineering education and at representative industries. In addition, a number of small experiments were executed with the aim of analyzing the requirements for the use of, in this case, Virtual Reality and the Internet of Things, in order to better understand the opportunities and limitations in the day-to-day learning environment. The major findings indicate that it is rather difficult to decide on the value of these technologies for education due to their dynamic state of change, and therefore unpredictability, and the lack of a coherent policy at the institutions. Most decisions are made by teachers on an individual basis, who in their micro-environment are not equipped to select, test and ultimately decide about the use of these technologies. Most experience is gained in industry, where the skills to handle these technologies are in high demand. Industry, though, is worried about the inclination and the capability of education to help bridge the skills gap related to the emergence of new technologies. Due to the complexity, the diversity, the speed of development and the decay, education is challenged to develop an approach that can make these technologies work in an integrated fashion. For education to fully profit from the opportunities these technologies offer, it is essential to develop a pro-active strategy and a sustainable approach to frame the development of emerging technologies.

Keywords: emerging technologies, internet of things, pro-active strategy, virtual reality

Procedia PDF Downloads 167
3886 Smart Irrigation Systems and Website-Based Platform for Farmer Welfare

Authors: Anusha Jain, Santosh Vishwanathan, Praveen K. Gupta, Shwetha S., Kavitha S. N.

Abstract:

Agriculture has a major impact on the Indian economy, with the highest employment ratio of any sector in the country. Currently, most traditional agricultural practices and farming methods are manual, which means farmers often do not realize their maximum productivity due to increasing labour costs, inefficient use of water sources leading to wastage of water, and inadequate soil moisture content, subsequently contributing to the food insecurity of the country. This research paper aims to solve this problem by developing a full-fledged web-application-based platform that can associate itself with a microcontroller-based automated irrigation system. The system schedules the irrigation of crops based on real-time soil moisture content, employing soil moisture sensors matched to the crop's requirements and using WSN (Wireless Sensor Network) and M2M (Machine-to-Machine Communication) concepts, thus optimizing the use of the available, limited water resource and thereby maximizing the crop yield. This robust automated irrigation system provides end-to-end automation of the irrigation of crops under any circumstances, such as droughts, irregular rainfall patterns, extreme weather conditions, etc. The platform will also be capable of fostering a nationwide, united farming community and ensuring the welfare of farmers. It is designed to equip farmers with prerequisite knowledge of technology and the latest farming practices in general. To achieve this, the MailChimp mailing service is used, through which the email IDs of interested farmers/individuals are recorded and curated articles on innovations in the world of agriculture are sent to them. In the proposed system, a service is enabled on the platform where nearby crop vendors can enter their pickup locations, accepted prices and other relevant information, enabling farmers to choose their vendors wisely. Along with this, we have created a blogging service that enables farmers and agricultural enthusiasts to share experiences, helpful knowledge, hardships, etc., with the entire farming community. These are some of the many features that the platform has to offer.
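The abstract does not specify the scheduling logic of the irrigation controller. A minimal sketch of threshold-based (hysteresis) control from a soil moisture reading, of the kind such a microcontroller system might run, is given below; the crop profile, thresholds, and function names are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CropProfile:
    """Per-crop soil moisture band (volumetric %, illustrative values only)."""
    name: str
    moisture_low: float    # start irrigating below this level
    moisture_high: float   # stop irrigating above this level

def valve_command(reading: float, valve_open: bool, crop: CropProfile) -> bool:
    """Hysteresis control: open the valve below the low threshold, close it
    above the high threshold, otherwise keep the current state.
    This avoids rapid on/off cycling around a single set point."""
    if reading < crop.moisture_low:
        return True
    if reading > crop.moisture_high:
        return False
    return valve_open

# Example: a tomato plot whose sensor reports 18% volumetric moisture
tomato = CropProfile("tomato", moisture_low=20.0, moisture_high=35.0)
print(valve_command(18.0, valve_open=False, crop=tomato))  # True -> irrigate
```

In a WSN/M2M setup the reading would arrive from a field node and the returned command would be relayed back to the valve actuator, with the web platform logging both.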

Keywords: WSN (wireless sensor networks), M2M (M/C to M/C communication), automation, irrigation system, sustainability, SAAS (software as a service), soil moisture sensor

Procedia PDF Downloads 107