Search results for: elliptic curve digital signature algorithm
776 Assessing the Legacy Effects of Wildfire on Eucalypt Canopy Structure of South Eastern Australia
Authors: Yogendra K. Karna, Lauren T. Bennett
Abstract:
Fire-tolerant eucalypt forests are one of the major forest ecosystems of south-eastern Australia and are thought to be highly resistant to frequent high-severity wildfires. However, the impact of wildfires of different severities on the canopy structure of this fire-tolerant forest type is under-studied, and there are significant knowledge gaps in the assessment of tree- and stand-level canopy structural dynamics and recovery after fire. Assessment of canopy structure is a complex task involving accurate measurements of the horizontal and vertical arrangement of the canopy in space and time. This study examined the utility of multitemporal, small-footprint lidar data to describe the changes in the horizontal and vertical canopy structure of fire-tolerant eucalypt forests seven years after wildfires of different severities, from the tree to the stand level. Extensive ground measurements were carried out in four severity classes to describe and validate canopy cover and height metrics as they change after wildfire. Several metrics, such as crown height and width, crown base height, and crown clumpiness, were assessed at tree and stand level using several individual tree-top detection and measurement algorithms. Persistent effects of high-severity fire on both tree crowns and stand canopy were observed eight years after fire. High-severity fire increased the crown depth but decreased the crown projective cover, leading to a more open canopy.
Keywords: canopy gaps, canopy structure, crown architecture, crown projective cover, multi-temporal lidar, wildfire severity
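As an illustration of the canopy cover metrics discussed, crown projective cover is often approximated from lidar as the fraction of first returns above a height cutoff. The 2 m threshold and the sample heights below are assumptions for illustration; the paper's exact metric definitions are not given in the abstract.

```python
def canopy_cover(first_return_heights, threshold=2.0):
    """Approximate crown projective cover as the fraction of lidar
    first returns above a height threshold (2 m is a common convention;
    an assumption here, not the paper's stated cutoff)."""
    above = sum(1 for h in first_return_heights if h > threshold)
    return above / len(first_return_heights)

# hypothetical first-return heights (m) over one plot
cover = canopy_cover([0.1, 0.5, 3.2, 10.4, 15.0, 1.1, 8.7, 0.0])
```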
Procedia PDF Downloads 175
775 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain
Authors: Zachary Blanks, Solomon Sonya
Abstract:
Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching.
This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area under the curve increase of approximately 3% relative to previous prediction schedules based on entire seasons.
Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection
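Of the imputation methods mentioned, predictive mean matching can be sketched in a few lines: fit a simple model on the complete cases, then impute each missing value with the observed value of a donor whose predicted mean is closest. The linear model, donor pool size `k`, and the toy data below are illustrative assumptions, not the paper's implementation.

```python
import random

def predictive_mean_matching(x, y, k=3, seed=0):
    """Impute missing y values (None) by predictive mean matching:
    fit a least-squares line on complete cases, then for each missing
    case borrow the observed y of a donor whose predicted value is
    among the k closest predictions."""
    rng = random.Random(seed)
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in obs) / \
        sum((xi - mx) ** 2 for xi, _ in obs)
    a = my - b * mx
    pred = lambda xi: a + b * xi
    imputed = []
    for xi, yi in zip(x, y):
        if yi is not None:
            imputed.append(yi)
        else:
            # donors ranked by closeness of predicted means
            donors = sorted(obs, key=lambda o: abs(pred(o[0]) - pred(xi)))[:k]
            imputed.append(rng.choice(donors)[1])
    return imputed

x = [1, 2, 3, 4, 5, 6]
y = [2.1, None, 6.2, 8.0, None, 12.1]
filled = predictive_mean_matching(x, y)
```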
Procedia PDF Downloads 292
774 An Energy-Balanced Clustering Method on Wireless Sensor Networks
Authors: Yu-Ting Tsai, Chiun-Chieh Hsu, Yu-Chun Chu
Abstract:
In recent years, due to the development of wireless network technology, many researchers have devoted themselves to the study of wireless sensor networks. Applications of wireless sensor networks mainly use sensor nodes to collect the required information and send it back to the users. Since the sensed area is often difficult to reach, there are many restrictions on the design of the sensor nodes, of which the most important is their limited energy. Because of this limited energy, researchers have proposed a number of ways to reduce energy consumption and balance the load of sensor nodes in order to increase the network lifetime. In this paper, we propose the Energy-Balanced Clustering method with Auxiliary Members on Wireless Sensor Networks (EBCAM), based on cluster routing. The main purpose is to balance the energy consumption over the sensed area and even out the distribution of dead nodes, in order to avoid excessive energy consumption caused by increasing transmission distances. In addition, we use the residual energy and average energy consumption of the nodes within the cluster to choose the cluster heads, use a multi-hop transmission method to deliver the data, and dynamically adjust the transmission radius according to the load conditions. Moreover, we use the auxiliary cluster members to change the delivery path according to the residual energy of the cluster head, in order to balance its load. Finally, we compare the proposed method with related algorithms via simulated experiments and analyze the results. The results reveal that the proposed method outperforms the other algorithms in the number of rounds sustained and the average energy consumption.
Keywords: auxiliary nodes, cluster, load balance, routing algorithm, wireless sensor network
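The cluster-head criterion described (residual energy together with average energy consumption within the cluster) could be reduced to a score per node; the abstract does not give the exact formula, so the ratio used below is an illustrative assumption.

```python
def choose_cluster_head(nodes):
    """Pick the node whose residual energy is highest relative to its
    average per-round consumption (hypothetical scoring; the paper's
    exact formula is not stated in the abstract)."""
    return max(nodes, key=lambda n: n["residual"] / n["avg_consumption"])

# hypothetical cluster state: residual energy and average consumption (J)
cluster = [
    {"id": 1, "residual": 0.8, "avg_consumption": 0.05},
    {"id": 2, "residual": 0.6, "avg_consumption": 0.02},
    {"id": 3, "residual": 0.9, "avg_consumption": 0.09},
]
head = choose_cluster_head(cluster)
```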
Procedia PDF Downloads 274
773 An Intelligent Prediction Method for Annular Pressure Driven by Mechanism and Data
Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li, Shuo Zhu, Shiming Duan, Xuezhe Yao
Abstract:
Accurate calculation of wellbore pressure is of great significance in preventing wellbore risk during drilling. The traditional mechanism model requires many iterative solving procedures, which reduces calculation efficiency and makes it difficult to meet the demands of dynamic wellbore pressure control. In recent years, many scholars have introduced artificial intelligence algorithms into wellbore pressure calculation, significantly improving its efficiency and accuracy. However, due to the 'black box' property of intelligent algorithms, existing intelligent wellbore pressure models struggle outside the scope of their training data and overreact to data noise, often producing abnormal calculation results. In this study, the multi-phase flow mechanism is embedded into the objective function of the neural network model as a constraint condition, and an intelligent prediction model of wellbore pressure under this constraint is established based on more than 400,000 sets of pressure measurement while drilling (MPD) data. The multi-phase flow constraint makes the predictions of the neural network model more consistent with the distribution law of wellbore pressure, which overcomes the black-box attribute of the neural network model to some extent. The main improvements are that the accuracy on the independent test data set is further increased and the abnormal calculated values essentially disappear. This method is driven jointly by MPD data and the multi-phase flow mechanism, and it is a key route to predicting wellbore pressure accurately and efficiently in the future.
Keywords: multiphase flow mechanism, pressure while drilling data, wellbore pressure, mechanism constraints, combined drive
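Embedding a mechanism constraint in a neural network objective typically means adding a penalty term to the data loss. The sketch below penalizes predicted pressure profiles whose gradient with depth falls under a hydrostatic floor; the specific constraint, the `rho_g` value, and the weight `w` are illustrative assumptions, not the paper's multiphase flow formulation.

```python
def constrained_loss(pred, target, depth, rho_g=0.0098, w=1.0):
    """Data MSE plus a penalty whenever predicted pressure increases
    with depth more slowly than a hydrostatic floor (illustrative
    constraint; rho_g in MPa/m and the weight w are assumed values)."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    penalty = 0.0
    for i in range(1, len(pred)):
        grad = (pred[i] - pred[i - 1]) / (depth[i] - depth[i - 1])
        if grad < rho_g:  # violates the mechanism floor
            penalty += (rho_g - grad) ** 2
    return mse + w * penalty
```

A profile that both matches the data and respects the gradient floor incurs zero loss, while a physically implausible profile is penalized even if it is numerically close to the measurements.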
Procedia PDF Downloads 174
772 A Furniture Industry Concept for a Sustainable Generative Design Platform Employing Robot Based Additive Manufacturing
Authors: Andrew Fox, Tao Zhang, Yuanhong Zhao, Qingping Yang
Abstract:
The furniture manufacturing industry has generally been slow to adopt the latest manufacturing technologies, historically relying heavily upon specialised conventional machinery. This approach not only requires high levels of specialist process knowledge, training, and capital investment but also suffers from significant subtractive manufacturing waste and high logistics costs due to the requirement for centralised manufacturing, with much furniture product neither recycled nor reused. This paper aims to address these problems by introducing suitable digital manufacturing technologies to create step changes in furniture manufacturing design, as traditional design practices have been reported to build in 80% of environmental impact. In this paper, a 3D printing robot for furniture manufacturing is reported. The 3D printing robot mainly comprises a KUKA industrial robot, an Arduino microprocessor, and a self-assembled screw-fed extruder. Compared to a traditional 3D printer, the 3D printing robot has a larger motion range and can be easily upgraded to enlarge the maximum size of the printed object. Generative design is also investigated in this paper, aiming to establish a combined design methodology that allows simultaneous assessment of goals, constraints, materials, and manufacturing processes. 'Matrixing' for part amalgamation and product performance optimisation is enabled. The generative design goals of integrated waste reduction, increased manufacturing efficiency, optimised product performance, and reduced environmental impact institute a truly lean and innovative future design methodology. In addition, there is massive future potential to leverage Single Minute Exchange of Die (SMED) theory through generative design post-processing of geometry for robot manufacture, resulting in 'mass customised' furniture with virtually no setup requirements. These generatively designed products can be manufactured using robot-based additive manufacturing.
Essentially, the 3D printing robot is already functional; some initial goals have been achieved and are also presented in this paper.
Keywords: additive manufacturing, generative design, robot, sustainability
Procedia PDF Downloads 131
771 Methodology and Credibility of Unmanned Aerial Vehicle-Based Cadastral Mapping
Authors: Ajibola Isola, Shattri Mansor, Ojogbane Sani, Olugbemi Tope
Abstract:
The cadastral map is the rationale behind city management planning and development. For years, cadastral maps have been produced by ground and photogrammetry platforms. Recent evolution in photogrammetry and remote sensing sensors has ignited the use of Unmanned Aerial Vehicle systems (UAVs) for cadastral mapping. Despite the time savings and multi-dimensional cost-effectiveness of the UAV platform, issues related to cadastral map accuracy hinder the wide applicability of UAV cadastral mapping. This study aims to present an approach for generating UAV cadastral maps and assessing their credibility. Different sets of Red, Green, and Blue (RGB) photos were obtained from a Tarot 680 hexacopter UAV platform flown over the Universiti Putra Malaysia campus sports complex at altitudes of 70 m, 100 m, and 250 m. Before flying the UAV, twenty-eight ground control points were evenly established in the study area with a real-time kinematic differential global positioning system. The second phase of the study utilizes an image-matching algorithm for photo alignment, wherein camera calibration parameters and ten of the established ground control points were used to estimate the inner, relative, and absolute orientations of the photos. The resulting orthoimages were exported to ArcGIS software for digitization. Visual, tabular, and graphical assessments of the resulting cadastral maps showed different levels of accuracy. The results of the study demonstrate a step-by-step approach for generating UAV cadastral maps and show that the map acquired at 70 m altitude produced the best results.
Keywords: aerial mapping, orthomosaic, cadastral map, flying altitude, image processing
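Credibility of the resulting cadastral maps is typically assessed by comparing digitized coordinates against independently surveyed check points, for instance the ground control points not used in the orientation step. A horizontal RMSE sketch with made-up coordinates is shown below; the paper's exact assessment procedure is not detailed in the abstract.

```python
import math

def horizontal_rmse(surveyed, mapped):
    """Root-mean-square horizontal error between GPS-surveyed check
    points and the same points digitized from the orthoimage."""
    sq = [(sx - mx) ** 2 + (sy - my) ** 2
          for (sx, sy), (mx, my) in zip(surveyed, mapped)]
    return math.sqrt(sum(sq) / len(sq))

# hypothetical check-point coordinates (m)
gcp = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
dig = [(0.1, 0.0), (10.0, -0.1), (10.1, 10.0)]
rmse = horizontal_rmse(gcp, dig)
```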
Procedia PDF Downloads 82
770 Representation of Emotions and Characters in Turkish and Indian Series
Authors: Lienjang Zeite
Abstract:
Over the past few years, Turkish and Indian series have been distributed worldwide to countless households and have found ardent followers across different age groups. The series have captured numerous hearts. Turkish and Indian series have become not only one of the best means of entertainment and relaxation but also a platform to learn and appreciate shared emotions and social messages. The popularity of the series has created a kind of interest in representing human emotions and stories like never before. The demand for such series has shifted the entertainment industry to a new level. The interest and vibe created by the series have had impacts on various sectors, spanning from technology to the fashion industry, and the series have also become a bridge connecting viewers across the globe. The series have amassed avid admirers who find solace in the beautiful visual representations of human relationships, whether of lovers, family, or friends. The influence of Turkish and Indian series in many parts of the world has created a cultural phenomenon that takes viewers beyond cultural and language differences. From China to Latin America, Arab countries, and the Caucasus region, the series have been accepted and loved by millions of viewers. They have captivated audiences ranging from grandmothers to teenagers. Issues like the language barrier are easily solved by means of translation or dubbing, making the series easier to understand and enjoy. Turkey and India are two different countries, each with its own unique culture and traditions. Both countries are large-scale exporters of series. The series function as a platform to reveal plots and shed light on characters of all kinds. Both countries produce series that are more or less similar in nature. However, there are also certain issues that are shown in different ways and in a different light. The paper will discuss how emotions are represented in Turkish and Indian series.
It will also discuss the ways the series have impacted the art of representing emotions and characters in the digital era. The representation of culture through Turkish and Indian series will be explored as well. The paper will also address the issue of gender roles and how relationships are forged or abandoned in the series. The issue of character formation and the importance of moral factors will be discussed. It will also examine the formula and ingredients for turning human emotions and characters into a much-loved series.
Keywords: characters, cultural phenomenon, emotions, Turkish and Indian series
Procedia PDF Downloads 137
769 Quo Vadis, European Football: An Analysis of the Impact of Over-The-Top Services in the Sports Rights Market
Authors: Farangiz Davranbekova
Abstract:
Subject: The study explores the impact of Over-the-Top (OTT) services in the sports rights market, focusing on football games. This impact is analysed in the big five European football markets. The research examines how the pay-TV market is combating the disruptors' entry, how fans are adjusting to these changes, and how leagues and football clubs are orienting themselves in a transitional period of greater choice. Aims and methods: The research aims to offer a general overview of the impact of OTT players in the football rights market. A theoretical framework of Jenkins' five layers of convergence is employed to analyse, from various angles, the transition the sports rights market is witnessing. The empirical analysis consists of secondary research data as well as seven expert interviews from three different clusters. The findings are bound by the combination of the two methods, offering general statements. Findings: The combined secondary data and expert interviews, analysed across the five layers of convergence, found: 1. Technological convergence shows that football content is accessible through various devices with innovative digital features, unlike the traditional TV set-top box. 2. Social convergence demonstrates that football fans multitask across various devices on social media when watching games; these activities complement traditional TV viewing. 3. Cultural convergence indicates that football fans have a new layer of engagement with leagues, clubs, and other fans through social media. Additionally, the lines between production and consumption are blurred. 4. Economic convergence finds that content distribution is diversifying and/or eroding. Consumers now have more choice, albeit this can be harmful to them. Entry barriers are lowered, and bigger clubs feel more powerful. 5. Global convergence shows that football fans engage not only with local fans but, enabled by social media sites, with fans around the world.
Recommendation: A study of smaller markets such as Belgium or the Netherlands would complement this study of the impact of OTT. Additionally, examination of other sports would shed further light on this matter. Lastly, once the direct-to-consumer model has fully taken off in Europe, it will be important to examine the impact of such a transformation on the market.
Keywords: sports rights, OTT, pay TV, football
Procedia PDF Downloads 156
768 Geotechnical Challenges for the Use of Sand-sludge Mixtures in Covers for the Rehabilitation of Acid-Generating Mine Sites
Authors: Mamert Mbonimpa, Ousseynou Kanteye, Élysée Tshibangu Ngabu, Rachid Amrou, Abdelkabir Maqsoud, Tikou Belem
Abstract:
The management of mine wastes (waste rocks and tailings) containing sulphide minerals such as pyrite and pyrrhotite represents the main environmental challenge for the mining industry. Indeed, acid mine drainage (AMD) can be generated when these wastes are exposed to water and air. AMD is characterized by low pH and high concentrations of heavy metals, which are toxic to plants, animals, and humans. It affects the quality of the ecosystem through water and soil pollution. Different techniques involving soil materials can be used to control AMD generation, including impermeable covers (compacted clays) and oxygen barriers. The latter group includes covers with capillary barrier effects (CCBE), a multilayered cover that includes a moisture retention layer playing the role of an oxygen barrier. Once AMD is produced at a mine site, it must be treated so that the final effluent complies with regulations and can be discharged into the environment. Active neutralization with lime is one of the treatment methods used. This treatment produces sludge that is usually stored in sedimentation ponds. Other sludge management alternatives have been examined in recent years, including sludge co-disposal with tailings or waste rocks, disposal in underground mine excavations, and storage in technical landfill sites. Considering the ability of AMD neutralization sludge to maintain an alkaline to neutral pH for decades or even centuries, due to the excess alkalinity induced by residual lime within the sludge, valorization of sludge in specific applications could be an interesting management option. If done efficiently, the reuse of sludge could free up storage ponds and thus reduce the environmental impact. It should be noted that mixtures of sludge and soils could potentially constitute usable materials in CCBE for the rehabilitation of acid-generating mine sites, while sludge alone is not suitable for this purpose.
The high water content of the sludge (up to 300%), even after sedimentation, can, however, constitute a geotechnical challenge. Adding lime to the mixtures can reduce the water content and improve the geotechnical properties. The objective of this paper is to investigate the impact of the sludge content (30, 40, and 50%) in sand-sludge mixtures (SSM) on their hydrogeotechnical properties (compaction, shrinkage behaviour, saturated hydraulic conductivity, and water retention curve). The impact of lime addition (dosages from 2% to 6%) on the moisture content, dry density after compaction, and saturated hydraulic conductivity of SSM was also investigated. Results showed that adding sludge to sand significantly improves the saturated hydraulic conductivity and water retention capacity, but shrinkage increases with sludge content. The dry density after compaction of lime-treated SSM increases with the lime dosage but remains lower than the optimal dry density of the untreated mixtures. The saturated hydraulic conductivity of lime-treated SSM after 24 hours of curing decreases by 3 orders of magnitude. Considering the hydrogeotechnical properties obtained with these mixtures, it would be possible to design a CCBE whose moisture retention layer is made of SSM. Physical laboratory models confirmed the performance of such a CCBE.
Keywords: mine waste, AMD neutralization sludge, sand-sludge mixture, hydrogeotechnical properties, mine site reclamation, CCBE
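The 300% figure refers to gravimetric water content, i.e. the mass of water relative to the mass of dry solids. A minimal worked check, with assumed masses:

```python
def gravimetric_water_content(wet_mass, dry_mass):
    """w = (M_wet - M_dry) / M_dry, in percent; a sludge at 300%
    holds three times its dry solid mass in water."""
    return 100.0 * (wet_mass - dry_mass) / dry_mass

# assumed sample masses in grams, for illustration only
w = gravimetric_water_content(wet_mass=400.0, dry_mass=100.0)
```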
Procedia PDF Downloads 53
767 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body
Authors: J. K Adedeji, O. H Olowomofe, C. O Alo, S.T Ijatuyi
Abstract:
The issue of high blood sugar levels, which may end up as diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people has made this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wrist watch, to give those living with high blood glucose an alert of the danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage (including a bias), 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in the Java language and run for 1000 epochs to bring the errors to the barest minimum. The internal circuitry of the device comprises the compatible hardware requirements that match the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow are used as the outputs of the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that a neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.
Keywords: Accu-Chek, diabetes, neural network, pattern recognition
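The XOR thresholding idea behind the network can be illustrated with a reduced 2-2-1 sigmoid network with hand-picked weights; the paper's 8-15-10 Java implementation is not reproduced here, and the topology and weights below are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(x1, x2):
    """A minimal 2-2-1 network with hand-picked weights that realises
    XOR: the output saturates near 1 only when the inputs differ."""
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # approximates OR
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # approximates NAND
    return sigmoid(20 * h1 + 20 * h2 - 30)  # AND of the two -> XOR
```

In a trained network these weights would instead be found by backpropagation over the 1000 epochs the paper describes.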
Procedia PDF Downloads 147
766 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and business logic time to market. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input to i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35 MB). The system is also more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
Procedia PDF Downloads 179
765 Role of Imaging in Predicting the Receptor Positivity Status in Lung Adenocarcinoma: A Chapter in Radiogenomics
Authors: Sonal Sethi, Mukesh Yadav, Abhimanyu Gupta
Abstract:
The upcoming field of radiogenomics has the potential to upgrade the role of imaging in lung cancer management by noninvasive characterization of tumor histology and genetic microenvironment. Receptor status, such as epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) genotyping, is critical for treatment in lung adenocarcinoma. As conventional identification of receptor positivity is an invasive procedure, we analyzed features on non-invasive computed tomography (CT) that predict receptor positivity in lung adenocarcinoma. Retrospectively, we conducted a comprehensive study of 77 proven lung adenocarcinoma patients with CT images, EGFR and ALK receptor genotyping, and clinical information. In total, 22/77 patients were receptor-positive (15 had only an EGFR mutation, 6 had an ALK mutation, and 1 had both EGFR and ALK mutations). Various morphological characteristics and metastatic distributions on CT were analyzed along with the clinical information. Univariate and multivariable logistic regression analyses were used. On multivariable logistic regression analysis, we found that spiculated margin, lymphangitic spread, air bronchogram, pleural effusion, and distant metastasis had significant predictive value for receptor mutation status. On univariate analysis, air bronchogram and pleural effusion had significant individual predictive value. Conclusions: Receptor-positive lung cancer has characteristic imaging features compared with non-receptor-positive lung adenocarcinoma. Since CT is routinely used in lung cancer diagnosis, we can predict receptor positivity by a noninvasive technique and follow a more aggressive algorithm for the evaluation of distant metastases as well as for treatment.
Keywords: lung cancer, multidisciplinary cancer care, oncologic imaging, radiobiology
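A fitted multivariable logistic regression of the kind described scores a patient by a weighted sum of binary CT findings passed through the logistic function. The coefficients and intercept below are hypothetical placeholders, for illustration only; the abstract reports which features were significant, not their weights.

```python
import math

def receptor_positive_prob(features, coef, intercept):
    """Logistic-regression score: probability of EGFR/ALK positivity
    from binary CT findings (weights are hypothetical, not the
    paper's fitted model)."""
    z = intercept + sum(c * f for c, f in zip(coef, features))
    return 1.0 / (1.0 + math.exp(-z))

# feature order: [spiculated margin, lymphangitic spread,
#                 air bronchogram, pleural effusion, distant metastasis]
coef = [0.9, 0.7, 1.1, 0.8, 0.6]  # hypothetical coefficients
p = receptor_positive_prob([1, 0, 1, 1, 0], coef, intercept=-2.2)
```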
Procedia PDF Downloads 136
764 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms
Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager
Abstract:
This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to build this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained not only to predict but also to elucidate the mechanical and thermal behavior of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, boasting scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.
Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties
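The inverse analysis described couples a forward predictor with a genetic algorithm that searches the parameter space for inputs reproducing a measured response. The sketch below stands in for the trained model with a simple linear mixing rule, and the GA operators (elitist selection, blend crossover, Gaussian mutation) and all numeric settings are illustrative assumptions, not the paper's setup.

```python
import random

def forward_model(vf, contrast):
    """Surrogate for the trained predictor: effective property of a
    two-phase composite by a linear mixing rule (matrix property 1.0,
    inclusion property = contrast). A stand-in, not the paper's model."""
    return 1.0 * (1 - vf) + contrast * vf

def inverse_ga(target, pop_size=40, gens=60, seed=1):
    """Recover (volume fraction, contrast) whose predicted response
    matches a measured effective property, within the paper's ranges."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.05, 0.4), rng.uniform(10, 200))
           for _ in range(pop_size)]
    fitness = lambda ind: -abs(forward_model(*ind) - target)
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # blend crossover plus Gaussian mutation, clipped to bounds
            vf = min(0.4, max(0.05, (a[0] + b[0]) / 2 + rng.gauss(0, 0.01)))
            c = min(200.0, max(10.0, (a[1] + b[1]) / 2 + rng.gauss(0, 2.0)))
            children.append((vf, c))
        pop = parents + children
    return max(pop, key=fitness)

vf, contrast = inverse_ga(target=5.0)
```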
Procedia PDF Downloads 54
763 O-LEACH: The Problem of Orphan Nodes in the LEACH Routing Protocol for Wireless Sensor Networks
Authors: Wassim Jerbi, Abderrahmen Guermazi, Hafedh Trabelsi
Abstract:
Optimal coverage in wireless sensor networks (WSNs) is very important. LEACH (Low Energy Adaptive Clustering Hierarchy) is a hierarchical clustering protocol for wireless sensor networks that allows the formation of distributed clusters. In each round, LEACH randomly selects some sensor nodes as cluster heads (CHs) by means of a probabilistic calculation, and each non-CH node is expected to join a cluster and become a cluster member. Nevertheless, the CHs can become concentrated in one part of the network, so that several sensor nodes cannot reach any CH. To solve this problem, we propose O-LEACH (Orphan-LEACH), whose role is to reduce the number of sensor nodes that belong to no cluster. A cluster member acting as a gateway receives messages from neighboring orphan nodes and informs its CH that these nodes belong to no group; the gateway (denoted CH') then attaches the orphan nodes to the cluster and collects their data. O-LEACH enables a new form of cluster formation, leading to a longer network lifetime and minimal energy consumption, since orphan nodes possess sufficient energy and seek to be covered by the network. The principal novel contribution of the proposed work is the O-LEACH protocol itself, which provides coverage of the whole network with a minimum number of orphan nodes and a very high connectivity rate. As a result, the WSN application receives data from the entire network, including orphan nodes. The proper functioning of the application therefore requires intelligent management of the resources present at each network sensor. Simulation results show that O-LEACH performs better than LEACH in terms of coverage, connectivity rate, energy, and scalability.
Keywords: WSNs, routing, LEACH, O-LEACH, orphan nodes, sub-cluster, gateway, CH'
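The probabilistic CH election that O-LEACH inherits can be made concrete. The sketch below implements the standard LEACH threshold from Heinzelman et al. (it is background to the abstract, not the O-LEACH extension itself): a node that has not served as CH in the last 1/p rounds becomes CH in round r when a uniform draw falls below T(n).

```python
import random

def leach_threshold(p, r, eligible):
    """Standard LEACH: T(n) = p / (1 - p * (r mod 1/p)) if the node has not
    been CH in the last 1/p rounds, else 0 (p = desired CH fraction)."""
    if not eligible:
        return 0.0
    return p / (1.0 - p * (r % round(1.0 / p)))

def elect_cluster_heads(node_ids, p, r, recent_chs, rng=random):
    # Each eligible node independently draws a uniform number; nodes whose
    # draw falls below the round-r threshold self-elect as cluster heads.
    return [n for n in node_ids
            if n not in recent_chs and rng.random() < leach_threshold(p, r, True)]
```

Because the draw is independent per node, CHs can cluster spatially in unlucky rounds, which is exactly the orphan-node situation the O-LEACH gateway mechanism is designed to repair.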
Procedia PDF Downloads 371
762 Materials and Techniques of Anonymous Egyptian Polychrome Cartonnage Mummy Mask: A Multiple Analytical Study
Authors: Hanaa A. Al-Gaoudi, Hassan Ebeid
Abstract:
The research investigates the materials and processes used in the manufacture of an Egyptian polychrome cartonnage mummy mask, with the aim of dating the object and establishing trade patterns for materials that were used and available in ancient Egypt. This object of unknown provenance was held in the basement storage of the Egyptian Museum in Cairo (EMC) and has never been on display; there is no information available regarding its owner, provenance, date, or even when the museum acquired it. Moreover, the object is in very poor condition: almost two-thirds of the mask is bent, and it has never received any previous conservation treatment. This research utilized well-established multi-analytical methods to identify the considerable diversity of materials used in the manufacture of the object. These methods include computed tomography (CT) scanning, to acquire detailed images of the internal physical structure and the condition of the bent layers; a Dino-Lite portable digital microscope, scanning electron microscopy with an energy-dispersive X-ray spectrometer (SEM-EDX), and the non-invasive imaging technique of multispectral imaging (MSI), to obtain information about the physical characteristics and condition of the painted layers and to examine the microstructure of the materials; a portable XRF spectrometer (PXRF) and X-ray powder diffraction (XRD), to identify mineral phases and the bulk elemental composition of the gilded layer, ground, and pigments; Fourier-transform infrared spectroscopy (FTIR), to identify organic compounds and characterize them at the molecular level; and accelerator mass spectrometry (AMS 14C), to date the object. Preliminary results suggest that there are no human remains inside the object and that the textile support consists of linen fibres in a 1/1 tabby weave, these fibres being in very poor condition.
Several pigments have been identified, including Egyptian blue, magnetite, Egyptian green frit, hematite, calcite, and cinnabar; moreover, the gilded layers are pure gold, and the binding medium is gum arabic in the pigments and animal glue in the textile support layer.
Keywords: analytical methods, Egyptian Museum, mummy mask, pigments, textile
Procedia PDF Downloads 125
761 EcoMush: Mapping Sustainable Mushroom Production in Bangladesh
Authors: A. A. Sadia, A. Emdad, E. Hossain
Abstract:
The increasing importance of mushrooms as a source of nutrition and health benefits, and even as a potential cancer treatment, has raised awareness of the impact of climate-sensitive variables on their cultivation. Factors like temperature, relative humidity, air quality, and substrate composition play pivotal roles in shaping mushroom growth, especially in Bangladesh. Oyster mushrooms, a commonly cultivated variety in this region, are particularly vulnerable to climate fluctuations. This research explores the climatic dynamics affecting oyster mushroom cultivation and presents an approach that addresses these challenges, providing tangible solutions to fortify the agro-economy, ensure food security, and promote the sustainability of this crucial food source. Using climate and production data, the study evaluates the performance of three clustering algorithms (KMeans, OPTICS, and BIRCH) on various quality metrics. While each algorithm demonstrates specific strengths, the findings provide insights into their effectiveness for this specific dataset. The results pinpoint an optimal temperature range of 13°C-22°C, an unfavorable temperature threshold of 28°C and above, and an ideal relative humidity range of 75-85%, along with suitable production regions in three different seasons: Kharif-1, Kharif-2, and Robi. Additionally, a user-friendly web application was developed to support mushroom farmers in making well-informed decisions about their cultivation practices. The platform offers valuable insights into the most advantageous periods for oyster mushroom farming, with the overarching goal of enhancing the efficiency and profitability of mushroom farming.
Keywords: climate variability, mushroom cultivation, clustering techniques, food security, sustainability, web application
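The kind of climate clustering the abstract describes can be illustrated with a minimal k-means, standing in for the KMeans/OPTICS/BIRCH comparison. The temperature readings below are hypothetical, not the study's dataset; the point is only how clustering separates favorable from unfavorable conditions such as the 13-22°C versus 28°C-plus bands the study reports.

```python
import random

def kmeans_1d(values, k, iters=100, seed=1):
    """Minimal 1-D k-means (Lloyd's algorithm); returns sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each reading to its nearest center
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:   # converged
            break
        centers = new_centers
    return sorted(centers)

temps = [14, 15, 16, 18, 20, 21, 28, 29, 30, 31]  # hypothetical readings, deg C
centers = kmeans_1d(temps, k=2)
```

On real data the study's quality metrics (and density-based OPTICS) matter precisely because clusters are rarely this cleanly separated.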
Procedia PDF Downloads 68
760 Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization
Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur
Abstract:
In-situ remediation is a technique that can remediate either surface water or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating LNAPL (light non-aqueous phase liquid) contaminated aquifers. Benzene, toluene, ethylbenzene, and xylene, collectively known as BTEX, are the main components of the LNAPL contaminant. In the in-situ bioremediation process, a set of injection and extraction wells is installed: injection wells supply oxygen and other nutrients, which indigenous soil bacteria use to convert BTEX into carbon dioxide and water, while extraction wells check the movement of the plume downstream. In this study, the optimal design of the system has been carried out using the PSO (particle swarm optimization) algorithm. A comprehensive management strategy for pumping at the injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises the determination of pumping rates, the total pumping volume, and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for the injection wells during the initial management period, since this facilitates the availability of oxygen and other nutrients necessary for biodegradation; the rate is low during the third year on account of sufficient oxygen availability, because the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.
Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization
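The PSO step of such a simulation-optimization loop can be sketched as follows. This is not the study's code: the cost function is an assumed stand-in (a simple quadratic in the pumping rates), whereas the real objective couples pumping rates to a BTEX fate-and-transport simulation; the inertia/acceleration constants are common textbook values.

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer minimizing `cost` over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Assumed stand-in cost: deviation of three well pumping rates from a known
# optimum; in the study the cost would come from the aquifer simulation.
target = [40.0, 25.0, 10.0]
best, best_cost = pso(lambda q: sum((qi - ti) ** 2 for qi, ti in zip(q, target)),
                      dim=3, bounds=(0.0, 50.0))
```

The appeal of PSO here is that the simulator can be treated as a black box: only cost evaluations are needed, with no gradients of the transport model.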
Procedia PDF Downloads 445
759 The Psychometric Properties of an Instrument to Estimate Performance in Ball Tasks Objectively
Authors: Kougioumtzis Konstantin, Rylander Pär, Karlsteen Magnus
Abstract:
Ball skills, as a subset of fundamental motor skills, are predictors of performance in sports. Currently, most tools evaluate ball skills using subjective ratings. The aim of this study was to examine the psychometric properties of a newly developed instrument to measure ball-handling skills objectively (the BHS-test) utilizing digital instruments. Participants were a convenience sample of 213 adolescents (age M = 17.1 years, SD = 3.6; 55% female, 45% male) recruited from upper secondary schools and invited to a sports hall for the assessment. The 8-item instrument incorporated both accuracy-based ball skill tests and repetitive-performance tests with a ball. Testers counted performance manually in four of the tests (one throwing and three juggling tasks), while assessment was technologically enhanced in the other four (one balancing and three rolling tasks) utilizing a ball machine, a Kinect camera, and balls with motion sensors. 3D printing technology was used to construct equipment, and all results were administered digitally with smartphones/tablets, computers, and a specially constructed application that sent data to a server. The instrument was deemed reliable (α = .77). Principal component analysis was performed on a random subset (53 of the participants), and latent variable modeling was then employed to confirm the structure with the remaining subset (160 participants). The analysis showed good factor-related validity, with one factor explaining 57.90% of the total variance: four loadings were larger than .80, two more exceeded .76, and the other two were .65 and .49. The one-factor solution was confirmed by a first-order model with one general factor and an excellent fit between model and data (χ² = 16.12, DF = 20; RMSEA = .00, CI90 .00-.05; CFI = 1.00; SRMR = .02), with loadings on the general factor ranging between .65 and .83. Our findings indicate good reliability and construct validity for the BHS-test. To develop the instrument further, more studies are needed with various age groups, e.g. children. We suggest using the BHS-test for diagnostic or assessment purposes in talent development and sports participation interventions that focus on ball games.
Keywords: ball-handling skills, ball-handling ability, technologically-enhanced measurements, assessment
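The one-factor structure reported above can be illustrated with a minimal principal component extraction. The scores below are synthetic (four items driven by one shared latent ability), not the study's data; the code computes the share of total variance explained by the first component, the quantity behind the 57.90% figure, via power iteration on the covariance matrix.

```python
import random

def first_component_variance_share(rows, iters=200):
    """Fraction of total variance explained by the first principal component."""
    n, k = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(k)]
    x = [[r[j] - means[j] for j in range(k)] for r in rows]      # center
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(k)] for a in range(k)]
    v = [1.0] * k
    for _ in range(iters):                # power iteration: leading eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    # Rayleigh quotient gives the leading eigenvalue (component variance)
    eigval = sum(v[a] * sum(cov[a][b] * v[b] for b in range(k)) for a in range(k))
    return eigval / sum(cov[j][j] for j in range(k))

rng = random.Random(0)
# Synthetic one-factor data: each of 4 item scores = shared ability g + noise.
data = []
for _ in range(200):
    g = rng.gauss(0, 1)
    data.append([g + rng.gauss(0, 0.4) for _ in range(4)])
share = first_component_variance_share(data)
```

With this noise level the first component captures roughly 85-90% of the variance; a lower but still dominant share, as in the study, indicates a general ball-handling factor with item-specific variance left over.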
Procedia PDF Downloads 94
758 The Use of a Novel Visual Kinetic Demonstration Technique in Student Skill Acquisition of the Sellick Cricoid Force Manoeuvre
Authors: L. Nathaniel-Wurie
Abstract:
The Sellick manoeuvre, a.k.a. the application of cricoid force (CF), was first described by Brian Sellick in 1961. CF is the application of digital pressure against the cricoid cartilage, the posterior force compressing the oesophagus against the vertebrae. This is designed to prevent passive regurgitation of gastric contents, a major cause of morbidity and mortality during emergency airway management both inside and outside the hospital. To the author's knowledge, there is no universally standardised training modality and, therefore, no reliable way to examine whether outcomes are appropriate. If force is not measured during training, one cannot assume that appropriate, accurate, or precise amounts of force are being used routinely; poor homogeneity in teaching and untested outcomes will correlate with reduced efficacy and increased adverse effects. For this study, the accuracy of force delivery in trained professionals was tested, and outcomes were contrasted against a novice control group and a novice study group. Twenty operating department practitioners were tested (with a mean of 5.3 years of experience performing CF), and 40 novice students were randomised into one of two arms. Arm A had the procedure explained and demonstrated to them and were then asked to perform CF, with the corresponding force measured three times. Arm B followed the same process as Arm A, but before being tested had 10 N and 30 N applied to their hands to build an intuitive understanding of what the required force equates to; they were then asked to apply the equivalent force against a visible force meter and to hold that force for 20 seconds, allowing direct visualisation and correction of any over- or under-estimation. Following this, Arm B were asked to perform the manoeuvre, and the force generated was measured three times. This study shows a wide distribution of force produced by trained professionals and by novices performing the procedure for the first time. Our methodology for teaching the manoeuvre shows improved accuracy, precision, and homogeneity within the group when compared to novices, and even outperforms trained practitioners. In conclusion, if this methodology is adopted, it may correlate with better clinical outcomes, fewer adverse events, and more successful airway management in critical medical scenarios.
Keywords: airway, cricoid, medical education, Sellick
Procedia PDF Downloads 79
757 Rare Diagnosis in Emergency Room: Moyamoya Disease
Authors: Ecem Deniz Kırkpantur, Ozge Ecmel Onur, Tuba Cimilli Ozturk, Ebru Unal Akoglu
Abstract:
Moyamoya disease is a unique chronic progressive cerebrovascular disease characterized by bilateral stenosis or occlusion of the arteries around the circle of Willis, with prominent arterial collateral circulation. The occurrence of Moyamoya disease is related to immune, genetic, and other factors. There is no curative treatment; secondary prevention for patients with symptomatic Moyamoya disease is largely centered on surgical revascularization techniques. We present a 62-year-old male who presented with headache and vision loss of 2 days' duration. He had previously been diagnosed with hypertension and glaucoma. On physical examination, left eye movements were restricted medially, and both eyes were hyperaemic with painful movements; the rest of the neurological and physical examination was normal. His vital signs and laboratory results were within normal limits. Computed tomography (CT) showed dilated vascular structures around both lateral ventricles and atherosclerotic changes in the walls of the internal carotid artery (ICA). Magnetic resonance imaging (MRI) and angiography (MRA) revealed dilated venous vascular structures around the lateral ventricles and hyperintense gliosis in the periventricular white matter. Ischaemic gliosis around the lateral ventricles was also present on digital subtraction angiography (DSA). After neurology, ophthalmology, and neurosurgery consultations, the patient was diagnosed with Moyamoya disease, pulse steroid therapy was started for the vision loss, and super-selective DSA was planned for further investigation. Moyamoya disease is a rare condition, but it can be an important cause of stroke in both children and adults. It generally affects the anterior circulation, but the posterior cerebral circulation may be affected as well; in the differential diagnosis of acute vision loss, occipital stroke related to Moyamoya disease should be considered. Direct and indirect surgical revascularization procedures may be used to effectively revascularize affected brain areas and have been shown to reduce the risk of stroke.
Keywords: headache, Moyamoya disease, stroke, visual loss
Procedia PDF Downloads 267
756 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel
Authors: F. M. Pisano, M. Ciminello
Abstract:
Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies, in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium energy impacts, which can progressively compromise structural integrity. The occurrence of any local change in material properties that can degrade the structure's performance is monitored using so-called Structural Health Monitoring (SHM) systems, which compare the structure's states before and after damage occurs. SHM seeks any "anomalous" response collected by means of sensor networks and then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for information extraction about the current health condition of the structure under investigation. Visual analytics can support the processing of monitored measurements by offering data navigation and exploration tools, leveraging the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system by the integration of a visual analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful visual analytics tool is provided to structural analysts for exploring the structural health conditions examined by a Principal Component Analysis based algorithm.
Keywords: interactive dashboards, optical fibers, structural health monitoring, visual analytics
Procedia PDF Downloads 124
755 Remote Sensing Reversion of Water Depths and Water Management for Waterbird Habitats: A Case Study on the Stopover Site of Siberian Cranes at Momoge, China
Authors: Chunyue Liu, Hongxing Jiang
Abstract:
Traditional water depth surveys of wetland habitats used by waterbirds require intensive labor, time, and money. Optical remote sensing imagery based on passive multispectral scanner data has been widely employed to estimate water depth. This paper presents a method for developing a water depth model based on the characteristics of the visible and thermal infrared spectra of a Landsat ETM+ image, combined with 441 field water depth measurements at the Etoupao shallow wetland. The wetland is located in the Momoge National Nature Reserve of northeast China, the largest stopover habitat along the eastern flyway of the globally critically endangered Siberian Crane. The cranes mainly feed on the tubers of emergent aquatic plants such as Scirpus planiculmis and S. nipponicus, so effective water control is a critical step for maintaining tuber production and food availability for this crane. The model, employing a multi-band approach, can effectively simulate water depth for this shallow wetland. The model parameters NDVI and GREEN indicated that the effects of vegetation growth and coverage on the reflectance from the water column are uneven. Combined with the field-observed water level on the date of image acquisition, a digital elevation model (DEM) of the underwater terrain was generated. The wetland area and water volume at different water levels were then calculated from the DEM using the Area and Volume Statistics function of the 3D Analyst in ArcGIS 10.0. The findings provide good references for effectively monitoring changes in water level and water demand, developing practical plans for water level regulation and water management, and creating the best foraging habitats for the cranes. The methods can be adopted for bottom topography simulation and water management in waterbird habitats, especially in shallow wetlands.
Keywords: remote sensing, water depth reversion, shallow wetland habitat management, Siberian Crane
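Multi-band depth models of this kind are typically calibrated by regressing field-measured depths on band-derived predictors. The sketch below fits such a linear model by ordinary least squares (normal equations solved with Gaussian elimination); the predictor values and the coefficients are synthetic stand-ins, not the paper's Landsat data or fitted model, and "green" and "NDVI" here are just two assumed predictors.

```python
def ols(X, y):
    """Solve (X^T X) beta = X^T y for the least-squares coefficients."""
    k = len(X[0])
    A = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    b = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    for col in range(k):                 # Gaussian elimination, partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                     # back-substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic calibration set: depth = 0.5 + 2.0*green - 1.5*NDVI (assumed).
X = [[1.0, g / 10.0, n / 10.0] for g in range(1, 7) for n in range(1, 5)]
y = [0.5 + 2.0 * x[1] - 1.5 * x[2] for x in X]
beta = ols(X, y)
```

Once calibrated on the 441 field points, such a model can be applied per pixel, and the resulting depth surface subtracted from the observed water level yields the underwater DEM described in the abstract.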
Procedia PDF Downloads 252
754 Problems concerning Formation of Institutional Framework for Electronic Democracy in Georgia
Authors: Giorgi Katamadze
Abstract:
Open public service and accountability towards citizens are important features of a democratic state based on the rule of law. Effective use of electronic resources simplifies bureaucratic procedures, enables direct communication, facilitates information exchange, ensures the government's openness, and in general helps develop electronic/digital democracy. The development of electronic democracy should be a strategic dimension of Georgian governance, and its formation and functional improvement should become an important dimension of the state's information policy. Electronic democracy is based on electronic governance and implies modern information and communication systems adapted to universal standards; it requires the involvement of governments, voters, political parties, and social groups in electronic form. In recent years, the process of interaction between the citizen and the state has become simpler. This is achieved through modern technological systems that give citizens the possibility of using different public services online. For example, the website my.gov.ge makes interaction between the citizen, business, and the state simpler, more comfortable, and more secure, establishing a higher standard of accountability and interaction. Electronic democracy brings new forms of interaction between the state and the citizen: e-engagement (participation of society in state politics via electronic systems); e-consultation (electronic interaction among public officials, citizens, and interested groups); and e-controllership (electronic monitoring and control of public expenses and services). Public transparency is one of the milestones of electronic democracy as well as of representative democracy, since democracy can be established only on mutual trust and accountability. In Georgia, institutional changes concerning the establishment and development of electronic democracy are not yet sufficient. Effective planning and implementation of a comprehensive, multi-component e-democracy program (at central, regional, and local levels) requires telecommunication systems together with institutional (public service, competencies, logical system) and informational (relevant conditions for public involvement) support. Therefore, a systematic project for the formation of electronic governance should be developed, covering the central, regional, and municipal levels and certain aspects of the development of an instrumental basis for electronic governance.
Keywords: e-democracy, e-governance, e-services, information technology, public administration
Procedia PDF Downloads 337
753 A Corpus-Linguistic Analysis of Online Iranian News Coverage on Syrian Revolution
Authors: Amaal Ali Al-Gamde
Abstract:
The Syrian revolution is a major issue in the Middle East which draws in world powers and has received great focus in international mass media since 2011. The heavy global reliance on cyber news and digital sources plays a key role in conveying a sense of bias to a wide range of online readers. Thus, based on the assumption that media discourse possesses ideological implications, this study investigates the representation of the Syrian revolution in online media. The paper explores the discursive constructions of anti- and pro-government powers in the Syrian revolution in a 1,000,000-word corpus of Fars online reports (an Iranian news agency) issued between 2013 and 2015. Taking a corpus-assisted discourse analysis approach, the analysis investigates three types of lexico-semantic relations: the semantic macrostructures within which the two social actors are framed, the lexical collocations characterizing the news discourse, and the discourse prosodies they reveal about the two sides of the conflict. The study utilizes computer-based approaches, the Sketch Engine and AntConc software, to minimize the bias of subjective analysis. The analysis moves from the insights of lexical frequencies and keyness scores to an examination of themes and collocational patterns. The findings reveal the Fars agency's ideological mode of representation in reporting events of the Syrian revolution in two ways. The first is stereotyping the opposition groups under the umbrella of terrorism, using words such as "law breakers", "foreign-backed groups", "militant groups", and "terrorists" to legitimize the atrocities of the security forces against protesters and to enhance horror among civilians. The second is emphasizing the power of the government and depicting it as the defender of the Arab land by foregrounding the discourse of an international conspiracy against Syria. The paper concludes by discussing the potential importance of triangulating corpus linguistic tools with critical discourse analysis to elucidate more about discourses and reality.
Keywords: discourse prosody, ideology, keyness, semantic macrostructure
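The keyness scores mentioned above are conventionally computed as a log-likelihood statistic (Dunning's G², as popularized by Rayson), which is what tools like the Sketch Engine and AntConc report when comparing a word's frequency in a study corpus against a reference corpus. The sketch below shows the two-corpus form of the calculation; the frequencies are illustrative, not drawn from the Fars corpus.

```python
import math

def log_likelihood(freq_study, size_study, freq_ref, size_ref):
    """G2 keyness of a word from its frequencies in study and reference corpora."""
    total_freq = freq_study + freq_ref
    total_size = size_study + size_ref
    # Expected frequencies under the null hypothesis of equal relative frequency
    expected_study = size_study * total_freq / total_size
    expected_ref = size_ref * total_freq / total_size
    g2 = 0.0
    for observed, expected in ((freq_study, expected_study),
                               (freq_ref, expected_ref)):
        if observed > 0:                    # 0 * log(0) is taken as 0
            g2 += observed * math.log(observed / expected)
    return 2.0 * g2

# Illustrative: a word at 150 occurrences per million in the study corpus vs
# 50 per million in the reference scores well above the conventional
# significance cut-offs (e.g. G2 > 15.13 for p < 0.0001).
g2 = log_likelihood(150, 1_000_000, 50, 1_000_000)
```

Words with high G² and higher relative frequency in the study corpus are "key" there, which is how labels like "terrorists" surface as characteristic of one side's framing.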
Procedia PDF Downloads 131
752 In Silico Study of Antiviral Drugs Against Three Important Proteins of Sars-Cov-2 Using Molecular Docking Method
Authors: Alireza Jalalvand, Maryam Saleh, Somayeh Behjat Khatouni, Zahra Bahri Najafi, Foroozan Fatahinia, Narges Ismailzadeh, Behrokh Farahmand
Abstract:
Objective: The recent outbreak of the coronavirus SARS-CoV-2 imposed a global pandemic on the world. Despite the increasing prevalence of the disease, there are no effective drugs to treat it. A suitable and rapid way to find an effective drug for this global pandemic is a computational drug study. This study used molecular docking methods to examine the potential inhibition by over 50 antiviral drugs of three fundamental proteins of SARS-CoV-2. Methods: Through a literature review, three important proteins (the main protease, the RNA-dependent RNA polymerase (RdRp), and the spike protein) were selected as drug targets. Three-dimensional (3D) structures of the protease, spike, and RdRp proteins were obtained from the Protein Data Bank and energy-minimized. Over 50 antiviral drugs were considered candidates for protein inhibition, and their 3D structures were obtained from drug databases. AutoDock 4.2 software was used to define the molecular docking settings and run the algorithm. Results: Five drugs, namely indinavir, lopinavir, saquinavir, nelfinavir, and remdesivir, exhibited the highest inhibitory potency against all three proteins, based on the binding energies and drug binding positions deduced from the docking and hydrogen-bonding analysis. Conclusions: Among the drugs mentioned, saquinavir and lopinavir showed the highest inhibitory potency against all three proteins compared to the other drugs; this combination may enter laboratory-phase studies as a dual-drug treatment to inhibit SARS-CoV-2.
Keywords: COVID-19, drug repositioning, molecular docking, lopinavir, saquinavir
Procedia PDF Downloads 88
751 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of total world CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. In parallel, performance-based design (PBD) methodology based on nonlinear analysis has been robustly developed since the 1994 Northridge earthquake to assess and assure the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code-based design approach cannot address inelastic earthquake responses directly. Although CO2 emissions and the PBD approach are both rising issues in the construction industry and structural engineering, few or no studies have considered the two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering the structural materials. A 4-story, 4-span reinforced concrete building was optimally designed, using the non-dominated sorting genetic algorithm II (NSGA-II), to minimize the CO2 emissions and cost of the building while satisfying a specific seismic performance objective (collapse prevention under the maximum considered earthquake) and prescriptive code regulations. The optimized design result showed that minimized CO2 emissions and cost were achieved while the specified seismic performance was satisfied. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design
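NSGA-II handles the two competing objectives (CO2 emissions and cost) by ranking candidate designs into Pareto fronts rather than collapsing them into one score. The sketch below implements the fast non-dominated sorting step at the heart of NSGA-II; the (CO2, cost) pairs are hypothetical designs assumed to already satisfy the collapse-prevention check, not values from the study. Both objectives are minimized, and front 0 is the Pareto-optimal set.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Fast non-dominated sort (Deb et al.); returns lists of point indices."""
    fronts = [[]]
    dominated_by = [[] for _ in points]   # indices each point dominates
    dom_count = [0] * len(points)         # how many points dominate this one
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if dominates(p, q):
                dominated_by[i].append(j)
            elif dominates(q, p):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:     # only points in earlier fronts remain
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Hypothetical (CO2 tonnes, cost) pairs for five candidate frame designs.
designs = [(120, 900), (150, 700), (160, 950), (180, 650), (200, 800)]
fronts = non_dominated_sort(designs)
```

In the full algorithm this sorting, together with crowding-distance selection, steers the population toward a spread of trade-off designs, from which the engineer picks a preferred balance of emissions and cost.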
Procedia PDF Downloads 406
750 Plastic Deformation Behavior of a Pre-Bored Pile Filler Material Due to Lateral Cyclic Loading in Sandy Soil
Authors: A. Y. Purnama, N. Yasufuku
Abstract:
A bridge is a structure that has to be maintained, especially its elastomeric bearings: the girder needs to be lifted upward to maintain an elastomeric bearing, which is costly. Nowadays, integral abutment bridges are becoming popular. An integral abutment bridge is less costly because the elastomeric bearings are eliminated, which reduces both construction and maintenance costs. However, when the elastomeric bearings are removed, girder movement due to environmental thermal forces is carried directly by the pile foundation and must be considered in the design. For a pile foundation in stiff soil, the top of the pile cannot move freely because it is fixed by the soil stiffness. A pre-bored pile system can be used to increase the flexibility of the pile foundation by means of a pre-bored hole filled with elastic material, but the soil-pile interaction and the soil response of this system are still rarely explained. In this paper, an experimental study was conducted using a half-size, small-scale laboratory model: a single flexible pile model embedded in sandy soil with a pre-bored ring filled with the filler material. The testing box was made with an acrylic glass panel serving as the observation area of the pile shaft, to monitor the displacement of the pile during lateral loading. The failure behavior of the soil inside the pre-bored ring and around the pile shaft was investigated to determine the point of pile rotation and the movement of this point along the pile shaft due to the pre-bored ring system. Digital images taken through the acrylic glass on the side of the testing box were used to capture the deformations of the soil and pile foundation during loading. The results are presented in the form of lateral load resistance charts against pile shaft displacement, and the failure pattern under cyclic lateral loading is also established. The movement of the rotation point was measured for the pre-bored system filled with an appropriate filler material. Based on the findings, design considerations for the pre-bored pile system under cyclic lateral loading can be introduced.
Keywords: failure behavior, pre-bored pile system, cyclic lateral loading, sandy soil
Procedia PDF Downloads 233
749 Gamification Teacher Professional Development: Engaging Language Learners in STEMS through Game-Based Learning
Authors: Karen Guerrero
Abstract:
Kindergarten-12th grade teachers engaged in teacher professional development (PD) on game-based learning techniques and strategies to support teaching STEMSS (STEM + Social Studies with an emphasis on geography across the curriculum) to language learners. Ten effective strategies have supported teaching content and language in tandem. To provide exiting teacher PD on summer and spring breaks, gamification has integrated these strategies to engage linguistically diverse student populations to provide informal language practice while students engage in the content. Teachers brought a STEMSS lesson to the PD, engaged in a wide variety of games (dice, cards, board, physical, digital, etc.), critiqued the games based on gaming elements, then developed, brainstormed, presented, piloted, and published their game-based STEMSS lessons to share with their colleagues. Pre and post-surveys and focus groups were conducted to demonstrate an increase in knowledge, skills, and self-efficacy in using gamification to teach content in the classroom. Provide an engaging strategy (gamification) to support teaching content and language to linguistically diverse students in the K-12 classroom. Game-based learning supports informal language practice while developing academic vocabulary utilized in the game elements/content focus, building both content knowledge through play and language development through practice. The study also investigated teacher's increase in knowledge, skills, and self-efficacy in using games to teach language learners. Mixed methods were used to investigate knowledge, skills, and self-efficacy prior to and after the gamification teacher training (pre/post) and to understand the content and application of developing and utilizing game-based learning to teach. 
This study will contribute to the body of knowledge on applying game-based learning theories in the K-12 classroom to support English learners in developing English skills and STEMSS content knowledge. Keywords: gamification, teacher professional development, STEM, English learners, game-based learning
Procedia PDF Downloads 91
748 Customer Experiences and Perspectives on Mobile Money Service Fraud: A Case Study of the University of Education, Winneba
Authors: Mavis Ofosuah Asante, Abena Abokoma Asemanyi, Belinda Osei-mensah, Stephen Osei Akyiaw
Abstract:
The study examined mobile money (MoMo) service fraud experiences and perspectives on control practices at the University of Education, Winneba (UEW). The objectives of the study were to examine the forms of MoMo fraud strategies experienced by MoMo customers on the UEW campus, to identify and classify the main perpetrators of MoMo fraud among UEW students, and to examine the frameworks for fraud detection put together by the telcos and consumers on the UEW campus. The study adopted the case study research design. The purposive sampling technique was used to select the UEW campus, and using the convenience sampling technique, five respondents were sampled for the study. The in-depth interviews revealed that mobile money fraud is committed in various forms: anonymous calls and text messages from scammers; fraudsters calling to deceive subscribers, under false pretexts, that they are to deliver goods from abroad or from a close relative; and fraudsters sending false cash-out messages to merchants for authorization, whereby the merchant issues physical cash to the fraudster without receiving the equivalent e-cash. Mobile money fraud has been perpetrated in diverse forms, such as mobile money network systems fraud, false promotion fraud, reversal of erroneous transactions, fortuitous scams, and mobile money agents' fraud. Finally, the frameworks that have been used to detect mobile money fraud include the display of national identity cards for transactions, digital identification systems, the use of firewalls to protect mobile money accounts, effective information technology architecture for mobile money services, the reporting of mobile money fraud to telcos, and the sanctioning of mobile money fraudsters. The study suggested that there should be public education and awareness creation on the activities of mobile money fraudsters in Ghana by telecommunication companies in conjunction with the National Communications Authority and the Bank of Ghana.
The study, therefore, concluded that the menace of mobile money fraud threatens the integrity of mobile money financial services. Keywords: mobile money, fraud, telecommunication, merchant
Procedia PDF Downloads 78
747 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models (DPMs) achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation; this implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location; the range of acceptable sizes and positions is set by examining the statistical distribution of bounding boxes in labelled images. With this approach there is no need to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and a doubling of speed. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon; supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations. Keywords: autonomous vehicles, deformable part model, DPM, pedestrian detection, real time
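The frequency-domain speedup described in this abstract rests on the convolution theorem: pointwise multiplication of Fourier transforms is equivalent to spatial convolution, turning the O(n·k) sliding-window cost of DPM filter responses into FFT cost. A minimal NumPy sketch of the idea (not the authors' implementation; function names and shapes are illustrative, assuming single-channel features and "valid" output):

```python
import numpy as np

def conv2d_spatial(image, kernel):
    """Naive 'valid' 2-D convolution: slide the flipped kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * flipped)
    return out

def conv2d_fft(image, kernel):
    """Same 'valid' convolution via the convolution theorem."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    # Zero-pad both operands to the full linear-convolution size so the
    # FFT's circular convolution does not wrap around.
    sh, sw = ih + kh - 1, iw + kw - 1
    full = np.fft.irfft2(
        np.fft.rfft2(image, (sh, sw)) * np.fft.rfft2(kernel, (sh, sw)),
        (sh, sw),
    )
    # Crop the 'valid' region, where the kernel fully overlaps the image.
    return full[kh - 1:ih, kw - 1:iw]
```

In a real DPM the payoff is larger still, because the image FFT is computed once and reused across the many root and part filters evaluated at each pyramid level.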
Procedia PDF Downloads 281