Search results for: sensor node placement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2110


160 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text Corpus based on Board Game Rulebooks (BGRC) with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing these extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as they can reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games, like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time, challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations. On the other hand, the designers and creators produce rulebooks, where they convey their joy of discovering the hidden potential of language, igniting the imagination, and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice, and chance games, strategy, eurogames, thematic, role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to take advantage of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 65
159 A Smartphone-Based Real-Time Activity Recognition and Fall Detection System

Authors: Manutchanok Jongprasithporn, Rawiphorn Srivilai, Paweena Pongsopha

Abstract:

Falls are the most serious accidents leading to unintentional injuries and mortality. Falls not only cause suffering and functional impairment to individuals but also increase medical costs and days away from work. The early detection of falls could be an advantage in reducing fall-related injuries and the consequences of falls. Smartphones, with embedded accelerometers, have become common devices in everyday life due to decreasing technology costs. This paper explores a smartphone-based physical activity monitoring and fall detection application, a non-invasive biomedical tool to determine physical activities and fall events. The combination of the application and sensors could perform as a biomedical sensor to monitor physical activities and recognize a fall. We chose an Android-based smartphone in this study since the Android operating system is open-source and free of cost. Moreover, Android phone users constitute the majority of smartphone users in Thailand. We developed Thai 3 Axis (TH3AX), a physical activity and fall detection application whose commands, manual, and results are in the Thai language. The smartphone was attached to the right hip of 10 young, healthy adult subjects (5 males, 5 females; aged < 35 y) to collect accelerometer and gyroscope data during physical activities (e.g., walking, running, sitting, and lying down) and falling, in order to determine a threshold for each activity. Dependent variables include accelerometer data (acceleration, peak acceleration, average resultant acceleration, and time between peak accelerations). A repeated-measures ANOVA was performed to test whether there are any differences between the dependent variables' means. Statistical analyses were considered significant at p<0.05. After finding the thresholds, the results were used as training data for a predictive model of activity recognition. In future work, the accuracy of activity recognition will be evaluated to assess the overall performance of the classifier. Moreover, to help improve the quality of life, our system will be implemented with patients and elderly people who need intensive care in hospitals and nursing homes in Thailand.
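
As a rough illustration of the threshold approach described above (not the authors' TH3AX implementation), the following Python sketch computes the resultant acceleration from 3-axis samples and flags candidate falls when it exceeds an assumed threshold; the sampling rate, threshold, and sample values are hypothetical.

```python
import numpy as np

def detect_fall(ax, ay, az, fs=50, threshold_g=2.5):
    """Flag candidate falls when the resultant acceleration exceeds a threshold.
    fs is the sampling rate in Hz; threshold_g (in g) is an assumed value that
    would be tuned from the recorded activity data."""
    resultant = np.sqrt(ax**2 + ay**2 + az**2)      # resultant acceleration (g)
    peaks = np.where(resultant > threshold_g)[0]    # sample indices above threshold
    return resultant, peaks / fs                    # candidate fall times (s)

# Hypothetical 3-axis samples (in g) around an impact
ax = np.array([0.0, 0.1, 0.2, 1.8, 0.1])
ay = np.array([0.0, 0.0, 0.3, 1.5, 0.0])
az = np.array([1.0, 1.0, 1.2, 2.4, 1.0])
resultant, fall_times = detect_fall(ax, ay, az)
print(fall_times)   # times at which a candidate fall was detected
```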

Keywords: activity recognition, accelerometer, fall, gyroscope, smartphone

Procedia PDF Downloads 671
158 A Study on Adsorption Ability of MnO2 Nanoparticles to Remove Methyl Violet Dye from Aqueous Solution

Authors: Zh. Saffari, A. Naeimi, M. S. Ekrami-Kakhki, Kh. Khandan-Barani

Abstract:

The textile industries are becoming a major source of environmental contamination because an alarming amount of dye pollutants is generated during the dyeing processes. Organic dyes are among the largest pollutants released into wastewater from textile and other industrial processes, and they have shown severe impacts on human physiology. Nano-structured compounds have gained importance in this category due to their anticipated high surface area and improved reactive sites. In recent years, several novel adsorbents have been reported to possess great adsorption potential due to their enhanced adsorptive capacity. Nano-MnO2 has great potential applications in the environmental protection field and has gained importance in this category because it exhibits a wide variety of structures with large surface areas. The diverse structures and chemical properties of manganese oxides are exploited in potential applications such as adsorbents and sensor catalysis, and they are also used in a wide range of catalytic applications, such as the degradation of dyes. In this study, the adsorption of Methyl Violet (MV) dye from aqueous solutions onto MnO2 nanoparticles (MNP) has been investigated. The surface characterization of these nanoparticles was carried out by particle size analysis, scanning electron microscopy (SEM), Fourier transform infrared (FTIR) spectroscopy, and X-ray diffraction (XRD). The effects of process parameters such as initial concentration, pH, temperature, and contact duration on the adsorption capacities have been evaluated, of which pH was found to be the most influential. The equilibrium data were analyzed using the Langmuir and Freundlich isotherms, and kinetic models such as the pseudo-first-order model, the pseudo-second-order model, and the Elovich equation were used to describe the kinetic data. The experimental data were well fitted by the Langmuir adsorption isotherm model and the pseudo-second-order kinetic model. The thermodynamic parameters, such as the free energy of adsorption (ΔG°), enthalpy change (ΔH°), and entropy change (ΔS°), were also determined and evaluated.
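
For readers unfamiliar with the isotherm fitting mentioned above, the following Python sketch fits the standard Langmuir and Freundlich models to a hypothetical set of equilibrium data with SciPy; the concentrations and uptakes are invented for illustration and are not the authors' measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: dye concentration C_e (mg/L) and uptake q_e (mg/g)
C_e = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
q_e = np.array([12.0, 20.0, 29.0, 36.0, 40.0])

def langmuir(C, q_max, K_L):
    # q_e = q_max * K_L * C_e / (1 + K_L * C_e)
    return q_max * K_L * C / (1.0 + K_L * C)

def freundlich(C, K_F, n):
    # q_e = K_F * C_e^(1/n)
    return K_F * C ** (1.0 / n)

(q_max, K_L), _ = curve_fit(langmuir, C_e, q_e, p0=[40.0, 0.1])
(K_F, n), _ = curve_fit(freundlich, C_e, q_e, p0=[5.0, 2.0])

print(f"Langmuir:   q_max = {q_max:.1f} mg/g, K_L = {K_L:.3f} L/mg")
print(f"Freundlich: K_F = {K_F:.2f}, n = {n:.2f}")
```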

Keywords: MnO2 nanoparticles, adsorption, methyl violet, isotherm models, kinetic models, surface chemistry

Procedia PDF Downloads 240
157 Effective Use of X-Box Kinect in Rehabilitation Centers of Riyadh

Authors: Reem Alshiha, Tanzila Saba

Abstract:

Physical rehabilitation is the process of helping people recover and return to their former activities that have been interrupted by external factors such as car accidents, old age, strokes, chronic diseases, and sport-related injuries. The cost of hiring a personal nurse or driving the patient to and from the hospital can be high and time-consuming. There are also other factors to take into account, such as forgetfulness, boredom, and lack of motivation. In order to solve this dilemma, some experts came up with rehabilitation software to be used with the Microsoft Kinect to help patients and their families with in-home rehabilitation. In-home rehabilitation software is becoming more and more popular, since it is more convenient for all parties affiliated with the patient. In contrast to other costly market-based systems with no portability, Microsoft's Kinect is a portable motion sensor that reads body movements and interprets them. New software development has made rehabilitation games available to be used at home for the convenience of the patient. The game will benefit its users (rehabilitation patients) by saving time and money. There are many software packages used with the Kinect for rehabilitation, but the software chosen in this research is Kinectotherapy. Kinectotherapy software is used with rehabilitation patients in Riyadh clinics to test its acceptance by patients and their physicians. In this study, we used the Kinect because it is affordable, portable, and easy to access, in contrast to expensive market-based motion sensors. This paper explores the importance of in-home rehabilitation using the Kinect with Kinectotherapy software. The software targets both upper and lower limbs, but in this research, the main focus is on upper-limb functionality. However, in-home rehabilitation is not suitable for every patient with a motor disability, since the patient must have some degree of self-reliance. The targeted subjects are patients with minor motor impairment who are somewhat independent in their mobility. The presented work is the first to consider the implementation of in-home rehabilitation with real-time feedback to the patient and physician. This research proposes the implementation of in-home rehabilitation in Riyadh, Saudi Arabia. The findings show that most of the patients are interested in and motivated to use the in-home rehabilitation system in the future. The main value of the software application lies in these factors: it improves patient engagement through stimulating rehabilitation, serves as a low-cost rehabilitation tool, and reduces the need for expensive one-to-one clinical contact. Rehabilitation is a crucial treatment that can improve the quality of life and confidence of the patient as well as their self-esteem.

Keywords: x-box, rehabilitation, physical therapy, rehabilitation software, kinect

Procedia PDF Downloads 322
156 Digital Phase Shifting Holography in a Non-Linear Interferometer using Undetected Photons

Authors: Sebastian Töpfer, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, Markus Gräfe

Abstract:

This work introduces a combination of digital phase-shifting holography with a non-linear interferometer using undetected photons. Non-linear interferometers can be used in combination with a measurement scheme called quantum imaging with undetected photons, which allows the wavelength used for sampling an object to be separated from the wavelength detected by the imaging sensor. This method has recently attracted increasing attention, as it allows the use of exotic wavelengths (e.g., mid-infrared, ultraviolet) for object interaction while keeping the detection in spectral regions with highly developed, comparatively low-cost imaging sensors. The object information, including its transmission and phase influence, is recorded in the form of an interferometric pattern. To collect these patterns, this work combines quantum imaging with undetected photons and digital phase-shifting holography with minimal sampling of the interference. This extends the measurement capabilities of the quantum imaging scheme and brings it one step closer to application. Quantum imaging with undetected photons uses correlated photons generated by spontaneous parametric down-conversion in a non-linear interferometer to create indistinguishable photon pairs, which leads to an effect called induced coherence without induced emission. Placing an object inside changes the interferometric pattern depending on the object's properties. Digital phase-shifting holography records multiple images of the interference with defined phase shifts to reconstruct the complete interference shape, which can afterward be used to analyze the changes introduced by the object and infer its properties. An extensive characterization of this method was done using a proof-of-principle setup. The measured spatial resolution, phase accuracy, and transmission accuracy are compared for different combinations of camera exposure times and numbers of interference sampling steps. The current limits of this method are identified, indicating room for further improvement. To summarize, this work presents an alternative holographic measurement method using non-linear interferometers in combination with quantum imaging to enable new ways of measuring and to motivate continuing research.
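
A minimal numerical sketch of the digital phase-shifting step, assuming the common four-step scheme with phase shifts of 0, π/2, π, and 3π/2 (the paper's actual sampling may differ); the synthetic fringe data below are purely illustrative.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from four interferograms taken at phase
    shifts of 0, pi/2, pi, and 3*pi/2 (standard four-step formula)."""
    phase = np.arctan2(i3 - i1, i0 - i2)                      # wrapped to (-pi, pi]
    visibility = np.sqrt((i3 - i1) ** 2 + (i0 - i2) ** 2) / 2  # fringe amplitude
    return phase, visibility

# Synthetic demonstration on a 64x64 frame with a tilted phase front
y, x = np.mgrid[0:64, 0:64]
true_phase = 0.05 * x + 0.02 * y
frames = [1.0 + 0.8 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
phase, vis = four_step_phase(*frames)
```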

Keywords: digital holography, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 75
155 Shaped Crystal Growth of Fe-Ga and Fe-Al Alloy Plates by the Micro Pulling down Method

Authors: Kei Kamada, Rikito Murakami, Masahiko Ito, Mototaka Arakawa, Yasuhiro Shoji, Toshiyuki Ueno, Masao Yoshino, Akihiro Yamaji, Shunsuke Kurosawa, Yuui Yokota, Yuji Ohashi, Akira Yoshikawa

Abstract:

Techniques for energy harvesting have been widely developed in recent years, due to the high demand for power supplies for 'Internet of Things' devices such as wireless sensor nodes. In these applications, techniques for converting mechanical vibration energy into electrical energy using magnetostrictive materials have attracted attention. Among the magnetostrictive materials, Fe-Ga and Fe-Al alloys are attractive due to figures of merit such as price, mechanical strength, and a high magnetostrictive constant. Up to now, bulk crystals of these alloys have been produced by the Bridgman–Stockbarger method or the Czochralski method. Using these methods, large bulk crystals up to 2–3 inches in diameter can be grown. However, non-uniformity of the chemical composition along the crystal growth direction cannot be avoided, which results in non-uniformity of the magnetostriction constant and a reduction of the production yield. The micro-pulling-down (μ-PD) method has been developed as a shaped crystal growth technique. Our group has reported shaped crystal growth of oxide and fluoride single crystals with different shapes such as rods, plates, tubes, and thin fibers. The advantages of this method are low segregation, due to the high growth rate and the small diffusion of the melt at the solid-liquid interface, and small kerf loss, due to the near-net-shape crystal. In this presentation, we report the shaped growth of long plate crystals of Fe-Ga and Fe-Al alloys using the μ-PD method. Alloy crystals were grown by the μ-PD method using a calcium oxide crucible and an induction heating system under a nitrogen atmosphere. The bottom hole of the crucibles was 5 x 1 mm² in size. A <100>-oriented iron-based alloy was used as a seed crystal. Alloy crystal plates of 5 x 1 x 320 mm³ were successfully grown. The results of the crystal growth, chemical composition analysis, magnetostrictive properties, and a prototype vibration energy harvester are reported. Furthermore, continuous crystal growth using a powder supply system will be reported as a way to minimize the chemical composition non-uniformity along the growth direction.

Keywords: crystal growth, micro-pulling-down method, Fe-Ga, Fe-Al

Procedia PDF Downloads 309
154 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor

Authors: Pranav Gulati, Isha Sharma

Abstract:

Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased probability of cardiovascular disease and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous, and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; due to the unavailability of such devices for practical daily use, it is currently difficult to screen and subsequently regulate blood pressure. The complexities that hamper steady blood pressure monitoring comprise variations in physical characteristics from individual to individual and postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. This system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system has an edge over existing wearables in that it allows uniform contact and pressure with the temporal site, in addition to minimal disturbance from movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by reducing unreliable data sets. We tested the system with 12 subjects, of which 6 served as the training dataset. For this, we measured blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head. This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
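
As a sketch of the regression stage only (not the authors' trained model), the snippet below fits linear models mapping hypothetical PPG-derived features to cuff readings; the feature set, values, and targets are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: each row holds features extracted from a PPG beat
# (e.g., pulse width, rise time, peak-to-peak interval); targets are cuff readings.
X_train = np.array([
    [0.31, 0.12, 0.82],
    [0.28, 0.10, 0.76],
    [0.35, 0.15, 0.90],
    [0.30, 0.11, 0.80],
    [0.27, 0.09, 0.74],
    [0.33, 0.14, 0.88],
])
y_sbp = np.array([118, 124, 110, 121, 127, 113])   # systolic BP from cuff (mmHg)
y_dbp = np.array([76, 81, 70, 78, 83, 72])         # diastolic BP from cuff (mmHg)

sbp_model = LinearRegression().fit(X_train, y_sbp)
dbp_model = LinearRegression().fit(X_train, y_dbp)

new_beat = np.array([[0.32, 0.13, 0.85]])          # features from an unseen beat
print(sbp_model.predict(new_beat), dbp_model.predict(new_beat))
```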

Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring

Procedia PDF Downloads 250
153 Resolving Urban Mobility Issues through Network Restructuring of Urban Mass Transport

Authors: Aditya Purohit, Neha Bansal

Abstract:

Unplanned urbanization and the multidirectional sprawl of cities have resulted in increased motorization and deteriorating transport conditions such as traffic congestion, longer commutes, pollution, an increased carbon footprint, and, above all, increased fatalities. In order to overcome these problems, various practices have been adopted, including promoting and implementing mass transport, traffic junction channelization, and smart transport. However, these methods are found to focus primarily on vehicular mobility rather than people's accessibility. With this research gap, this paper tries to resolve the mobility issues of Ahmedabad city in India, which, being the economic capital of Gujarat state, has a huge commuter and visitor inflow. This research aims to resolve the traffic congestion and urban mobility issues focusing on the Gujarat State Regional Transport Corporation (GSRTC) for the city of Ahmedabad by analyzing the existing operations and network structure of GSRTC, followed by finding possibilities of integrating it with other modes of urban transport. The network restructuring (NR) methodology is used with appropriate variations, based on commuter demand and the growth pattern of the city. To do this, 'scenarios' based on priority issues (using 12 parameters) and their best possible solutions are established after a route network analysis of a 2,700-person sample across 20 traffic junctions/nodes in the city. Approximately a 5% sample (of passenger inflow) at each node is considered using a stratified random sampling technique. The two scenarios are: Scenario 1, resolving mobility issues by using a Special Purpose Vehicle (SPV), in a joint venture between GSRTC and private operators, to establish a feeder service, which shall provide a transfer service for passengers moving from the inner city area to identified peripheral terminals; and Scenario 2, augmenting existing mass transport services such as BRTS and AMTS for use as feeder services to the identified peripheral terminals. Each of these has been analyzed for its suitability/feasibility in network restructuring. A desire-line diagram constructed from this analysis indicates that, on average, 62% of designated GSRTC routes overlap with the mass transportation service routes of BRTS and AMTS in the city. This has resulted in duplication of bus services, causing traffic congestion, especially at the Central Bus Station (CBS). Terminating GSRTC services on the periphery of the city is found to be the best network restructuring proposal. This confines GSRTC buses to the city fringe area and prevents them from entering the city core. These end-terminals of GSRTC are integrated with BRTS and AMTS services, which helps in segregating intra-state and inter-state bus services. The research concludes that the absence of an integrated multimodal transport network results in complex transport access for commuters. As a further scope of research, comparing and understanding the value of access time within total travel time, its implication for the generalized cost of a trip, and how it varies from city to city may be taken up.

Keywords: mass transportation, multi-modal integration, network restructuring, travel behavior, urban transport

Procedia PDF Downloads 180
152 Challenging Airway Management for Tracheal Compression Due to a Rhabdomyosarcoma

Authors: Elena Parmentier, Henrik Endeman

Abstract:

Introduction: Large mediastinal masses often present diagnostic and clinical challenges due to compression of the respiratory and hemodynamic systems. We present a case of a mediastinal mass with symptomatic mechanical compression of the trachea, resulting in challenging airway management. Methods: We present the case of a 66-year-old male complaining of progressive dysphagia. Initial esophagogastroscopy revealed a stenosis secondary to external compression; biopsies were inconclusive. An additional CT scan showed a large mediastinal mass of unknown origin, situated between the vertebrae and the esophagus. Symptoms progressed, and the patient developed dyspnea and stridor. A new CT showed rapid growth of the mass with compression of the trachea, from the subglottic region to just above the carina. A covered tracheal stent was successfully placed. Endobronchial ultrasound revealed a large irregular mass without tracheal invasion; biopsies were taken. Four days after stent placement, the patient's condition deteriorated with worsening stridor, dyspnea, and desaturation. Migration of the tracheal stent into the right main bronchus was seen on chest X-ray, with obstruction of the left main bronchus and secondary atelectasis. Different methods have been described in the literature for tracheobronchial stent removal (surgical, endoscopic, fluoroscopy-guided); our first choice in this case was flexible bronchoscopy. However, this revealed tracheal compression above the migrated stent, and passage of the scope proved impossible. The patient was admitted to the ICU, high-flow nasal oxygen therapy was started, and the situation stabilized, giving time for extensive assessment and preparation of the airway management approach. Close cooperation between the intensivist, pulmonologist, anesthesiologist, and otorhinolaryngologist was essential. Results: A protocol was made for emergency situations in case of sudden deterioration. Given the increased risk of additional tracheal compression after administration of neuromuscular blocking agents, an approach with awake fiberoptic intubation maintaining spontaneous ventilation was proposed. However, intubation without retrieval of the tracheal stent was considered undesirable due to expected massive shunting over the atelectatic left lung. As a rescue option, assistance with extracorporeal circulation was considered, and a perfusionist was kept on standby. The patient remained stable and was transferred to the operating theatre. High-frequency jet ventilation under general anesthesia resulted in desaturations down to 50%, making rigid bronchoscopy impossible. Subsequently, an endotracheal tube (size 8) could be placed successfully, and the stent could be retrieved via bronchoscopy over (and with) the tube, after which the patient was reintubated. Finally, a tracheostomy (Shiley™ Tracheostomy Tube With Cuff, size 8) was placed, and fiberoptic control showed a patent airway. The patient was readmitted to the ICU and could be quickly weaned off the ventilator. Pathology was positive for rhabdomyosarcoma, without an indication for systemic therapy. Extensive surgery (laryngectomy, esophagectomy) was suggested, but the patient refused, and palliative care was started. Conclusion: Thanks to meticulous planning by an interdisciplinary team, we describe a successful airway management approach in this complicated case of critical airway compression secondary to a rare rhabdomyosarcoma, complicated by tracheal stent migration. Besides presenting our thoughts and considerations, we support exploring other possible approaches to this specific clinical problem.

Keywords: airway management, rhabdomyosarcoma, stent displacement, tracheal stenosis

Procedia PDF Downloads 71
151 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett

Abstract:

A cochlear implant (CI), the implantation of which has become a routine procedure over the last decades, is an electronic device that provides a sense of sound for patients who are severely or profoundly deaf. Today, cochlear implantation technology uses an electrode array (EA) implanted manually into the cochlea. The success of this implantation depends on the electrode technology and deep insertion techniques. However, this manual insertion procedure may cause mechanical trauma, which can lead to severe destruction of the delicate intracochlear structure. Accordingly, future improvement of cochlear electrode insertion requires a reduction of the excessive force applied during implantation, which causes tissue damage and trauma. This study examined the tool-tissue interaction of a large prototype-scale digit embedded with a distributive tactile sensor, based upon a cochlear electrode, and a large prototype-scale cochlea phantom simulating the human cochlea, which could inform the requirements for a small-scale digit. The digit, with distributive tactile sensors embedded in a silicon substrate, was inserted into the cochlea phantom to measure the digit/phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during cochlear electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation, and multiple contacts. The tests demonstrated that even devices of such a relatively simple, low-cost design have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thus controlling the insertion through sensing and control of the tip of the implant during insertion. With this approach, the surgeon could minimize the tissue damage and the potential damage to the delicate structures within the cochlea caused by the current manual electrode insertion procedure. The approach can also be applied to other minimally invasive surgery applications as well as diagnosis and path navigation procedures.

Keywords: cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction

Procedia PDF Downloads 368
150 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on the extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view, to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for an accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
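
A minimal CPU sketch of the per-voxel backprojection sum described above, written in NumPy for readability; the variable names, the nearest-bin range interpolation, and the monostatic two-way phase term are simplifying assumptions, and a GPU version would assign the per-voxel (or per-pulse) work to parallel threads.

```python
import numpy as np

def backproject(range_profiles, platform_pos, voxels, r0, dr, wavelength):
    """range_profiles: (num_pulses, num_bins) complex range-compressed data.
    platform_pos: (num_pulses, 3) antenna position per pulse (m).
    voxels: (num_voxels, 3) 3D reference points (m).
    r0, dr: range of the first bin and bin spacing (m)."""
    num_pulses, num_bins = range_profiles.shape
    image = np.zeros(len(voxels), dtype=complex)
    for p in range(num_pulses):
        # Slant range from this pulse's antenna position to every voxel
        r = np.linalg.norm(voxels - platform_pos[p], axis=1)
        # Nearest range bin (a real system would interpolate)
        bins = np.clip(((r - r0) / dr).astype(int), 0, num_bins - 1)
        # Two-way phase compensation before coherent summation
        phase = np.exp(4j * np.pi * r / wavelength)
        image += range_profiles[p, bins] * phase
    return np.abs(image)   # per-voxel reflectivity magnitude
```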

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 50
149 A Long Range Wide Area Network-Based Smart Pest Monitoring System

Authors: Yun-Chung Yu, Yan-Wen Wang, Min-Sheng Liao, Joe-Air Jiang, Yuen-Chung Lee

Abstract:

This paper proposes using a Long Range Wide Area Network (LoRaWAN) for a smart pest monitoring system aimed at the oriental fruit fly (Bactrocera dorsalis), in order to improve the communication efficiency of the system. The oriental fruit fly is one of the main pests in Southeast Asia and the Pacific Rim. Different smart pest monitoring systems based on the Internet of Things (IoT) architecture have been developed to solve the problems of manual measurement. These systems often use Octopus II, a communication module following the 2.4 GHz IEEE 802.15.4 ZigBee specification, as the sensor node. Octopus II is commonly used for low-power and short-distance communication. However, energy consumption increases as the logical topology becomes more complicated in order to provide enough coverage over a large area. By comparison, LoRaWAN follows the Low Power Wide Area Network (LPWAN) specification, which targets the key requirements of IoT technology, such as secure bi-directional communication, mobility, and localization services. The LoRaWAN network has the advantages of long-range communication, high stability, and low energy consumption. The 433 MHz LoRaWAN model has two superiorities over the 2.4 GHz ZigBee model: greater diffraction and less interference. In this paper, the Octopus II module is replaced by a LoRa module to increase the coverage of the monitoring system, improve the communication performance, and prolong the network lifetime. The performance of the LoRa-based system is compared with a ZigBee-based system using three indexes: packet receiving rate, delay time, and energy consumption, and the experiments are done in different settings (e.g., distances and environmental conditions). In the distance experiment, a pest monitoring system using the two communication specifications is deployed in an area with various obstacles, such as buildings and living creatures, and the performance of the two communication specifications is examined. The experimental results show that the packet receiving rate of the LoRa-based system is 96%, which is much higher than that of the ZigBee system when the distance between any two modules is about 500 m. These results indicate the capability of a LoRaWAN-based monitoring system for long-range transmission and confirm the stability of the system.

Keywords: LoRaWan, oriental fruit fly, IoT, Octopus II

Procedia PDF Downloads 331
148 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment

Authors: Neda Orak, Mostafa Zarei

Abstract:

Environmental assessment is an important step in environmental management, and various methods and techniques have accordingly been produced and implemented. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, and the environment. It can reveal cyclical changes in earth surface objects, and it can also delineate the limits of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assessed mangrove forests using RS techniques. The quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries was the aim of this research. It was carried out using Landsat satellite images from 1975 to 2013 matched to ground control points. This part of the mangroves is the last distribution in the northern hemisphere, so the work can provide a good background for improving management of this important ecosystem. Landsat has provided researchers with valuable images for detecting earth changes. This research used the MSS, TM, ETM+, and OLI sensors from 1975, 1990, 2000, and 2003–2013. Changes were studied by maximum likelihood supervised classification and the IPVI index, after essential corrections such as error fixing, band combination, and georeferencing to the 2012 image as the base image. A 2004 Google Earth image and ground points collected by GPS (2010–2012) were used to verify the changes obtained from the satellite images. Results showed that the mangrove area in Bidkhoon in 2012 was 1,119,072 m² by GPS, 1,231,200 m² by maximum likelihood supervised classification, and 1,317,600 m² by IPVI. The Basatin areas were, respectively, 466,644 m², 88,200 m², and 63,000 m². The final results show that the forests have declined; in Basatin, this is due to human activities. The loss was offset by planting over many years, although the trend has again been declining in recent years. Thus, satellite images have a high ability to estimate environmental processes. This research showed a high correlation between the images and indexes such as IPVI and NDVI and the ground control points.
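
For reference, the IPVI and NDVI mentioned above are simple band ratios; the sketch below computes both from red and near-infrared reflectance with NumPy (the reflectance values and the threshold used to mask vegetation pixels are assumptions, not the study's figures).

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def ipvi(nir, red, eps=1e-6):
    """Infrared Percentage Vegetation Index: NIR / (NIR + Red) = (NDVI + 1) / 2."""
    return nir / (nir + red + eps)

# Hypothetical reflectance arrays for the Landsat red and near-infrared bands
red = np.array([[0.08, 0.12], [0.30, 0.05]])
nir = np.array([[0.45, 0.40], [0.32, 0.50]])

vegetation_mask = ipvi(nir, red) > 0.75   # assumed threshold, tuned per scene
print(ndvi(nir, red))
print(vegetation_mask)
```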

Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park

Procedia PDF Downloads 271
147 Residential Youth Care – Lessons Learned from a Cross-Country Comparison of Utilization Rates

Authors: Sigrid James

Abstract:

Purpose and Background: Despite a global policy push for deinstitutionalization, residential care for children and youth remains a relevant and highly utilized out-of-home care option in many countries, fulfilling functions of care and accommodation as well as education and treatment. While many youths are placed in residential care programs temporarily or during times of transition, some still spend years in programs that range from small group homes to large institutions. How residential care is used and what function it plays in child welfare systems is influenced by a range of factors. Among them are sociocultural and historical developments, available resources for child welfare, cultural notions about family, a lack of family-based placement alternatives, as well as a belief that residential care can be beneficial to children. As part of a larger study that examined differences in residential care across 16 countries along a range of dimensions, this paper reports findings on utilization rates of residential care, i.e., the proportion of out-of-home care dedicated to residential care relative to forms of family-based foster care. Method: Using an embedded multiple-case study design where a country represents a case, residential care in 16 countries was studied and compared. The comparison was focused on countries with developed social welfare systems and included Spain, Denmark, Germany, Ireland, the Netherlands, England, Scotland, Australia, Italy, Israel, Argentina, Portugal, Finland, France, the United States, and Canada. Experts from each country systematically collected data on residential care based on a common matrix developed by the author. A range of sources were accessed depending on the information sought, including administrative data, government reports, research studies, etc. Utilization rates were mostly drawn from administrative data or government reports. While denominators may slightly differ, the available data allowed for meaningful comparisons. Beyond descriptive data on utilization rates, the analysis also captured trends in utilization (increasing, decreasing, stable) as well as the rate of change. Results: The results indicate high variability in the utilization of residential care, covering the entire spectrum from a low of 7% to a high of 97%, with most countries falling somewhere in between. Three utilization categories were identified: high users of residential care (Portugal, Argentina, and Israel), medium users (Denmark, France, Italy, Finland, Spain, the Netherlands, Germany), and low users (England, Scotland, Ireland, Canada, Australia, the United States). A number of countries experienced drastic reductions in residential care during the past few years (e.g., the US), while others have seen stable rates (e.g., Portugal) or even increasing rates (e.g., Spain). Conclusions: Multiple contextual factors have to be considered when interpreting the findings. For instance, countries with low residential care rates have, in most cases, undergone recent legislative changes to drastically reduce residential care. In medium-utilization countries, residential care reforms seem to be primarily focused on improving standards and, thus, the quality of care. High-utilization countries generally face serious obstacles to implementing alternative family-based forms of out-of-home care. Cultural acceptance of residential or foster care and notions of professionalism also appear to play an important role in explaining variability in utilization.

Keywords: residential youth care, child welfare, case study, cross-national comparative research

Procedia PDF Downloads 48
146 Nurturing Minds, Shaping Futures: A Reflective Journey of 32 Years as a Teacher Educator

Authors: Mary Isobelle Mullaney

Abstract:

The maxim "an unexamined life is not worth living," attributed to Socrates, prompts a contemplative reflection spanning over 32 years as a teacher educator in the Republic of Ireland. Taking time to contemplate the changes that have occurred and the current landscape provides valuable insights into the dynamic terrain of teacher preparation. The reflective journey traverses the impacts of global and societal shifts, responding to challenges, embracing advancements, and navigating the delicate balance between responsiveness to the world and the active shaping of it. The transformative events of the COVID-19 pandemic spotlighted the indispensable role of teachers in Ireland, reinforcing the critical nature of education for the well-being of pupils. Research solidifies the understanding that teachers matter and so it is worth exploring the pivotal role of the teacher educator. This reflective piece examines the changes in teacher education and explores the juxtapositions that have emerged in response to three decades of profound change. The attractiveness of teaching as a career is juxtaposed against the reality of the demands of the job, with conditions for public servants in Ireland undergoing a shift. High-level strategic discussions about increasing teacher numbers now contrast with a previous oversupply. The delicate balance between the imperative to increase enrolment (getting "bums on seats") and the gatekeeper role of teacher educators is explored, raising questions about maintaining high standards amid changing student profiles. Another poignant dichotomy involves the high demand for teachers versus the hurdles candidates face in becoming teachers. The rising cost and duration of teacher education courses raise concerns about attracting quality candidates. The perceived attractiveness of teaching as a career contends with the reality of increased demands on educators. One notable juxtaposition centres around the rapid evolution of Irish initial teacher education versus the potential risk of change overload. The Teaching Council of Ireland has spearheaded considerable changes, raising questions about the timing and evaluation of these changes. This reflection contemplates the vision of a professional teaching council versus its evolving reality and the challenges posed by the value placed on school placement in teacher preparation. The juxtapositions extend to the classroom, where theory may not seamlessly align with the lived experience. Inconsistencies between college expectations and the classroom reality prompt reflection on the effectiveness of teacher preparation programs. Addressing the changing demographic landscape of society and schools, there is a persistent incongruity between the diversity of Irish society and the profile of second-level teachers. As education undergoes a digital revolution, the enduring philosophies of education confront technological advances. This reflection highlights the tension between established practices and contemporary demands, acknowledging the irreplaceable value of face-to-face interaction while integrating technology into teacher training programs. In conclusion, this reflective journey encapsulates the intricate web of juxtapositions in Irish Initial Teacher Education. It emphasises the enduring commitment to fostering education, recognising the profound influence educators wield, and acknowledging the challenges and gratifications inherent in shaping the minds and futures of generations to come.

Keywords: Irish post primary teaching, juxtapositions, reflection, teacher education

Procedia PDF Downloads 30
145 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

Ortho-rectification is the process of geometrically correcting an aerial image so that the scale is uniform. The ortho-image formed by the process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified into the same co-ordinate system as an existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before performing the ortho-rectification process. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can then be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic steps of the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of the video frames with respect to each other; (4) finding the absolute exterior orientation parameters using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purposes of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the collected data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from the ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
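
A highly simplified sketch of step (1), frame decompilation, followed by a single-frame planar rectification with OpenCV; the homography warp is only a stand-in for the full interior/exterior-orientation and self-calibration adjustment (it ignores lens distortion and relief), and the file name, tie points, and ground-sampling distance are hypothetical.

```python
import cv2
import numpy as np

# Step (1): decompile the video stream into individual frames
cap = cv2.VideoCapture("uav_stream.mp4")     # hypothetical file name
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Simplified planar rectification of one frame: four image points (pixels)
# matched to ground points (metres), rendered at an assumed 0.1 m per pixel.
img_pts = np.float32([[102, 54], [870, 60], [860, 500], [95, 510]])
gnd_pts = np.float32([[0, 0], [100, 0], [100, 60], [0, 60]]) * 10  # metres -> pixels
H = cv2.getPerspectiveTransform(img_pts, gnd_pts)
ortho = cv2.warpPerspective(frames[0], H, (1000, 600))
cv2.imwrite("ortho_frame0.png", ortho)
```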

Keywords: geo-referencing, ortho-rectification, video frame, self-calibration

Procedia PDF Downloads 462
144 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatics techniques such as terrestrial laser scanning and digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTMs) for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes on the slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it has the greatest presence and changes constantly, both seasonally and daily, being affected by factors such as wind. One of the best-known indexes to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels; it therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher. We therefore have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) by using the GLI (Green Leaf Index) and ExG (Excess Green), as well as the transformation to the Hue-Saturation-Value (HSV) color space, in which the H coordinate is the one that gives the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, in the case of the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations.
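
A short Python sketch of the RGB-based indices named above (GLI and ExG) applied to hypothetical point-cloud colors; the thresholds are assumptions that would be tuned per scene, and an HSV-hue mask could be added analogously.

```python
import numpy as np

def vegetation_masks(rgb, gli_thr=0.05, exg_thr=0.10):
    """rgb: float array in [0, 1] with shape (..., 3).
    Thresholds are illustrative assumptions, tuned per scene."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6
    gli = (2 * g - r - b) / (2 * g + r + b + eps)   # Green Leaf Index
    total = r + g + b + eps
    rn, gn, bn = r / total, g / total, b / total    # chromatic coordinates
    exg = 2 * gn - rn - bn                          # Excess Green
    return gli > gli_thr, exg > exg_thr

# Hypothetical per-point colors from a photogrammetric point cloud
colors = np.array([[0.20, 0.45, 0.18],    # vegetation-like point
                   [0.55, 0.52, 0.48]])   # rock-like point
gli_mask, exg_mask = vegetation_masks(colors)
keep_points = ~(gli_mask & exg_mask)      # keep points classified as non-vegetation
```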

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 112
143 The Lessons Learned from Managing Malignant Melanoma During COVID-19 in a Plastic Surgery Unit in Ireland

Authors: Amenah Dhannoon, Ciaran Martin Hurley, Laura Wrafter, Podraic J. Regan

Abstract:

Introduction: The COVID-19 pandemic continues to present unprecedented challenges for healthcare systems. This has resulted in a pragmatic shift in the practice of plastic surgery units worldwide. During this period, many units reported a significant fall in urgent melanoma referrals, leading to patients presenting with advanced disease requiring more extensive surgery and having inferior outcomes. Our objective was to evaluate our unit's experience with both non-invasive and invasive melanoma during the COVID-19 pandemic and to characterize and contrast it with that of our neighbors in the UK, mainland Europe, and North America. Methods: A retrospective chart review was performed on all patients diagnosed with invasive and non-invasive cutaneous melanoma between March and December of 2019 (control) compared to 2020 (COVID-19 pandemic) in a single plastic surgery unit in Ireland. Patient demographics, referral source, surgical procedures, tumour characteristics, radiological findings, oncological therapies, and follow-up were recorded. All data were anonymized and stored in Microsoft Excel. Results: A total of 589 patients were included in the study: 314 (53%) with invasive melanoma, compared to 275 (47%) with non-invasive disease. Overall, more patients were diagnosed with both invasive and non-invasive melanoma in 2020 than in 2019 (p<0.05). However, waiting times were significantly longer in 2020 (64 days) than in 2019 (28 days) (p<0.05), with the majority of referrals coming from GPs in 2019 (83%) compared to 61% in 2020. Positive sentinel lymph nodes were more frequent in 2019, at 56% (n=28), compared to 24% (n=22) in 2020. There was no statistically significant difference in tumour characteristics or metastasis status. Discussion: While other countries noticed a fall in melanoma diagnoses, our unit experienced a higher number of diagnoses. This can be attributed to multiple reasons. In Ireland, the government reached an early agreement with the private sector to continue elective surgery on an urgent basis in private hospitals. This allowed access to local anesthetic procedures, and local skin cancer cases were triaged to non-COVID-19 provider centers. Our unit also adopted a fast, effective, minimal-patient-contact strategy for triaging skin cancer based on telemedicine. Thirdly, a skin cancer nurse specialist maintained patient follow-up and triage via a dedicated email service. Finally, our plastic surgery service continued to hold a virtual complex skin cancer multidisciplinary team meeting during the pandemic, ensuring local clinical governance was adhered to for each clinical case. Conclusion: Our study highlights that, with prompt and efficient restructuring of services, successful management of skin cancer can be preserved even in the most devastating times. It is important to reflect on the success achieved during the pandemic and to emphasize the importance of preparation for a potentially difficult future.

Keywords: malignant melanoma, skin cancer, COVID-19, triage

Procedia PDF Downloads 152
142 The Second Generation of Tyrosine Kinase Inhibitor Afatinib Controls Inflammation by Regulating NLRP3 Inflammasome Activation

Authors: Shujun Xie, Shirong Zhang, Shenglin Ma

Abstract:

Background: Chronic inflammation can lead to many malignancies, and inadequate resolution could play a crucial role in tumor invasion, progression, and metastasis. A randomised, double-blind, placebo-controlled trial showed that IL-1β inhibition with canakinumab could reduce incident lung cancer and lung cancer mortality in patients with atherosclerosis. The processing and secretion of the proinflammatory cytokine IL-1β are controlled by the inflammasome. Here we show the link between the innate immune system and afatinib, a tyrosine kinase inhibitor targeting the epidermal growth factor receptor (EGFR) in non-small cell lung cancer. Methods: Murine bone marrow-derived macrophages (BMDMs), peritoneal macrophages (PMs), and THP-1 cells were used to examine the effect of afatinib on the activation of the NLRP3 inflammasome. The assembly of the NLRP3 inflammasome was assessed by co-immunoprecipitation of NLRP3 and the apoptosis-associated speck-like protein containing a CARD (ASC) and by disuccinimidyl suberate (DSS) cross-linking of ASC. Lipopolysaccharide (LPS)-induced sepsis and alum-induced peritonitis models were used to confirm that afatinib could inhibit the activation of NLRP3 in vivo. Peripheral blood mononuclear cells (PBMCs) from non-small cell lung cancer (NSCLC) patients before and after taking afatinib were used to assess whether afatinib inhibits inflammation in NSCLC therapy. Results: Our data showed that afatinib inhibited the secretion of IL-1β in a dose-dependent manner in macrophages. Moreover, afatinib inhibited the maturation of IL-1β and caspase-1 without affecting the precursors of IL-1β and caspase-1. Next, we found that afatinib blocked the assembly of the NLRP3 inflammasome and ASC speck formation by blocking the interaction of the sensor protein NLRP3 and the adaptor protein ASC. We also found that afatinib was able to alleviate LPS-induced sepsis in vivo. Conclusion: Our study found that afatinib can inhibit the activation of the NLRP3 inflammasome in macrophages, providing new evidence that afatinib can target the innate immune system to control chronic inflammation. These investigations provide significant experimental evidence for afatinib as a therapeutic drug for non-small cell lung cancer, other tumors, and NLRP3-related diseases, and will help identify new targets for afatinib.

Keywords: inflammasome, afatinib, inflammation, tyrosine kinase inhibitor

Procedia PDF Downloads 100
141 Recurrent Torsades de Pointes Post Direct Current Cardioversion for Atrial Fibrillation with Rapid Ventricular Response

Authors: Taikchan Lildar, Ayesha Samad, Suraj Sookhu

Abstract:

Atrial fibrillation with rapid ventricular response results in the loss of atrial kick and shortened ventricular filling time, which often leads to decompensated heart failure. Pharmacologic rhythm control is the treatment of choice, and patients frequently benefit from the restoration of sinus rhythm. When pharmacologic treatment is unsuccessful or a patient deteriorates hemodynamically, direct cardioversion is the treatment of choice. Torsades de pointes, or "twisting of the points" in French, is a rare but under-appreciated risk of cardioversion therapy and accounts for a significant number of sudden cardiac deaths each year. A 61-year-old female with no significant past medical history presented to the Emergency Department with worsening dyspnea. An electrocardiogram showed atrial fibrillation with rapid ventricular response, and a chest X-ray was significant for bilateral pulmonary vascular congestion. Full-dose anticoagulation and diuresis were initiated, with moderate improvement in symptoms. A transthoracic echocardiogram revealed biventricular systolic dysfunction with a left ventricular ejection fraction of 30%. After consultation with an electrophysiologist, the consensus was to proceed with restoration of sinus rhythm, which would likely improve the patient's heart failure symptoms and possibly the ejection fraction. A transesophageal echocardiogram was negative for left atrial appendage thrombus; the patient was treated with a loading dose of amiodarone and underwent successful direct current cardioversion with 200 Joules. The patient was placed on telemetry monitoring for 24 hours and was noted to have frequent premature ventricular contractions with subsequent degeneration into torsades de pointes. The patient was found unresponsive and pulseless; cardiopulmonary resuscitation was initiated with cardioversion, and return of spontaneous circulation to normal sinus rhythm was achieved after four minutes. The post-cardiac arrest electrocardiogram showed sinus bradycardia with a heart-rate-corrected QT interval of 592 milliseconds. The patient continued to have frequent premature ventricular contractions and required two additional cardioversions, with intravenous magnesium and lidocaine, to achieve return of spontaneous circulation. An automatic implantable cardioverter-defibrillator was subsequently implanted for secondary prevention of sudden cardiac death. The backup pacing rate of the automatic implantable cardioverter-defibrillator was set higher than usual in an attempt to prevent premature ventricular contraction-induced torsades de pointes. The patient did not have any further ventricular arrhythmias after implantation of the automatic implantable cardioverter-defibrillator. Overdrive pacing is a method used to treat premature ventricular contraction-induced torsades de pointes by reducing the patient's susceptibility to R-on-T-wave-induced ventricular arrhythmias. Pacing at a rate of 90 beats per minute succeeded in controlling the arrhythmia without the need for traumatic cardiac defibrillation. In our patient, conversion of atrial fibrillation with rapid ventricular response to normal sinus rhythm resulted in a slower heart rate and an increased probability of a premature ventricular contraction occurring on the T-wave with an ensuing ventricular arrhythmia. This case highlights direct current cardioversion for atrial fibrillation with rapid ventricular response resulting in persistent ventricular arrhythmia that required placement of an automatic implantable cardioverter-defibrillator with overdrive pacing to prevent recurrence.

Keywords: refractory atrial fibrillation, atrial fibrillation, overdrive pacing, torsades de pointes

Procedia PDF Downloads 113
140 Social Network Roles in Organizations: Influencers, Bridges, and Soloists

Authors: Sofia Dokuka, Liz Lockhart, Alex Furman

Abstract:

Organizational hierarchy, traditionally composed of individual contributors, middle management, and executives, is enhanced by an understanding of informal social roles. These roles, identified with organizational network analysis (ONA), might have an important effect on organizational functioning. In this paper, we identify three social roles – influencers, bridges, and soloists – and provide an empirical analysis based on real-world organizational networks. Influencers are employees with broad networks whose contacts also have rich networks. Influence is calculated using PageRank, initially proposed for measuring website importance but now applied in various network settings, including social networks. Influencers, having high PageRank, become key players in shaping opinions and behaviors within an organization. Bridges serve as links between loosely connected groups within the organization and are identified using betweenness and Burt's constraint. Betweenness quantifies a node's control over information flows by evaluating its position on the shortest paths within the network. Burt's constraint measures the extent of interconnection among an individual's contacts: a high constraint value suggests fewer structural holes and less control over information flows, whereas a low value suggests the contrary. Soloists are individuals with fewer than 5 stable social contacts, potentially facing challenges due to reduced social interaction and a hypothetical lack of feedback and communication. We considered these social roles in the analysis of real-world organizations (N=1,060). Based on data from digital traces (Slack, corporate email and calendar), we reconstructed an organizational communication network and identified influencers, bridges and soloists. We also collected employee engagement data through an online survey. Among the top 5% of influencers, 10% are members of the Executive Team, and 56% of Executive Team members are part of the top influencer group. The same proportion of top influencers (10%) are individual contributors, accounting for just 0.6% of all individual contributors in the company. The majority of influencers (80%) are at the middle management level; out of all middle managers, 19% hold the role of influencer. Individual contributors thus represent a small proportion of influencers, and information about which of them hold influential roles can be crucial for management in identifying high-potential talent. Among the bridges, 4% are members of the Executive Team, 16% are individual contributors, and 80% are middle management, so it is predominantly middle management that acts as a bridge. Bridge positions held by some members of the Executive Team might indicate potential micromanagement on the leader's part. Recognizing the individuals serving as bridges in an organization uncovers potential communication problems. The majority of soloists are individual contributors (96%), and 4% of soloists are from middle management; these managers might face communication difficulties. We found an association between being an influencer and attitude toward the company's direction: influencers show a statistically significant 20% higher perception that the company is headed in the right direction compared to non-influencers (p < 0.05, Mann-Whitney test).
Taken together, we demonstrate that considering social roles in the company might indicate both positive and negative aspects of organizational functioning that should be considered in data-driven decision-making.
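
A minimal sketch of how the three roles could be identified with networkx, assuming the communication graph has already been reconstructed from the digital traces. The top-5% and fewer-than-5-contacts thresholds follow the abstract; the constraint cut-off is an illustrative assumption, not the authors' criterion.

```python
# Sketch only: role identification on an undirected communication graph.
import networkx as nx

def classify_roles(G: nx.Graph, top_share: float = 0.05, constraint_cutoff: float = 0.3):
    pagerank = nx.pagerank(G)                    # influence score
    betweenness = nx.betweenness_centrality(G)   # control over shortest paths
    constraint = nx.constraint(G)                # Burt's constraint (low = many structural holes)

    n_top = max(1, int(top_share * G.number_of_nodes()))
    influencers = set(sorted(pagerank, key=pagerank.get, reverse=True)[:n_top])

    # Bridges: positive betweenness combined with low constraint (assumed cut-off).
    bridges = {v for v in G
               if betweenness[v] > 0 and constraint.get(v, 1.0) < constraint_cutoff}

    # Soloists: fewer than 5 stable contacts.
    soloists = {v for v in G if G.degree(v) < 5}
    return influencers, bridges, soloists
```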

Keywords: organizational network analysis, social roles, influencer, bridge, soloist

Procedia PDF Downloads 79
139 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies and low-power electronic devices has led to a considerable amount of electromagnetic energy being available in the environment and to the expansion of smart applications. These applications have been used in Internet of Things devices and in 4G and 5G solutions, and their main feature is the use of wireless sensors. Although these sensors are low-power loads, their use poses major challenges in terms of efficient and reliable power supply if the traditional battery is to be avoided. Radio-frequency energy harvesting is especially suitable for wirelessly powering sensors by using a rectenna, since it can be completely integrated into the distributed structure hosting the sensors, reducing cost, maintenance and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into a direct current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used as an inner coating of buildings and simultaneously harvest electromagnetic energy from the environment, is proposed. Each individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized using the CST simulation software in order to obtain S11 values below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel, forming a system denominated Electromagnetic Wall (EW). To evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained show that if more than one rectenna is associated in parallel, a power level sufficient to feed very-low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides a substantial increase in the amount of electromagnetic energy harvested, which rose from 0.2 mW to 0.6 mW.
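
For context, a minimal sketch of the textbook rectangular-patch design equations evaluated at 2.45 GHz. The paper permittivity and thickness below are illustrative assumptions, not the values adopted by the authors or obtained from CST.

```python
# Sketch only: first-pass patch dimensions from standard design equations (Balanis).
from math import sqrt

c = 3e8          # speed of light, m/s
f0 = 2.45e9      # design frequency, Hz
er = 3.2         # assumed relative permittivity of the paper substrate
h = 0.5e-3       # assumed substrate thickness, m

W = c / (2 * f0) * sqrt(2 / (er + 1))                            # patch width
e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / W) ** -0.5   # effective permittivity
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / ((e_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f0 * sqrt(e_eff)) - 2 * dL                          # patch length

print(f"W = {W*1e3:.1f} mm, L = {L*1e3:.1f} mm")
```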

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 135
138 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of application in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solutions are more frequently preferred than analytic ones, and the finite difference method is a commonly used numerical solution method. Using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations becomes equal to the number of unknowns. In this situation, velocity and pressure emerge as the two important parameters, and they must be solved for together. However, when pressure and velocity values are solved jointly at the same nodal points of the considered grid, some problems arise; the staggered grid system is the usual way to overcome them. Various algorithms have been developed for computerized solutions on staggered grids, of which the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were solved numerically for a Newtonian, incompressible, laminar flow with negligible body (gravitational) forces, in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds number were used. The differential equations were discretized using the central difference and hybrid schemes, and the discretized equation system was solved by the Gauss-Seidel iteration method. SIMPLE and SIMPLER were used as solution algorithms. The results obtained with the central difference and hybrid discretization methods were compared, as were the SIMPLE and SIMPLER solution algorithms. It was observed that the hybrid discretization method gave better results over a larger area, and that, despite some disadvantages, the SIMPLER algorithm is more practical and gives results in a shorter time. For this study, a code was developed in the Delphi programming language. The values produced by the program were converted into graphs and discussed; during plotting, the quality of the graphs was improved by adding intermediate values to the computed results using the Lagrange interpolation formula. The required numbers of grid cells and nodes were estimated, and to show that the obtained results are sufficiently accurate, a grid-independence (GCI) analysis was carried out for coarse, medium and fine grid systems over the solution domain. When the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
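
As an illustration of the inner iteration mentioned above, the following is a minimal sketch of a Gauss-Seidel sweep for a central-difference, Poisson-type pressure-correction equation on a uniform grid. It is a generic textbook fragment under assumed zero-valued boundary conditions, not the authors' Delphi implementation.

```python
# Sketch only: Gauss-Seidel solution of  d2p'/dx2 + d2p'/dy2 = b  with p' = 0 on the boundary.
import numpy as np

def gauss_seidel_pressure_correction(b, dx, dy, n_iter=500, tol=1e-6):
    ny, nx = b.shape
    p = np.zeros((ny, nx))                       # pressure correction field
    for _ in range(n_iter):
        max_change = 0.0
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                new = ((p[j, i-1] + p[j, i+1]) * dy**2 +
                       (p[j-1, i] + p[j+1, i]) * dx**2 -
                       b[j, i] * dx**2 * dy**2) / (2 * (dx**2 + dy**2))
                max_change = max(max_change, abs(new - p[j, i]))
                p[j, i] = new                     # in-place update = Gauss-Seidel
        if max_change < tol:
            break
    return p
```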

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 369
137 Geochemical Characterization of Geothermal Waters in Albania, Preliminary Results

Authors: Aurela Jahja, Katarzyna Wątor, Arjan Beqiraj, Piotr Rusiniak, Nevton Kodhelaj

Abstract:

Albanian geological terrains represent an important node of the Alpine-Mediterranean mountain belt and are divided into several predominantly NNW-SSE striking geotectonic units which, based on the presence or absence of the Cretaceous transgression and of magmatic rocks, belong to the Internal or External Albanides. The internal units (Korabi, Mirdita and Gashi) are characterized by the Lower Cretaceous discordance and the presence of abundant magmatic rocks, whereas in the external units (Alps, Krasta-Cukali, Kruja, Ionian, Sazani and Peri-Adriatic Depression) an almost continuous sedimentation from the Triassic to the Paleogene is evidenced. The internal and external units show relevant differences in both geothermal gradient and heat flow density values: the gradient varies from 15-21.3 to 36 mK/m, while the heat flow density ranges from 42 to 60 mW/m², in the external (Preadriatic Depression) and internal (ophiolitic belt) units, respectively. The geothermal fluids, found in natural springs and deep oil wells of Albania, are located in four thermo-mineral provinces: a) the Peshkopi (Korabi) province; b) the Kruja province; c) the Preadriatic basin province; and d) the South Ionian province. Thirteen geothermal waters were sampled from 11 natural springs and 2 deep wells, of which 6 springs and 2 wells are from Kruja, 1 spring from Peshkopia, 2 springs from the Preadriatic basin and 2 springs from the South Ionian province. Temperature, pH and electrical conductivity were measured in situ, while major anions and cations and several trace elements (B, Li, Sr, Rb, I, Br, etc.) were analyzed in the laboratory by the ICP method. The measured values of temperature, pH and electrical conductivity range within the 17-63°C, 6.26-7.92 and 724-26,856 µS/cm intervals, respectively. The chemical type of the Albanian thermal waters is variable: in the Kruja province the Cl-SO4-Na-Ca and Cl-Na-Ca water types prevail, while the SO4-Ca, HCO3-Ca and Cl-HCO3-Na-Ca, and Cl-Na types are found in the Peshkopi, Ionian and Preadriatic basin provinces, respectively. In the Cl-SO4-HCO3 triangular diagram most of the geothermal waters plot close to the chloride corner and belong to the "mature waters" typical of deep, hot geothermal fluids. Only the samples from the Ionian province fall within the region of high bicarbonate concentration and can be classified as peripheral waters that may have mixed with cold groundwater. In the Na-Ca-Mg and Na-K-Mg triangular diagrams the majority of waters fall in the sodium corner, suggesting that their cation ratios are controlled by mineral-solution equilibrium. There is a linear relationship between Cl and B which indicates mixing of geothermal water with cold water, and the low-chlorine thermal waters from the Ionian basin and Preadriatic depression provinces are distinguished from the high-chlorine thermal waters from the Kruja province. The Cl/Br molar ratio of the thermal waters from the Kruja province ranges from 1000 to 2660 and separates them from the thermal waters of the Ionian basin and Preadriatic depression provinces, which have Cl/Br molar ratios lower than 650. The apparent increase of the Cl/Br molar ratio, which correlates with increasing chloride, is probably related to dissolution of halite.
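
As a small illustration of the Cl/Br criterion used above, the following converts mass concentrations to the molar ratio. The sample concentrations are illustrative, not measured values from the study.

```python
# Sketch only: Cl/Br molar ratio from mg/L concentrations.
M_CL, M_BR = 35.45, 79.90   # molar masses, g/mol

def cl_br_molar_ratio(cl_mg_per_l: float, br_mg_per_l: float) -> float:
    return (cl_mg_per_l / M_CL) / (br_mg_per_l / M_BR)

# Illustrative example: 8,000 mg/L Cl and 9.0 mg/L Br give a ratio of about 2,000,
# which would fall in the range reported for the Kruja province waters.
print(round(cl_br_molar_ratio(8000.0, 9.0)))
```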

Keywords: geothermal fluids, geotectonic units, natural springs, deep wells, mature waters, peripheral waters

Procedia PDF Downloads 197
136 Minimizing Unscheduled Maintenance from an Aircraft and Rolling Stock Maintenance Perspective: Preventive Maintenance Model

Authors: Adel A. Ghobbar, Varun Raman

Abstract:

Corrective maintenance of components and systems is a problem plaguing almost every industry in the world today. The train operator and the maintenance, repair and overhaul subsidiary of the Dutch railway company are also facing this problem: a considerable portion of the maintenance activities carried out by the company are unscheduled, which in turn severely stresses and stretches the available workforce and resources. One possible solution is a robust preventive maintenance plan; the other is to plan maintenance based on real-time data obtained from sensor-based 'Health and Usage Monitoring Systems.' The former is investigated in this paper. The preventive maintenance model developed for the train operator will subsequently be extended to tackle the unscheduled maintenance problem also affecting the aerospace industry. The extension of the model to the aerospace sector will be dealt with in the second part of the research and would, in turn, validate the soundness of the model developed. Thus, distinct areas are addressed in this paper, including the mathematical modelling of preventive maintenance and optimization based on cost and system availability. The results of this research will help an organization choose the right maintenance strategy, allowing it to save considerable sums of money as opposed to overspending under the guise of maintaining high asset availability. The concept of delay time modelling was used to address the practical problem of unscheduled maintenance; delay time modelling can support maintenance planning for a given asset, as sketched in the example below. The model was run in MATLAB, and the results show that the ideal inspection interval was 29 days from a minimum-cost perspective and 14 days from a minimum-downtime perspective. A risk matrix was constructed to represent the risk in terms of the probability of a fault leading to breakdown maintenance and its consequences in terms of maintenance cost. The choice of an optimal inspection interval of 29 days resulted in a cost of approximately 50 Euros, with a corresponding value of b(T) of 0.011. These values ensure that the risk associated with maintaining component X at an inspection interval of 29 days is more than acceptable. Thus, a switch in maintenance frequency from 90 days to 29 days would be optimal from the point of view of cost, downtime and risk.
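
A minimal sketch of the classical delay-time cost model behind this kind of inspection-interval optimisation, assuming defects arrive at a constant rate and have exponentially distributed delay times. The arrival rate, mean delay time and cost figures are illustrative assumptions, not the authors' data, and the optimum is found by a simple grid search rather than the authors' MATLAB routine.

```python
# Sketch only: expected cost per day as a function of the inspection interval T.
import numpy as np

k    = 0.05    # defect arrival rate per day (assumed)
beta = 1 / 20  # exponential delay-time rate, i.e. mean delay of 20 days (assumed)
c_i  = 10.0    # cost of one inspection (assumed)
c_r  = 40.0    # cost of repairing a defect found at inspection (assumed)
c_b  = 400.0   # cost of a breakdown repair (assumed)

def b(T):
    """Probability that a defect arising in (0, T] causes a breakdown before inspection."""
    return 1 - (1 - np.exp(-beta * T)) / (beta * T)

def cost_per_day(T):
    defects = k * T                                  # expected defects per inspection cycle
    return (c_i + defects * (b(T) * c_b + (1 - b(T)) * c_r)) / T

T_grid = np.arange(1, 121)                           # candidate intervals in days
T_opt = T_grid[np.argmin(cost_per_day(T_grid))]
print(T_opt, round(cost_per_day(T_opt), 2), round(float(b(T_opt)), 3))
```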

Keywords: delay time modelling, unscheduled maintenance, reliability, maintainability, availability

Procedia PDF Downloads 113
135 Cooperation of Unmanned Vehicles for Accomplishing Missions

Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin

Abstract:

The use of unmanned systems for different purposes has become very popular over the past decade, and expectations of these systems have grown in parallel. However, meeting the demands of a task is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using several vehicles of the same type together as a swarm helps, in particular, to satisfy the time constraints of missions effectively, since the workload is shared among a number of homogeneous platforms. There are also many kinds of problems that require the different capabilities of heterogeneous platforms to be used cooperatively to achieve successful results, and such cooperative work brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles; it is very hard for the ground vehicle to plan or complete the mission on its own without getting lost in the unknown labyrinth. Thus, an autonomous aerial drone is also used to solve the problem cooperatively and assist the ground vehicle. The drone likewise has limited sensors, such as a downward-looking camera and an IMU, and it also cannot compute its global position. In this context, the aim is to solve the problem effectively without additional support or input from outside, benefiting only from the capabilities of the two autonomous vehicles. To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated manner. In this paper, the cooperative operation of heterogeneous unmanned systems is handled in an applied sample scenario, showing how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.

Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning

Procedia PDF Downloads 107
134 Tunnel Convergence Monitoring by Distributed Fiber Optics Embedded into Concrete

Authors: R. Farhoud, G. Hermand, S. Delepine-lesoille

Abstract:

The future French underground radioactive waste disposal facility, named Cigeo, is designed to store intermediate-level and high-level long-lived French radioactive waste. Intermediate-level waste cells are tunnel-like, about 400 m long with a 65 m² cross-section, and are equipped with several concrete layers, which can be grouted in situ or composed of pre-grouted tunnel elements. The operating space inside the cells, needed to insert or remove waste containers, should be monitored for several decades without any maintenance. To provide the required information, a design was developed and tested in situ in Andra's underground research laboratory (URL), 500 m below the surface. Based on distributed optical fiber sensors (OFS) interrogated by backscattered Brillouin for strain and Raman for temperature, the design consists of two loops of OFS, at two different radii, around the monitored section (orthoradial strains), and fibers placed longitudinally. Strains measured by the distributed OFS cables were compared to classical vibrating wire extensometers (VWE) and platinum probes (Pt). The OFS cables comprised two cables sensitive to both strain and temperature and one sensitive to temperature only. Between the sensitive part and the instruments, all cables were connected to hybrid cables to reduce cost. The connections were made using two techniques: splicing the fibers in situ after installation, or preparing each fiber with a connector and simply plugging them together in situ. Another challenge was installing the OFS cables along a tunnel built in several parts, without interruption between parts. The first success is the survival rate of the sensors after installation and the quality of the measurements: 100% of the OFS cables intended for long-term monitoring survived installation. A few new configurations were tested with relative success. The measurements obtained are very promising: after 3 years of data, no difference was observed between the cables or connection methods, and the strains fit well with the VWE and Pt probes placed at the same locations. Data from the Brillouin instrument, which is sensitive to both strain and temperature, were compensated with data provided by the Raman instrument, which is sensitive to temperature only and interrogates a separate fiber. These results provide confidence for the next steps of the qualification process, which consist of testing several data treatment approaches for direct analyses.
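
A minimal sketch of the strain/temperature separation described above: the Brillouin frequency shift responds to both strain and temperature, while the Raman trace in the strain-free fiber gives temperature alone. The coefficients below are typical literature values for standard single-mode fiber, not the authors' calibration.

```python
# Sketch only: remove the thermal contribution from a Brillouin frequency shift.
def compensated_strain(delta_nu_brillouin_mhz: float,
                       delta_t_raman_c: float,
                       c_strain: float = 0.05,   # MHz per microstrain (typical value)
                       c_temp: float = 1.0       # MHz per deg C (typical value)
                       ) -> float:
    """Return strain in microstrain after subtracting the temperature effect."""
    return (delta_nu_brillouin_mhz - c_temp * delta_t_raman_c) / c_strain

# Example: a 30 MHz shift with a 5 deg C warming measured by Raman -> ~500 microstrain.
print(compensated_strain(30.0, 5.0))
```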

Keywords: monitoring, fiber optic, sensor, data treatment

Procedia PDF Downloads 112
133 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they can allow faults to be more clearly discriminated from environmental disturbances.
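
A toy one-dimensional illustration of the two residuals compared above. The integrator model, time step and fault size are assumptions for illustration and bear no relation to the actual rover dynamics used in the study.

```python
# Sketch only: output residual (forward simulation) vs. input residual (inverse simulation).
import numpy as np

dt = 0.1
n = 100
u_nominal = np.ones(n)                 # commanded actuator input
u_actual = u_nominal.copy()
u_actual[50:] -= 0.4                   # step actuator fault at sample 50

def forward(u, x0=0.0):                # toy model: x[k+1] = x[k] + dt * u[k]
    return x0 + dt * np.cumsum(u)

def inverse(x, x0=0.0):                # inverse simulation: recover u from a trajectory
    return np.diff(np.concatenate(([x0], x))) / dt

x_measured = forward(u_actual)                        # behaviour of the faulty system
output_residual = x_measured - forward(u_nominal)     # ramps up gradually after the fault
input_residual = inverse(x_measured) - u_nominal      # clean step located at the fault instant
```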

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 281
132 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industry, which produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by combustion processes at high temperatures (i.e., car engines or thermal power stations); the same holds for industrial plants. What has to be investigated – and is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box so that it can stay outdoors, with an external battery allowing it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone and calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used, an Arduino board to which the sensors and all the other components are plugged. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, covering both weekdays and weekend days, so that changes over the week can be observed. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be converted to a scale that goes up to 100% and shown through a mapping of the measurements using GIS methods, as sketched in the example below. Another relevant aspect is that this comparison can help choose the right mitigation solutions for the analysed area, because it makes it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
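
A minimal sketch of the percentage rescaling and correlation check described above. The file name, column names and full-scale limits are assumptions for illustration, not the project's actual data format.

```python
# Sketch only: rescale coupled noise and NO2 readings to 0-100 % and correlate them.
import pandas as pd

NOISE_MAX_DB = 85.0      # assumed full-scale value for the noise reading
NO2_MAX_UG_M3 = 200.0    # assumed full-scale value for NO2 (ug/m3)

df = pd.read_csv("coupled_sensor_log.csv")          # hypothetical log from one sensor pair
df["noise_pct"] = df["noise_db"] / NOISE_MAX_DB * 100
df["no2_pct"] = df["no2_ug_m3"] / NO2_MAX_UG_M3 * 100

print(df["noise_pct"].corr(df["no2_pct"]))          # Pearson correlation coefficient
```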

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 194
131 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of wave pressure created by increasing wind speed, obtained with the 'Quattuor 45' instrument in the investigated area. The study begins with theoretical considerations and numerous up-to-date investigations related to waves approaching the coast. A detailed schematic view of the instrument is enriched with pictures in ground plan and side view. Horizontal stability of the instrument is achieved by mooring to two concrete blocks, and vertical wave peak monitoring is ensured by a float above the instrument. The synthesis of horizontal stability and vertical wave peak monitoring allows a representative database for wave pressure measurement to be created. The instrument 'Quattuor 45' is named after the way the database is acquired: the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and a wind speed sensor (anemometer). The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card every second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, and detailed processing is then carried out in MS Excel. The result of the wave pressure measurement is expressed in the unit kN/m². The paper also suggests a graphical presentation of the results as a multi-line graph, with wave pressure on the left vertical axis, wind speed on the right vertical axis, and the time of measurement on the horizontal axis (a sketch is given below). The paper proposes an algorithm for wind speed measurement, showing the results for two characteristic winds of the Adriatic Sea, called 'Bura' and 'Jugo'. The first is a northern wind that reaches high speeds, causing low and extremely steep waves with relatively weak wave pressure. The southern wind 'Jugo', on the other hand, has a lower speed than the northern wind, but due to its constant duration and sustained speed it causes extremely long and high waves that produce extremely high wave pressure.
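
A minimal sketch of the proposed multi-line graph with two vertical axes. The file name and column names are assumptions for illustration, not the instrument's actual SD-card log format.

```python
# Sketch only: wave pressure (left axis) and wind speed (right axis) against time.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("quattuor45_log.csv")               # hypothetical export of the logged data

fig, ax_pressure = plt.subplots()
ax_wind = ax_pressure.twinx()                         # second y-axis for wind speed

ax_pressure.plot(df["time_s"], df["wave_pressure_kn_m2"], label="wave pressure")
ax_wind.plot(df["time_s"], df["wind_speed_m_s"], color="tab:orange", label="wind speed")

ax_pressure.set_xlabel("time [s]")
ax_pressure.set_ylabel("wave pressure [kN/m²]")
ax_wind.set_ylabel("wind speed [m/s]")
plt.show()
```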

Keywords: instrument, measuring unit, wave pressure metering, wind speed measurement

Procedia PDF Downloads 179