Search results for: fifth-generation district heating network
4688 Why is the Recurrence Rate of Residual or Recurrent Disease Following Endoscopic Mucosal Resection (EMR) of Oesophageal Dysplasias and T1 Tumours Higher in the Greater Midlands Cancer Network?
Authors: Harshadkumar Rajgor, Jeff Butterworth
Abstract:
Background: Barrett's oesophagus increases the risk of developing oesophageal adenocarcinoma. Over the last 40 years, there has been a six-fold increase in the incidence of oesophageal adenocarcinoma in the western world, and incidence rates are increasing at a greater rate than those of colon, breast and lung cancers. Endoscopic mucosal resection (EMR) is a relatively new technique used by two centres in the Greater Midlands Cancer Network. EMR can be used for curative or staging purposes for high-grade dysplasias and T1 tumours of the oesophagus. EMR is also suitable for those who are deemed high risk for oesophagectomy. EMR has a recurrence rate of 21% according to the Wiesbaden data. Method: A retrospective study of prospectively collected data was carried out involving 24 patients who had EMR for curative or staging purposes. Cases of residual or recurrent disease following EMR that required further treatment were investigated. Results: In 54% of cases residual or recurrent disease was suspected. 96% of patients were given clear and concise information regarding their diagnosis of high-grade dysplasia or T1 tumours. All 24 patients consulted the same specialist healthcare team. Conclusion: EMR is a safe and effective treatment for patients who have high-grade dysplasia and T1N0 tumours. In 54% of cases residual or recurrent disease was suspected. Initially, only single resections were undertaken. Multiple resections are now being carried out to reduce the risk of recurrence. Complications from EMR remain low in this series and consisted of a single episode of post-procedural bleeding. Keywords: endoscopic mucosal resection, oesophageal dysplasia, T1 tumours, cancer network
Procedia PDF Downloads 316
4687 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models
Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri
Abstract:
Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passengers' acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities available to reach it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can deliver travellers to a destination such as an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, an age above 55, and other factors are important for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified. The district generating the most trips was District 2 of Tehran, with 23 trips, and the most popular mode of transportation from that location was an online taxi, with 12 trips. Then, the variables that significantly separate the travel modes used to access the airport were investigated for all systems. In this respect, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference. It was also demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, passengers' willingness to access the airport via public transportation systems was explored in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change. Keywords: multimodal transportation, demand modeling, travel behavior, statistical models
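To make the binary-choice step concrete, the sketch below fits a logit model of the public-versus-private airport access decision with statsmodels. The data and the covariates (travel time, cost, extra luggage, trip purpose) are synthetic illustrations of the kind of variables discussed above, not the study's actual survey fields.

```python
# Minimal sketch of a binary mode-choice model of the kind described above.
# The data are synthetic and the covariates are illustrative assumptions,
# not the variables measured in the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 356  # same order of magnitude as the questionnaire sample

df = pd.DataFrame({
    "travel_time_min": rng.normal(45, 15, n),   # door-to-airport travel time
    "cost": rng.normal(300, 100, n),            # trip cost (arbitrary units)
    "extra_luggage": rng.integers(0, 3, n),     # suitcases beyond the first
    "business_trip": rng.integers(0, 2, n),     # 1 = business purpose
})

# Synthetic choice: public transport becomes less attractive with longer
# travel times, higher luggage counts, and business purpose.
utility = (-0.04 * df["travel_time_min"] - 0.004 * df["cost"]
           - 0.8 * df["extra_luggage"] - 0.5 * df["business_trip"] + 4.0)
df["chooses_public"] = (rng.random(n) < 1 / (1 + np.exp(-utility))).astype(int)

X = sm.add_constant(df[["travel_time_min", "cost", "extra_luggage", "business_trip"]])
model = sm.Logit(df["chooses_public"], X).fit(disp=False)
print(model.summary())  # coefficient signs mirror the reported preferences
```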
Procedia PDF Downloads 173
4686 Scientific Recommender Systems Based on Neural Topic Model
Authors: Smail Boussaadi, Hassina Aliane
Abstract:
With the rapid growth of scientific literature, it is becoming increasingly challenging for researchers to keep up with the latest findings in their fields. Academic professional networks play an essential role in connecting researchers and disseminating knowledge. To improve the user experience within these networks, effective article recommendation systems that provide personalized content are needed. Current recommendation systems often rely on collaborative filtering or content-based techniques. However, these methods have limitations, such as the cold-start problem and difficulty in capturing semantic relationships between articles. To overcome these challenges, we propose a new approach that combines BERTopic, a state-of-the-art topic modeling technique built on BERT (Bidirectional Encoder Representations from Transformers), with community detection algorithms in an academic professional network. Experiments confirm our performance expectations by showing good relevance and objectivity in the results. Keywords: scientific articles, community detection, academic social network, recommender systems, neural topic model
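A minimal sketch of the topic-modeling half of this pipeline using the BERTopic library is shown below; the cosine-similarity ranking over topic distributions is a simplified stand-in for the full community-detection step, and the 20 Newsgroups corpus merely substitutes for a real collection of scientific abstracts.

```python
# Minimal sketch of topic-based article recommendation with BERTopic.
# The cosine-similarity ranking is a simplified stand-in for the community
# detection pipeline, and 20 Newsgroups substitutes for scientific abstracts.
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from sklearn.metrics.pairwise import cosine_similarity

docs = fetch_20newsgroups(subset="all",
                          remove=("headers", "footers", "quotes")).data[:1000]

topic_model = BERTopic(calculate_probabilities=True)
topics, probs = topic_model.fit_transform(docs)   # probs: (n_docs, n_topics)

def recommend(doc_index, top_k=5):
    """Rank other documents by similarity of their topic distributions."""
    sims = cosine_similarity(probs[doc_index:doc_index + 1], probs)[0]
    ranked = sims.argsort()[::-1]
    return [i for i in ranked if i != doc_index][:top_k]

print(topic_model.get_topic_info().head())  # inspect the discovered topics
print("recommended for document 0:", recommend(0))
```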
Procedia PDF Downloads 97
4685 Conjugate Free Convection in a Square Cavity Filled with Nanofluid and Heated from Below by Spatial Wall Temperature
Authors: Ishak Hashim, Ammar Alsabery
Abstract:
The problem of conjugate free convection in a square cavity filled with nanofluid and heated from below by a spatially varying wall temperature is studied numerically using the finite difference method. A water-based nanofluid with copper nanoparticles is chosen for the investigation. The governing equations are solved over a wide range of nanoparticle volume fraction (0 ≤ φ ≤ 0.2), wave number (0 ≤ λ ≤ 4) and thermal conductivity ratio (0.44 ≤ Kr ≤ 6). The results are presented for these values of the governing parameters in terms of streamlines, isotherms and the average Nusselt number. It is found that the flow behavior and the heat distribution are clearly enhanced with increasing non-uniform heating. Keywords: conjugate free convection, square cavity, nanofluid, spatial temperature
Procedia PDF Downloads 359
4684 Intrusion Detection Using Dual Artificial Techniques
Authors: Rana I. Abdulghani, Amera I. Melhum
Abstract:
With the rapid growth in the use of computers over networks, and given the view of most computer security experts that the goal of building a fully secure system is never achieved effectively, intrusion detection systems (IDS) have become essential. This research compares two techniques for network intrusion detection. The first uses Particle Swarm Optimization (PSO), which falls within the field of swarm intelligence. Here, the algorithm is enhanced to obtain the minimum error rate by amending the cluster centers whenever a better fitness value is found during the training stages. Results show that this modification gives more efficient exploration than the original algorithm. The second technique uses a back-propagation neural network (BP NN). Finally, the results of the two methods were compared based on the NSL-KDD data sets for the construction and evaluation of intrusion detection systems. This research is only interested in clustering the given connection records into two categories (normal and abnormal). Practical experiments result in an intrusion detection rate of 99.183818% for EPSO (the enhanced PSO) and 69.446416% for the BP neural network. Keywords: IDS, SI, BP, NSL_KDD, PSO
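The sketch below illustrates the PSO-based clustering idea: each particle encodes the two cluster centers (normal and abnormal), and the fitness to be minimized is the within-cluster squared error. Synthetic 2-D points stand in for NSL-KDD features, and the inertia and acceleration constants are common textbook values rather than the paper's tuned parameters.

```python
# Minimal sketch of PSO-optimised cluster centres for two classes
# (normal / abnormal). Synthetic 2-D points stand in for NSL-KDD features.
import numpy as np

rng = np.random.default_rng(1)
normal = rng.normal([0, 0], 0.5, (200, 2))
attack = rng.normal([3, 3], 0.8, (200, 2))
data = np.vstack([normal, attack])

def fitness(centres):
    """Sum of squared distances of each point to its nearest centre."""
    d = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)

n_particles, k, dim = 20, 2, 2
pos = rng.uniform(data.min(), data.max(), (n_particles, k, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

labels = np.linalg.norm(data[:, None, :] - gbest[None, :, :], axis=2).argmin(axis=1)
print("final cluster centres:\n", gbest)
```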
Procedia PDF Downloads 382
4683 Traffic Light Detection Using Image Segmentation
Authors: Vaishnavi Shivde, Shrishti Sinha, Trapti Mishra
Abstract:
Traffic light detection from a moving vehicle is an important technology both for driver safety assistance functions and for autonomous driving in the city. This paper proposes a deep-learning-based traffic light recognition method that consists of a pixel-wise image segmentation technique and a fully convolutional network, i.e., the UNET architecture. A method for detecting the position and recognizing the state of traffic lights in video sequences is presented and evaluated using the Traffic Light Dataset, which contains masked traffic light image data. The first stage is detection, which is accomplished through image processing (image segmentation) techniques such as image cropping, color transformation, and segmentation of possible traffic lights. The second stage is recognition, which means identifying the color of the traffic light, i.e., knowing the state of the traffic light, and is achieved by using a convolutional neural network (UNET architecture). Keywords: traffic light detection, image segmentation, machine learning, classification, convolutional neural networks
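A compact U-Net-style encoder-decoder of the kind described above can be sketched in Keras as follows; the layer widths, the 128x128 input size, and the four-class output (background plus red/yellow/green) are illustrative assumptions rather than the paper's exact network.

```python
# Compact U-Net-style encoder-decoder for pixel-wise traffic-light
# segmentation. Layer widths and input size are illustrative choices.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(128, 128, 3), n_classes=4):
    inputs = layers.Input(input_shape)

    # Encoder: two downsampling stages with skip connections kept for later
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = conv_block(p2, 64)

    # Decoder: upsample and concatenate the matching encoder features
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)

    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(images, masks, epochs=20)  # masks: integer class index per pixel
```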
Procedia PDF Downloads 173
4682 Neural Network Based Compressor Flow Estimator in an Aircraft Vapor Cycle System
Authors: Justin Reverdi, Sixin Zhang, Serge Gratton, Said Aoues, Thomas Pellegrini
Abstract:
In Vapor Cycle Systems, the flow sensor plays a key role in different monitoring and control purposes. However, physical sensors can be expensive, inaccurate, heavy, cumbersome, or highly sensitive to vibrations, which is especially problematic when embedded into an aircraft. The conception of a virtual sensor based on other standard sensors is a good alternative. In this paper, a data-driven model using a Convolutional Neural Network is proposed to estimate the flow of the compressor. To fit the model to our dataset, we tested different loss functions. We show in our application that a Dynamic Time Warping based loss function called DILATE leads to better dynamical performance than the vanilla mean squared error (MSE) loss function. DILATE allows choosing a trade-off between static and dynamic performance.Keywords: deep learning, dynamic time warping, vapor cycle system, virtual sensor
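The virtual-sensor idea can be sketched as a small 1-D convolutional network that maps short windows of standard VCS measurements to an estimated compressor flow. The data below are synthetic, and the model is trained with the plain MSE loss; the DILATE loss discussed above would simply replace the loss function passed to compile.

```python
# Minimal sketch of the virtual flow sensor: a 1-D CNN maps short windows of
# standard measurements (e.g. pressures, temperatures, compressor speed) to a
# flow estimate. Synthetic data; trained with plain MSE for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

rng = np.random.default_rng(0)
n_windows, window_len, n_sensors = 2000, 64, 5
X = rng.normal(size=(n_windows, window_len, n_sensors)).astype("float32")
# synthetic stand-in target for the measured compressor flow
y = (X[:, :, 0].mean(axis=1, keepdims=True)
     + 0.1 * rng.normal(size=(n_windows, 1))).astype("float32")

inputs = layers.Input((window_len, n_sensors))
x = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
x = layers.Conv1D(32, 5, padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1)(x)              # estimated compressor flow
model = Model(inputs, outputs)

model.compile(optimizer="adam", loss="mse")   # a DILATE loss would go here
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print("MSE on synthetic data:", model.evaluate(X, y, verbose=0))
```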
Procedia PDF Downloads 146
4681 Increasing Number of NGOs and Their Conduct: A Case Study of Far Western Region of Nepal
Authors: Raju Thapa
Abstract:
Non-Governmental Organizations (NGOs) conduct activities in Nepal with the overall objective of strengthening peace, progress and prosperity in society. Based on its research objectives, this study has tried to trace the reasons behind the massive growth of NGOs and the trends that have shaped the handling and functioning of NGOs in the Kailali district. The outcomes of this research are quite embarrassing for NGO officials. Based on the findings of this research, NGOs are expected to review their guiding principles, integrity and conduct for the betterment of society. Keywords: NGO, trends, increasing, conduct, integrity, guiding principle, legal, governance, human resources, public trust, financial, collaboration, networking
Procedia PDF Downloads 412
4680 Human-Centric Sensor Networks for Comfort and Productivity in Offices: Integrating Environmental, Body Area Network, and Participatory Sensing
Authors: Chenlu Zhang, Wanni Zhang, Florian Schaule
Abstract:
The indoor environment in office buildings directly affects the comfort, productivity, health, and well-being of building occupants. Wireless environmental sensor networks have been deployed in many modern offices to monitor and control indoor environments. However, indoor environmental variables are not strong enough predictors of the comfort and productivity levels of every occupant due to personal differences, both physiological and psychological. This study proposes human-centric sensor networks that integrate wireless environmental sensors, body area network sensors and participatory sensing technologies to collect data from both the environment and the occupants and to support building operations. The sensor networks have been tested in one small-size and one medium-size office room with 22 participants for five months. Indoor environmental data (e.g., air temperature and relative humidity), physiological data (e.g., skin temperature and galvanic skin response), and psychological responses (e.g., comfort and self-reported productivity levels) were obtained from each participant and his/her workplace. The results show that: (1) participants have different physiological and psychological responses in the same environmental conditions; (2) physiological variables are more effective predictors of comfort and productivity levels than environmental variables. These results indicate that human-centric sensor networks can support human-centric building control and improve comfort and productivity in offices. Keywords: body area network, comfort and productivity, human-centric sensors, internet of things, participatory sensing
Procedia PDF Downloads 139
4679 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature
Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi
Abstract:
The sintering step in powder metallurgy (P/M) processes is very sensitive as it determines to a large extent the properties of the final component produced. Spark plasma sintering over the past decade has been extensively used in consolidating a wide range of materials including metallic alloy powders. This novel, non-conventional sintering method has proven to be advantageous offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles over conventional sintering methods. Ti6Al4V has been adjudged the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries being a light metal alloy with the capacity for fuel efficiency needed in these industries. The P/M route has been a promising method for the fabrication of parts made from Ti6Al4V alloy due to its cost and material loss reductions and the ability to produce near net and intricate shapes. However, the use of this alloy has been largely limited owing to its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behaviors of spark plasma sintered Ti6Al4V powders was investigated in this present study. Sintering of the alloy powders was performed in the 650–850°C temperature range at a constant heating rate, applied pressure and holding time of 100°C/min, 50 MPa and 5 min, respectively. Density measurements were carried out according to Archimedes’ principle and microhardness tests were performed on sectioned as-polished surfaces at a load of 100gf and dwell time of 15 s. Dry sliding wear tests were performed at varied sliding loads of 5, 15, 25 and 35 N using the ball-on-disc tribometer configuration with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks were carried out using SEM and EDX techniques. The density and hardness characteristics of sintered samples increased with increasing sintering temperature. Near full densification (99.6% of the theoretical density) and Vickers’ micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined, except at 25 N indicating better mechanical properties at high sintering temperatures. Worn surface analyses showed the wear mechanism was a synergy of adhesive and abrasive wears, although the former was prevalent.Keywords: hardness, powder metallurgy, spark plasma sintering, wear
Procedia PDF Downloads 273
4678 Fueling Efficient Reporting And Decision-Making In Public Health With Large Data Automation In Remote Areas, Neno Malawi
Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza
Abstract:
Background: Partners In Health – Malawi introduced an operational research project called the Primary Health Care (PHC) Surveys in 2020, which seeks to assess the progress of care delivery in the district. The study consists of five long surveys, namely Facility Assessment, General Patient, Provider, Sick Child, and Antenatal Care (ANC), primarily conducted in four health facilities in Neno district. These facilities are Neno district hospital, Dambe health centre, Chifunga and Matope. These annual surveys are usually conducted from January, and the target is to present the final report by June. Once data are collected and analyzed, a series of reviews takes place before the final report is reached. Initially, the manual process took over 9 months to produce the final report, and initial findings reported that only about 76.9% of the data added up when cross-checked with paper-based sources. Purpose: The aim of this approach is to move away from manually pulling the data, redoing the analysis, and reporting, a process often associated not only with delays and reporting inconsistencies but also with poor data quality if not done carefully. This automation approach was meant to utilize the features of new technologies to create visualizations, reports, and dashboards in Power BI that are fed directly from the data source – CommCare – and hence require only a single click of a 'refresh' button to populate the visualizations, reports, and dashboards with updated information at once. Methodology: We transformed paper-based questionnaires into electronic form using the CommCare mobile application. We further connected the CommCare mobile app directly to Power BI using an Application Programming Interface (API) connection as the data pipeline. This provided the chance to create visualizations, reports, and dashboards in Power BI. In contrast to the process of manually collecting data in paper-based questionnaires, entering them in ordinary spreadsheets, and conducting the analysis every time a report is prepared, the team utilized CommCare and Microsoft Power BI technologies. We utilized validations and logic in CommCare to capture data with fewer errors. We utilized Power BI features to host the reports online by publishing them as a cloud-computing process. We switched from sharing ordinary report files to sharing a link with potential recipients, giving them the freedom to dig deeper into additional findings within the Power BI dashboards and to export to any format of their choice. Results: This data automation approach reduced the research timeline from the initial 9 months to 5. It also improved the quality of the data findings from the original 76.9% to 98.9%. This brought confidence to draw conclusions from the findings that help in decision-making and gave opportunities for further research. Conclusion: These results suggest that automating the research data process has the potential to reduce the overall amount of time spent and improve the quality of the data. On this basis, the concept of data automation should be taken into serious consideration when conducting operational research for efficiency and decision-making. Keywords: reporting, decision-making, power BI, commcare, data automation, visualizations, dashboards
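The first hop of such a pipeline, pulling submitted forms from CommCare HQ over its REST API and flattening them into a table a Power BI dataset can refresh, can be sketched as below. The project space, credentials, and question identifiers are placeholders, and the endpoint and authentication scheme follow CommCare's Form Data API as documented and should be checked against the current version.

```python
# Minimal sketch of pulling submitted forms from CommCare HQ and flattening
# them to a CSV that Power BI can refresh. Domain, credentials, and question
# ids are placeholders; verify the endpoint against current CommCare docs.
import csv
import requests

DOMAIN = "your-project-space"          # placeholder
USER = "user@example.org"              # placeholder
API_KEY = "your-api-key"               # placeholder
url = f"https://www.commcarehq.org/a/{DOMAIN}/api/v0.5/form/"

headers = {"Authorization": f"ApiKey {USER}:{API_KEY}"}
params, rows = {"limit": 100}, []

while url:
    resp = requests.get(url, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    for form in payload["objects"]:
        answers = form.get("form", {})
        rows.append({
            "form_id": form.get("id"),
            "received_on": form.get("received_on"),
            "facility": answers.get("facility_name"),  # assumed question id
            "survey": answers.get("@name"),
        })
    nxt = payload.get("meta", {}).get("next")  # paginated Tastypie-style API
    url, params = (f"https://www.commcarehq.org{nxt}", {}) if nxt else (None, {})

if rows:
    with open("phc_survey_forms.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
print(f"wrote {len(rows)} forms for Power BI to ingest")
```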
Procedia PDF Downloads 116
4677 Low-Noise Amplifier Design for Improvement of Communication Range for Wake-Up Receiver Based Wireless Sensor Network Application
Authors: Ilef Ketata, Mohamed Khalil Baazaoui, Robert Fromm, Ahmad Fakhfakh, Faouzi Derbel
Abstract:
The integration of wireless communication, e.g., in real- or quasi-real-time applications, involves many challenges, such as energy consumption, communication range, latency, quality of service, and reliability. To minimize latency without increasing energy consumption, wake-up receiver (WuRx) nodes have been introduced in recent works. Low-noise amplifiers (LNAs) are introduced to improve WuRx sensitivity but increase the supply current severely. Different WuRx approaches exist, with always-on, power-gated, or duty-cycled receiver designs. This paper presents a comparative study for improving the communication range and decreasing the energy consumption of wireless sensor nodes. Keywords: wireless sensor network, wake-up receiver, duty-cycled, low-noise amplifier, envelope detector, range study
Procedia PDF Downloads 113
4676 Development of Palm Kernel Shell Lightweight Masonry Mortar
Authors: Kazeem K. Adewole
Abstract:
The need to construct building walls with lightweight masonry bricks/blocks and mortar, in order to reduce the weight of buildings and the cost of cooling/heating them in hot/cold climates, is growing, partly due to legislation on energy use and global warming. In this paper, the development of palm kernel shell masonry mortar (PKSMM), prepared with Portland cement and crushed PKS fine aggregate (an agricultural waste), is demonstrated. We show that PKSMM can be used as a lightweight mortar for the construction of lightweight masonry walls with better thermal insulation efficiency than the natural river sand commonly used for masonry mortar production. Keywords: building walls, fine aggregate, lightweight masonry mortar, palm kernel shell, wall thermal insulation efficacy
Procedia PDF Downloads 320
4675 Design of Compact Dual-Band Planar Antenna for WLAN Systems
Authors: Anil Kumar Pandey
Abstract:
A compact planar monopole antenna with dual-band operation suitable for wireless local area network (WLAN) application is presented in this paper. The antenna occupies an overall area of 18 ×12 mm2. The antenna is fed by a coplanar waveguide (CPW) transmission line and it combines two folded strips, which radiates at 2.4 and 5.2 GHz. In the proposed antenna, by optimally selecting the antenna dimensions, dual-band resonant modes with a much wider impedance matching at the higher band can be produced. Prototypes of the obtained optimized design have been simulated using EM solver. The simulated results explore good dual-band operation with -10 dB impedance bandwidths of 50 MHz and 2400 MHz at bands of 2.4 and 5.2 GHz, respectively, which cover the 2.4/5.2/5.8 GHz WLAN operating bands. Good antenna performances such as radiation patterns and antenna gains over the operating bands have also been observed. The antenna with a compact size of 18×12×1.6 mm3 is designed on an FR4 substrate with a dielectric constant of 4.4.Keywords: CPW antenna, dual-band, electromagnetic simulation, wireless local area network (WLAN)
Procedia PDF Downloads 209
4674 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning
Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih
Abstract:
Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields of study, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and Aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but also going to export for valuable foreign currency exchange. In Ethiopia, there is a lack of technologies for the classification and identification of Aromatic medicinal plant parts and disease type cured by aromatic medicinal plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and disease types cured by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process. To alleviate these challenges, few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of Aromatic medicinal plant parts with their corresponding disease type. The objective of the proposed study is to identify the aromatic medicinal plant parts and their disease type classification using computer vision technology. Therefore, this research initiated a model for the classification of aromatic medicinal plant parts and their disease type by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants besides roots, flowers, fruits, and latex. For this study, the researcher used RGB leaf images with a size of 128x128 x3. In this study, the researchers trained five cutting-edge models: convolutional neural network, Inception V3, Residual Neural Network, Mobile Network, and Visual Geometry Group. Those models were chosen after a comprehensive review of the best-performing models. The 80/20 percentage split is used to evaluate the model, and classification metrics are used to compare models. The pre-trained Inception V3 model outperforms well, with training and validation accuracy of 99.8% and 98.7%, respectively.Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network
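A minimal transfer-learning sketch matching the best-performing setup reported above, an ImageNet-pretrained Inception V3 backbone with a new classification head on 128x128x3 leaf images, is given below; the directory layout and the number of classes are assumptions.

```python
# Minimal transfer-learning sketch: frozen ImageNet Inception V3 backbone plus
# a new classification head on 128x128x3 leaf images. Directory layout and the
# class count are assumptions, not the study's actual dataset structure.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

IMG_SIZE, N_CLASSES = (128, 128), 12   # assumed number of plant-part/disease labels

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(*IMG_SIZE, 3))
base.trainable = False                  # freeze the pretrained feature extractor

inputs = layers.Input((*IMG_SIZE, 3))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)
model = Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```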
Procedia PDF Downloads 186
4673 Application of Artificial Neural Network in Initiating Cleaning Of Photovoltaic Solar Panels
Authors: Mohamed Mokhtar, Mostafa F. Shaaban
Abstract:
Among the challenges facing solar photovoltaic (PV) systems in the United Arab Emirates (UAE), dust accumulation on solar panels is considered the most severe problem that faces the growth of solar power plants. The accumulation of dust on the solar panels significantly degrades output from these panels. Hence, solar PV panels have to be cleaned manually or using costly automated cleaning methods. This paper focuses on initiating cleaning actions when required to reduce maintenance costs. The cleaning actions are triggered only when the dust level exceeds a threshold value. The amount of dust accumulated on the PV panels is estimated using an artificial neural network (ANN). Experiments are conducted to collect the required data, which are used in the training of the ANN model. Then, this ANN model will be fed by the output power from solar panels, ambient temperature, and solar irradiance, and thus, it will be able to estimate the amount of dust accumulated on solar panels at these conditions. The model was tested on different case studies to confirm the accuracy of the developed model.Keywords: machine learning, dust, PV panels, renewable energy
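The estimator described above can be sketched as a small regression network that maps (output power, ambient temperature, irradiance) to a dust level and triggers cleaning above a threshold. The data, the model size, and the threshold are synthetic placeholders for the experimentally collected values.

```python
# Minimal sketch of the dust-level estimator: a small neural network maps
# (panel power, ambient temperature, irradiance) to an estimated dust level.
# All data and the cleaning threshold are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
irradiance = rng.uniform(200, 1000, n)          # W/m^2
temperature = rng.uniform(25, 45, n)            # deg C
dust = rng.uniform(0, 1, n)                     # normalised dust level (target)
# synthetic panel output: scales with irradiance, degraded by dust and heat
power = 0.18 * irradiance * (1 - 0.4 * dust) * (1 - 0.004 * (temperature - 25))

X = np.column_stack([power, temperature, irradiance])
X_tr, X_te, y_tr, y_te = train_test_split(X, dust, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))

CLEANING_THRESHOLD = 0.5                        # assumed trigger level
for level in model.predict(X_te[:5]):
    action = "trigger cleaning" if level > CLEANING_THRESHOLD else "no action"
    print(f"estimated dust {level:.2f} -> {action}")
```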
Procedia PDF Downloads 144
4672 Emissivity Analysis of Hot-Dip Galvanized Steel in Fire
Authors: Christian Gaigl, Martin Mensinger
Abstract:
Where a fire resistance rating is required, it has to be proven that the load-bearing behavior of a steel construction under fire exposure still meets the structural demands. The high cost of passive fire protection that satisfies these requirements frequently results in a concrete solution instead. One method to optimize these expenses is to determine the critical temperature according to Eurocode DIN EN 1993-1-2. For this purpose, the positive effect of hot-dip galvanized surface layers on the temperature development of steel members in the accidental situation of fire exposure has been investigated. The test results show significantly better heating behavior of hot-dip galvanized steel components compared to normal steel specimens. In many cases this allows unprotected steel members to meet an R30 (30 minutes of ISO fire) fire protection requirement and therefore provides an economic added value. Keywords: fire resistance, hot-dip galvanizing, steel constructions, R30 requirement, emissivity
Procedia PDF Downloads 262
4671 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. Building operation description in energy models, especially energy usages and users’ behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a sensor network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over one year and a half and cover building energy behavior – thermal and electricity, indoor environment, inhabitants’ comfort, occupancy, occupants’ behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades software), where the buildings’ features are implemented according to the buildings’ thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features regarding each end-use. These features are then compared with the collected post-occupancy data. Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap on an existing residential case study under deep retrofit actions. It highlights the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage or heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases. Keywords: calibration, building energy modeling, performance gap, sensor network
Procedia PDF Downloads 160
4670 Relay Mining: Verifiable Multi-Tenant Distributed Rate Limiting
Authors: Daniel Olshansky, Ramiro Rodrıguez Colmeiro
Abstract:
Relay Mining presents a scalable solution employing probabilistic mechanisms and crypto-economic incentives to estimate RPC volume usage, facilitating decentralized multitenant rate limiting. Network traffic from individual applications can be concurrently serviced by multiple RPC service providers, with costs, rewards, and rate limiting governed by a native cryptocurrency on a distributed ledger. Building upon established research in token bucket algorithms and distributed rate-limiting penalty models, our approach harnesses a feedback loop control mechanism to adjust the difficulty of mining relay rewards, dynamically scaling with network usage growth. By leveraging crypto-economic incentives, we reduce coordination overhead costs and introduce a mechanism for providing RPC services that are both geopolitically and geographically distributed.Keywords: remote procedure call, crypto-economic, commit-reveal, decentralization, scalability, blockchain, rate limiting, token bucket
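The token-bucket building block the approach draws on can be sketched as follows: each application is granted tokens at a sustained rate up to a burst capacity, and a relay is admitted only if a token is available. The on-chain accounting and the probabilistic volume estimation are outside this sketch.

```python
# Minimal sketch of a token-bucket rate limiter: tokens refill at a sustained
# rate up to a burst capacity, and a relay is serviced only when a token is
# available. On-chain accounting and volume estimation are out of scope here.
import time

class TokenBucket:
    def __init__(self, rate_per_s: float, capacity: float):
        self.rate = rate_per_s          # sustained relays per second
        self.capacity = capacity        # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# one bucket per (application, service provider) pair
bucket = TokenBucket(rate_per_s=10, capacity=20)
served = sum(bucket.allow() for _ in range(100))
print(f"{served} of 100 back-to-back relays admitted")  # roughly the burst of 20
```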
Procedia PDF Downloads 54
4669 Alternator Fault Detection Using Wigner-Ville Distribution
Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi
Abstract:
This paper describes a two-stage learning-based fault detection procedure for alternators. The procedure considers three machine conditions, namely a shortened brush, a high-impedance relay, and a healthy alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (artificial neural network) and an SVM (support vector machine) were compared to determine which performs better, evaluated by the mean squared error criterion. The modules work together to detect possible faulty conditions of operating machines. To test the method's performance, a signal database was prepared by creating the different conditions on a laboratory setup. The results show that satisfactory performance is achieved by implementing this method. Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution
Procedia PDF Downloads 374
4668 Exploring Stakeholders’ Perceptions of the Implementation of the Door-to-Door Vaccination Campaign for the Oral Polio Vaccine (NOPV2) In Uganda: A Qualitative Study
Authors: Elizabeth B. Katana, Brenda N. Simbwa, Josephine Namayanja, Bob O. Amodan, Edirisa J. Nsubuga, Eva A. O. Laker
Abstract:
Background: Understanding stakeholders’ perceptions towards the implementation of a mass vaccination campaign is important to ensure the design of better strategies to address challenges. We explored stakeholders’ perceptions of the implementation of a nationwide door-to-door mass vaccination campaign for the oral polio vaccine (nOPV2) in Uganda for the two rounds that occurred in January and November 2022. Methods: A qualitative study was conducted among stakeholders who participated in the campaign implementation from 8 districts in Uganda using random sampling. We conducted 46 In-depth interviews lasting 30 – 40 minutes with 6 national/central supervisors, 12 district, 14 sub-county, and 14 parish-level supervisors. Stakeholders were asked about their experiences in the campaign implementation, including challenges faced and their opinions of the campaign impact and use of the door-to-door strategy. Data were analyzed thematically in line with the major campaign activities. Results: Most of the stakeholders were primarily concerned about poor planning, inadequate training of vaccination teams, community resistance including schools, challenges with recruitment and teaming of vaccinators, poor and delayed payments, lack of logistics and motivation for vaccination teams, the timing of the activities and implementing amidst COVID-19 and Ebola. The stakeholders believed that the first round was not well planned and implemented, while the second round was leveraged in their previous experiences. On the other hand, some positive experiences were noted with regard to communication, advocacy and mobilization, vaccine delivery and distribution, district readiness assessments, and cold chain management. Conclusion: This study identified many challenges that were faced in the implementation of the door-to-door mass campaign for nOPV2 in Uganda. This study identified that more needs to be done to improve door-to-door mass campaigns with a focus on motivating the implementers. These findings highlight the need for conducting performance reviews, improved planning, especially routine updates and verification of target populations and training in microplanning, and adequate mapping of community resistance to inform the implementation of future mass campaigns.Keywords: mass polio vaccination campaigns, door-to-door strategy, stakeholders' perceptions, implementation challenges
Procedia PDF Downloads 70
4667 Sea of Light: A Game-Based Approach for Evidence-Centered Assessment of Collaborative Problem Solving
Authors: Svenja Pieritz, Jakab Pilaszanovich
Abstract:
Collaborative Problem Solving (CPS) is recognized as being one of the most important skills of the 21st century with having a potential impact on education, job selection, and collaborative systems design. Therefore, CPS has been adopted in several standardized tests, including the Programme for International Student Assessment (PISA) in 2015. A significant challenge of evaluating CPS is the underlying interplay of cognitive and social skills, which requires a more holistic assessment. However, the majority of the existing tests are using a questionnaire-based assessment, which oversimplifies this interplay and undermines ecological validity. Two major difficulties were identified: Firstly, the creation of a controllable, real-time environment allowing natural behaviors and communication between at least two people. Secondly, the development of an appropriate method to collect and synthesize both cognitive and social metrics of collaboration. This paper proposes a more holistic and automated approach to the assessment of CPS. To address these two difficulties, a multiplayer problem-solving game called Sea of Light was developed: An environment allowing students to deploy a variety of measurable collaborative strategies. This controlled environment enables researchers to monitor behavior through the analysis of game actions and chat. The according solution for the statistical model is a combined approach of Natural Language Processing (NLP) and Bayesian network analysis. Social exchanges via the in-game chat are analyzed through NLP and fed into the Bayesian network along with other game actions. This Bayesian network synthesizes evidence to track and update different subdimensions of CPS. Major findings focus on the correlations between the evidences collected through in- game actions, the participants’ chat features and the CPS self- evaluation metrics. These results give an indication of which game mechanics can best describe CPS evaluation. Overall, Sea of Light gives test administrators control over different problem-solving scenarios and difficulties while keeping the student engaged. It enables a more complete assessment based on complex, socio-cognitive information on actions and communication. This tool permits further investigations of the effects of group constellations and personality in collaborative problem-solving.Keywords: bayesian network, collaborative problem solving, game-based assessment, natural language processing
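How chat-derived (NLP) evidence and game actions can be synthesized by a Bayesian network into an updated CPS estimate can be sketched as below; the structure, the binary states, and all probabilities are illustrative assumptions, and pgmpy is used only as an example library, not the tool behind Sea of Light.

```python
# Minimal sketch of synthesising chat-based (NLP) evidence and game actions in
# a Bayesian network to update a CPS estimate. Structure, states, and all
# probabilities are illustrative assumptions; pgmpy is just an example library.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two evidence nodes feed one latent CPS-skill node (all binary: 0=low, 1=high)
model = BayesianNetwork([("ChatQuality", "CPS"), ("GameActions", "CPS")])

cpd_chat = TabularCPD("ChatQuality", 2, [[0.5], [0.5]])
cpd_actions = TabularCPD("GameActions", 2, [[0.5], [0.5]])
cpd_cps = TabularCPD(
    "CPS", 2,
    # columns: (chat, actions) = (0,0), (0,1), (1,0), (1,1)
    values=[[0.9, 0.6, 0.5, 0.1],    # P(CPS = low | evidence)
            [0.1, 0.4, 0.5, 0.9]],   # P(CPS = high | evidence)
    evidence=["ChatQuality", "GameActions"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_chat, cpd_actions, cpd_cps)
assert model.check_model()

# Update the CPS estimate as in-game evidence arrives
infer = VariableElimination(model)
posterior = infer.query(["CPS"], evidence={"ChatQuality": 1, "GameActions": 1})
print(posterior)   # probability of high CPS given strong chat and action evidence
```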
Procedia PDF Downloads 132
4666 An Evaluation of the Lae City Road Network Improvement Project
Authors: Murray Matarab Konzang
Abstract:
The Lae Port Development Project, the Four Lane Highway and other developments in the extraction industry that have a direct road link to Lae City are predicted to have a significant impact on its road network system. This paper evaluates the Lae roads improvement program, with forecasts on planning and economics and on the installation of bypasses to ease congestion, provide an effective and convenient transport service for bulk goods, and reduce travel time. A land-use transportation study and plans for a local area traffic management scheme are considered. City roads face an increased volume of traffic, inadequate road pavement widths in places, poor transport plans, and facilities that do not meet this transportation demand. Lae also has a drainage system that might not hold a 100-year flood. Proper evaluation, planning, design and intersection analysis are needed to evaluate the road network system and thus recommend improvements and estimate future growth. Repetitive and cyclic loading by heavy commercial vehicles with different axle configurations acts on the flexible pavement, which weakens and tears the pavement surface so that small cracks occur. Rainwater seeps through, and over time it creates potholes. Effective planning starts from experimental research and appropriate design standards to enable firm embankments, proper drains and quality pavement material. This paper addresses traffic problems as well as road pavement, the capacities of intersections, and pedestrian flow during peak hours. The outcome of this research will be to identify heavily trafficked road sections and recommend treatments to reduce traffic congestion, road classification, and proposals for bypass routes and improvement. The first part of this study describes transport- and traffic-related problems within the city. The second part identifies the challenges imposed by traffic- and road-related problems, and the third recommends solutions after analyzing traffic data that indicate the current capacities of road intersections, finally recommending treatments for improvement and future growth. Keywords: Lae, road network, highway, vehicle traffic, planning
Procedia PDF Downloads 358
4665 Heavy Metal Concentration in Orchard Area, Amphawa District, Samut Songkram Province, Thailand
Authors: Sisuwan Kaseamsawat, Sivapan Choo-In
Abstract:
A study was conducted from May to July 2013 with the aim of determining heavy metal concentrations in an orchard area. Sixty samples were collected and analyzed for cadmium (Cd), copper (Cu), lead (Pb), and zinc (Zn) by atomic absorption spectrophotometry (AAS). The heavy metal concentrations in the sediment of orchards that use chemicals were Cd 1.13 ± 0.26 mg/l, Cu 8.00 ± 1.05 mg/l, Pb 13.16 ± 2.01 mg/l and Zn 37.41 ± 3.20 mg/l. The heavy metal concentrations in the sediment of orchards that do not use chemicals were Cd 1.28 ± 0.50 mg/l, Cu 7.60 ± 1.20 mg/l, Pb 29.87 ± 4.88 mg/l and Zn 21.79 ± 2.98 mg/l. Statistical analysis showed that the difference between sediment from orchards that use chemicals and orchards that do not was statistically significant at the 0.05 level for Cd and Pb, while there was no statistically significant difference for Cu and Zn. Keywords: heavy metal, orchard, pollution and monitoring, sediment
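The group comparison reported above can be illustrated with an independent two-sample t-test at the 0.05 level; the per-sample lead concentrations below are synthetic placeholders generated around the reported means, not the study's raw data.

```python
# Minimal sketch of the statistical comparison between the two orchard groups.
# The concentration values are synthetic placeholders drawn around the reported
# means, used only to illustrate the independent two-sample t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# hypothetical Pb concentrations (mg/l) in sediment, 30 samples per group
pb_chemical = rng.normal(13.2, 2.0, 30)
pb_no_chemical = rng.normal(29.9, 4.9, 30)

t, p = ttest_ind(pb_chemical, pb_no_chemical, equal_var=False)
verdict = "statistically different" if p < 0.05 else "not statistically different"
print(f"Pb: t = {t:.2f}, p = {p:.4g} -> groups are {verdict} at the 0.05 level")
```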
Procedia PDF Downloads 385
4664 Decision Support System for Fetus Status Evaluation Using Cardiotocograms
Authors: Oyebade K. Oyedotun
Abstract:
The cardiotocogram is a technical recording of the heartbeat rate and uterine contractions of a fetus during pregnancy. During pregnancy, several complications can occur to both the mother and the fetus; hence it is very crucial that medical experts are able to find technical means to check the healthiness of the mother and especially the fetus. It is very important that the fetus develops as expected in stages during the pregnancy period; however, the task of monitoring the health status of the fetus is not that which is easily achieved as the fetus is not wholly physically available to medical experts for inspection. Hence, doctors have to resort to some other tests that can give an indication of the status of the fetus. One of such diagnostic test is to obtain cardiotocograms of the fetus. From the analysis of the cardiotocograms, medical experts can determine the status of the fetus, and therefore necessary medical interventions. Generally, medical experts classify examined cardiotocograms into ‘normal’, ‘suspect’, or ‘pathological’. This work presents an artificial neural network based decision support system which can filter cardiotocograms data, producing the corresponding statuses of the fetuses. The capability of artificial neural network to explore the cardiotocogram data and learn features that distinguish one class from the others has been exploited in this research. In this research, feedforward and radial basis neural networks were trained on a publicly available database to classify the processed cardiotocogram data into one of the three classes: ‘normal’, ‘suspect’, or ‘pathological’. Classification accuracies of 87.8% and 89.2% were achieved during the test phase of the trained network for the feedforward and radial basis neural networks respectively. It is the hope that while the system described in this work may not be a complete replacement for a medical expert in fetus status evaluation, it can significantly reinforce the confidence in medical diagnosis reached by experts.Keywords: decision support, cardiotocogram, classification, neural networks
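A minimal sketch of the feedforward branch of such a system is shown below: a multilayer perceptron trained to map cardiotocogram features to the three statuses (normal, suspect, pathological). The "CTG.csv" file with an "NSP" label column mirrors the layout of the public UCI Cardiotocography dataset and is an assumption; the radial basis variant is not shown.

```python
# Minimal sketch of the feedforward classifier: an MLP maps cardiotocogram
# features to normal / suspect / pathological. "CTG.csv" with an "NSP" label
# column mirrors the public UCI Cardiotocography layout and is an assumption.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, classification_report

df = pd.read_csv("CTG.csv")                       # one row per examined CTG
X = df.drop(columns=["NSP"]).values               # extracted CTG features
y = df["NSP"].values                              # 1=normal, 2=suspect, 3=pathological

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)

y_pred = clf.predict(scaler.transform(X_te))
print("test accuracy:", accuracy_score(y_te, y_pred))
print(classification_report(y_te, y_pred,
                            target_names=["normal", "suspect", "pathological"]))
```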
Procedia PDF Downloads 332
4663 Optimal Design of the Power Generation Network in California: Moving towards 100% Renewable Electricity by 2045
Authors: Wennan Long, Yuhao Nie, Yunan Li, Adam Brandt
Abstract:
To fight climate change, the California government issued Senate Bill No. 100 (SB-100) in September 2018, which aims at achieving a target of 100% renewable electricity by the end of 2045. In this case study, a capacity expansion problem is solved using a binary quadratic programming model. The optimal locations and capacities of the potential renewable power plants (i.e., solar, wind, biomass, geothermal and hydropower), the phase-out schedule of existing fossil-based (natural gas) power plants and the transmission of electricity across the entire network are determined with the minimal total annualized cost measured by net present value (NPV). The results show that the renewable electricity contribution could increase to 85.9% by 2030 and reach 100% by 2035. Fossil-based power plants will be totally phased out around 2035, and solar and wind will become the dominant renewable energy resources in the California electricity mix. Keywords: 100% renewable electricity, California, capacity expansion, mixed integer non-linear programming
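The capacity-expansion decision can be sketched as a small mixed-integer program: binary build choices plus continuous capacities must cover a demand target at minimum annualized cost. The three candidate plants, their costs, and the demand are toy numbers, and a plain MILP in PuLP stands in for the paper's binary quadratic formulation.

```python
# Minimal sketch of a capacity-expansion decision: binary build choices plus
# continuous capacities must meet a demand target at minimum annualized cost.
# Candidate plants, costs, and demand are toy numbers; a plain MILP stands in
# for the paper's binary quadratic formulation.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

candidates = {          # per-MW annualized cost, fixed build cost, max capacity (MW)
    "solar_desert": (60_000, 2_000_000, 500),
    "wind_coast": (55_000, 3_000_000, 400),
    "geothermal_north": (70_000, 5_000_000, 300),
}
DEMAND_MW = 700         # capacity target for the planning year

prob = LpProblem("capacity_expansion", LpMinimize)
build = {p: LpVariable(f"build_{p}", cat=LpBinary) for p in candidates}
cap = {p: LpVariable(f"cap_{p}", lowBound=0) for p in candidates}

# objective: fixed build cost plus per-MW annualized cost
prob += lpSum(candidates[p][1] * build[p] + candidates[p][0] * cap[p]
              for p in candidates)

# capacity is only available if the plant is built, and total must cover demand
for p, (_, _, cap_max) in candidates.items():
    prob += cap[p] <= cap_max * build[p]
prob += lpSum(cap.values()) >= DEMAND_MW

prob.solve()
for p in candidates:
    print(p, "build" if build[p].value() == 1 else "skip", cap[p].value(), "MW")
print("total annualized cost:", value(prob.objective))
```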
Procedia PDF Downloads 171
4662 Historical Hashtags: An Investigation of the #CometLanding Tweets
Authors: Noor Farizah Ibrahim, Christopher Durugbo
Abstract:
This study aims to investigate how the Twittersphere reacted during the recent historic event of a robotic landing on a comet. The event concerned Philae, a robotic lander from the European Space Agency (ESA), which successfully made the first-ever rendezvous and touchdown of its kind on a comet nucleus on November 12, 2014. In order to understand how Twitter is used in practice to spread messages on historic events, we conducted an analysis of one week of tweet feeds containing the #CometLanding hashtag. We studied the trends of tweets, the diffusion of the information and the characteristics of the social network created. The results indicated that using Twitter as a platform enables online communities to engage in and spread a historic event through the social media network (e.g., tweets, retweets, mentions and replies). In addition, it was found that comprehensible and understandable hashtags can influence users to follow the same tweet stream, compared with laborious hashtags that are difficult for users in online communities to understand. Keywords: diffusion of information, hashtag, social media, Twitter
Procedia PDF Downloads 325
4661 Contribution to the Success of the Energy Audit in the Industrial Environment: A Case Study about Audit of Interior Lighting for an Industrial Site in Morocco
Authors: Abdelkarim Ait Brik, Abdelaziz Khoukh, Mustapha Jammali, Hamid Chaikhy
Abstract:
The energy audit is the essential first step to ensure a good definition of energy control actions. An in-depth study of the various energy-consuming equipment makes it possible to determine the actions and investments with the best cost for the company. The analysis focuses on the energy consumption of production equipment and utilities (lighting, heating, air conditioning, ventilation, transport). Successful implementation of this approach requires, however, that a number of prerequisites be taken into account. This paper proposes a number of useful recommendations concerning the energy audit in order to achieve better results, together with a case study of the interior lighting audit of a Moroccan company showing the gains that can be made through such an audit. Keywords: energy audit, energy diagnosis, consumption, electricity, energy efficiency, lighting audit
Procedia PDF Downloads 695
4660 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks
Authors: Yu Zhang, Ye Tian, Fang Ye Yixuan Kang
Abstract:
The capacity of conventional cellular networks has reached its upper bound, and this can be addressed by introducing low-cost, easy-to-deploy femtocells. The spectrum interference issue becomes more critical as value-added multimedia services grow in two-tier cellular networks. Spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of the channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function on the distance weight ratio, aimed at suppressing co-channel interference within the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. Under the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing the spectrum allocation in a distributed fashion. Numerical results show that the signal-to-interference-and-noise ratio can be clearly improved through the spectrum allocation scheme and that users' downlink quality of service can be satisfied. Moreover, simulation results show that the average spectrum efficiency of the cellular network can be significantly improved. Keywords: femtocell networks, game theory, interference mitigation, spectrum allocation
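One plausible reading of the channel-selection game, sketched below, has each femto base station iteratively best-respond by choosing the channel that minimizes a negative-logarithm interference cost on a distance ratio. The utility form, constants, and topology are illustrative interpretations of the abstract, not the paper's exact formulation.

```python
# Minimal sketch of a non-cooperative channel-selection game: each femto base
# station best-responds by picking the channel that minimises a negative-log
# interference cost on a distance ratio. Utility, constants, and topology are
# illustrative interpretations of the abstract, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)
n_fbs, n_channels = 8, 3
pos = rng.uniform(0, 100, (n_fbs, 2))              # femtocell positions (m)
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)                     # no self-interference
d_ref = 100.0                                      # reference distance weight

choice = rng.integers(0, n_channels, n_fbs)        # initial random channels

def cost(i, ch, choice):
    """Interference cost for station i on channel ch: nearby co-channel
    neighbours (small distance ratio) are penalised via -log(d / d_ref)."""
    same = [j for j in range(n_fbs) if j != i and choice[j] == ch]
    return sum(-np.log(dist[i, j] / d_ref) for j in same)

for _ in range(20):                                # best-response iterations
    changed = False
    for i in range(n_fbs):
        costs = [cost(i, ch, choice) for ch in range(n_channels)]
        best = int(np.argmin(costs))
        if best != choice[i]:
            choice[i], changed = best, True
    if not changed:                                # Nash equilibrium reached
        break

print("channel assignment at equilibrium:", choice)
```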
Procedia PDF Downloads 156
4659 A Learning Automata Based Clustering Approach for Underwater Sensor Networks to Reduce Energy Consumption
Authors: Motahareh Fadaei
Abstract:
Wireless sensor networks that are used to monitor a particular environment are formed from a large number of sensor nodes. The role of these sensors is to sense specific parameters of the environment and to communicate them. In these networks, the most important challenge is the management of energy usage. Clustering is one of the methods broadly used to face this challenge. In this paper, a distributed clustering protocol based on learning automata is proposed for underwater wireless sensor networks. The proposed algorithm, called LA-Clustering, forms clusters at the same energy level, based on the energy level of nodes and the connection radius, regardless of the size and structure of the sensor network. The proposed approach is simulated and compared with some other protocols considering metrics such as network lifetime, number of alive nodes, and number of transmitted data. The simulation results demonstrate the efficiency of the proposed approach. Keywords: clustering, energy consumption, learning automata, underwater sensor networks
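The learning-automaton building block can be sketched as follows: a node keeps a probability vector over candidate cluster heads and reinforces choices that are rewarded, here by a noisy residual-energy signal. The linear reward-inaction update is a generic scheme and the energy model is synthetic; this is not the full LA-Clustering protocol.

```python
# Minimal sketch of the learning-automaton building block: a node keeps a
# probability vector over candidate cluster heads and reinforces rewarded
# choices with a linear reward-inaction (L_RI) update. The reward signal and
# energy values are synthetic; this is not the full LA-Clustering protocol.
import numpy as np

rng = np.random.default_rng(0)
residual_energy = np.array([0.9, 0.5, 0.7])   # candidate cluster heads (normalised)
n_actions = len(residual_energy)
p = np.full(n_actions, 1.0 / n_actions)       # action probabilities
alpha = 0.05                                  # reward (learning) rate

for step in range(500):
    a = rng.choice(n_actions, p=p)            # pick a cluster head
    observed = residual_energy[a] + rng.normal(0, 0.1)
    rewarded = observed >= residual_energy.max() - 0.15   # noisy success signal
    if rewarded:                              # linear reward-inaction update
        p = p + alpha * (np.eye(n_actions)[a] - p)
    # on penalty, L_RI leaves the probabilities unchanged

print("learned preference over cluster heads:", np.round(p, 2))
# the automaton converges towards the highest-energy head (index 0)
```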
Procedia PDF Downloads 314