Search results for: panel data method
35872 Numerical Simulation of Two-Dimensional Flow over a Stationary Circular Cylinder Using Feedback Forcing Scheme Based Immersed Boundary Finite Volume Method
Authors: Ranjith Maniyeri, Ahamed C. Saleel
Abstract:
Two-dimensional fluid flow over a stationary circular cylinder is one of the benchmark problems in the field of fluid-structure interaction in computational fluid dynamics (CFD). Motivated by this, in the present work, a two-dimensional computational model is developed using an improved version of the immersed boundary method which combines the feedback forcing scheme of the virtual boundary method with Peskin's regularized delta function approach. Lagrangian coordinates are used to represent the cylinder and Eulerian coordinates are used to describe the fluid flow. A two-dimensional Dirac delta function is used to transfer quantities between the solid and fluid domains. Further, the continuity and momentum equations governing the fluid flow are solved using a fractional step based finite volume method on a staggered Cartesian grid system. The developed code is validated by comparing the values of the drag coefficient obtained for different Reynolds numbers with those reported by other researchers. The flow behavior is also well captured in numerical simulations for different Reynolds numbers. The stability of the improved version of the immersed boundary method is analyzed for different values of the feedback forcing coefficients.
Keywords: Feedback Forcing Scheme, Finite Volume Method, Immersed Boundary Method, Navier-Stokes Equations
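The improved scheme computes a Lagrangian feedback force from the velocity error at the boundary markers and spreads it to the Eulerian grid with a regularized delta function before the fractional step solve. A minimal sketch of these two ingredients is given below; the coefficient values, grid variables, and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def delta_1d(r, h):
    """Peskin's 4-point regularized delta function in one direction."""
    r = np.abs(r) / h
    phi = np.zeros_like(r)
    near, far = r < 1.0, (r >= 1.0) & (r < 2.0)
    phi[near] = (3 - 2 * r[near] + np.sqrt(1 + 4 * r[near] - 4 * r[near] ** 2)) / 8
    phi[far] = (5 - 2 * r[far] - np.sqrt(-7 + 12 * r[far] - 4 * r[far] ** 2)) / 8
    return phi / h

def feedback_force(u_ib, u_desired, err_int, dt, alpha=-1.0e4, beta=-50.0):
    """Virtual-boundary forcing F = alpha * integral(u - U) dt + beta * (u - U)."""
    err = u_ib - u_desired
    err_int = err_int + err * dt        # running time integral of the velocity error
    return alpha * err_int + beta * err, err_int

def spread_to_grid(force_lag, markers, x, y, h, ds):
    """Spread Lagrangian marker forces to the Eulerian grid via the 2D delta function."""
    f_grid = np.zeros((y.size, x.size))
    for (xk, yk), fk in zip(markers, force_lag):
        f_grid += fk * np.outer(delta_1d(y - yk, h), delta_1d(x - xk, h)) * ds
    return f_grid
```

The forcing returned by `feedback_force` would be evaluated at each Lagrangian marker on the cylinder and then spread with `spread_to_grid` as a source term in the momentum equations of the fractional step solver.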
Procedia PDF Downloads 305
35871 Generic Data Warehousing for Consumer Electronics Retail Industry
Authors: S. Habte, K. Ouazzane, P. Patel, S. Patel
Abstract:
The dynamic and highly competitive nature of the consumer electronics retail industry means that businesses in this industry are experiencing different decision making challenges in relation to pricing, inventory control, consumer satisfaction and product offerings. To overcome the challenges facing retailers and create opportunities, we propose a generic data warehousing solution which can be applied to a wide range of consumer electronics retailers with minimal configuration. The solution includes a dimensional data model, a template SQL script, a high-level architectural description, an ETL tool developed using C#, a set of APIs, and data access tools. It has been successfully applied by ASK Outlets Ltd UK, resulting in improved productivity and enhanced sales growth.
Keywords: consumer electronics, data warehousing, dimensional data model, generic, retail industry
Procedia PDF Downloads 413
35870 Parameter Estimation for Contact Tracing in Graph-Based Models
Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar
Abstract:
We adopt a maximum-likelihood framework to estimate the parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited for realistic situations (the contact tracing probability is small, or the probability for the detection of index cases is small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution, in particular, fit the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.
Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference
Procedia PDF Downloads 78
35869 Opportunities for Precision Feed in Apiculture
Authors: John Michael Russo
Abstract:
Honeybees are important to our food system and continue to suffer from high rates of colony loss. Precision feed has brought many benefits to livestock cultivation, and these should transfer to apiculture. However, apiculture has unique challenges. The objective of this research is to understand how principles of precision agriculture, applied to apiculture and feed specifically, might effectively improve state-of-the-art cultivation. The methodology surveys apicultural practice to build a model for assessment. First, a review of apicultural motivators is made. Feed method is then evaluated. Finally, precision feed methods are examined as accelerants with potential to advance the effectiveness of feed practice. Six important motivators emerge: colony loss, disease, climate change, site variance, operational costs, and competition. Feed practice itself is used to compensate for environmental variables. The research finds that the current state of the art in apiculture feed focuses on critical challenges in the management of feed schedules which satisfy the requirements of the bees, preserve potency, optimize environmental variables, and manage costs. Many of the challenges are most acute when feed is used to dispense medication. Technologies such as RNA treatments have even more rigorous demands. Precision feed solutions focus on strategies which accommodate the specific needs of individual livestock. A major component is data: precision feed integrates precise data with methods that respond to individual needs. There is enormous opportunity for precision feed to improve apiculture through the integration of precision data with policies to translate data into optimized action in the apiary, particularly through automation.
Keywords: precision agriculture, precision feed, apiculture, honeybees
Procedia PDF Downloads 78
35868 Sequential Data Assimilation with High-Frequency (HF) Radar Surface Current
Authors: Lei Ren, Michael Hartnett, Stephen Nash
Abstract:
Abundant surface current measurements from a HF radar system in a coastal area are assimilated into a model to improve its forecasting ability. A simple sequential data assimilation scheme, Direct Insertion (DI), is applied to update the model forecast states. The influence of Direct Insertion data assimilation over time is analyzed at one reference point. Vector maps of surface current from the models are compared with HF radar measurements. The Root-Mean-Squared Error (RMSE) between modeling results and HF radar measurements is calculated during the last four days with no data assimilation.
Keywords: data assimilation, CODAR, HF radar, surface current, direct insertion
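Direct Insertion simply overwrites the model state with the observation wherever a radar measurement is available, after which the model evolves freely; forecast skill can then be summarised with the RMSE against the radar field. A minimal sketch of both steps is given below; the array shapes, the NaN-masking convention, and the synthetic values are assumptions for illustration only.

```python
import numpy as np

def direct_insertion(model_state, observations):
    """Overwrite modelled surface currents with HF radar values where data exist.
    `observations` uses NaN for grid cells without radar coverage (assumed convention)."""
    updated = model_state.copy()
    mask = ~np.isnan(observations)
    updated[mask] = observations[mask]
    return updated

def rmse(model_field, radar_field):
    """Root-mean-squared error over cells with valid radar data."""
    mask = ~np.isnan(radar_field)
    diff = model_field[mask] - radar_field[mask]
    return np.sqrt(np.mean(diff ** 2))

# Example: assimilate at one analysis time, then score against the radar field.
u_model = np.random.rand(50, 50)        # modelled eastward surface current (synthetic)
u_radar = np.full((50, 50), np.nan)
u_radar[10:40, 10:40] = 0.3             # radar coverage subset (synthetic values)
u_analysis = direct_insertion(u_model, u_radar)
print("RMSE after insertion:", rmse(u_analysis, u_radar))
```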
Procedia PDF Downloads 574
35867 Evaluation of MPPT Algorithms for Photovoltaic Generator by Comparing Incremental Conductance Method, Perturbation and Observation Method and the Method Using Fuzzy Logic
Authors: Elmahdi Elgharbaoui, Tamou Nasser, Ahmed Essadki
Abstract:
In the era of sustainable development, photovoltaic (PV) technology has shown significant potential as a renewable energy source. Photovoltaic generators (GPV) have a non-linear current-voltage characteristic, with a maximum power point (MPP) characterized by an optimal voltage that depends on environmental factors such as temperature and irradiation. To extract at all times the maximum power available at the terminals of the GPV and transfer it to the load, an adaptation stage is used, consisting of a boost chopper controlled by a maximum power point tracking (MPPT) technique through a pulse width modulation (PWM) stage. Three techniques are compared: the perturbation and observation method (P&O), the incremental conductance method (InCond), and control using fuzzy logic. The implementation and simulation of the system (photovoltaic generator, boost chopper, PWM and MPPT techniques) are performed in the Matlab/Simulink environment.
Keywords: photovoltaic generator, MPPT technique, boost chopper, PWM, fuzzy logic, P&O, InCond
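The perturb and observe (P&O) method nudges the operating voltage in one direction, observes whether the output power rises, and keeps or reverses the perturbation direction accordingly. The sketch below shows one P&O update step in Python; the step size and variable names are illustrative assumptions rather than the authors' Simulink implementation.

```python
def perturb_and_observe(v, i, v_prev, p_prev, step=0.5):
    """One P&O iteration: return the new voltage reference for the boost chopper.

    v, i   : present PV voltage and current measurements
    v_prev : voltage at the previous iteration
    p_prev : power at the previous iteration
    step   : perturbation size in volts (assumed value)
    """
    p = v * i
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        v_ref = v                      # at the MPP, hold the operating point
    elif (dp > 0) == (dv > 0):
        v_ref = v + step               # power rose in this direction: keep going
    else:
        v_ref = v - step               # power fell: reverse the perturbation
    return v_ref, p

# The returned v_ref would then be converted into a duty cycle for the boost
# converter through the PWM stage.
```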
Procedia PDF Downloads 323
35866 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements
Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono
Abstract:
The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications such as pre-operative planning simulation, human gait analysis, and hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact due to the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have quantified soft tissue artefact movement invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the skin marker artefact displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger during abduction. The quantification of soft tissue artefacts can be used as a basis for a correction procedure for hip joint kinematics.
Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement
Procedia PDF Downloads 281
35865 Measured versus Default Interstate Traffic Data in New Mexico, USA
Authors: M. A. Hasan, M. R. Islam, R. A. Tarefder
Abstract:
This study investigates how site-specific traffic data differ from the Mechanistic-Empirical Pavement Design software default values. Two Weigh-in-Motion (WIM) stations were installed on Interstate-40 (I-40) and Interstate-25 (I-25) to develop site-specific data. A computer program named WIM Data Analysis Software (WIMDAS) was developed using Microsoft C-Sharp (.Net) for quality checking and processing of raw WIM data. A complete year of data from November 2013 to October 2014 was analyzed using the developed program. From these data, the vehicle class distribution, directional distribution, lane distribution, monthly adjustment factor, hourly distribution, axle load spectra, average number of axles per vehicle, axle spacing, lateral wander distribution, and wheelbase distribution were calculated. A comparative study was then done between the measured data and the AASHTOWare default values. It was found that the measured general traffic inputs for I-40 and I-25 differ significantly from the default values.
Keywords: AASHTOWare, traffic, weigh-in-motion, axle load distribution
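Traffic inputs such as the vehicle class distribution and monthly adjustment factors are simple aggregations over the cleaned WIM records. The fragment below sketches these two computations with pandas; the file name, column names, and class range are assumptions about the data layout, not the format used by the WIMDAS program.

```python
import pandas as pd

# Assumed layout of cleaned WIM records: one row per vehicle with a timestamp
# and an FHWA vehicle class (classes 4-13 are trucks).
wim = pd.read_csv("wim_records.csv", parse_dates=["timestamp"])
trucks = wim[wim["vehicle_class"].between(4, 13)]

# Vehicle class distribution: share of each truck class in the annual truck traffic.
class_distribution = (
    trucks["vehicle_class"].value_counts(normalize=True).sort_index() * 100.0
)

# Monthly adjustment factor per class: monthly truck volume relative to the
# average month (MAF = 12 * monthly share of the annual volume).
monthly = trucks.groupby([trucks["timestamp"].dt.month, "vehicle_class"]).size()
annual = trucks.groupby("vehicle_class").size()
maf = (monthly / annual) * 12.0

print(class_distribution.round(2))
print(maf.round(3))
```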
Procedia PDF Downloads 343
35864 An Assessment of Different Blade Tip Timing (BTT) Algorithms Using an Experimentally Validated Finite Element Model Simulator
Authors: Mohamed Mohamed, Philip Bonello, Peter Russhard
Abstract:
Blade Tip Timing (BTT) is a technology concerned with the estimation of both the frequency and amplitude of rotating blades. A BTT system comprises two main parts: (a) the arrival time measurement system, and (b) the analysis algorithms. Simulators play an important role in the development of the analysis algorithms since they generate blade tip displacement data from simulated blade vibration under controlled conditions. This enables an assessment of the performance of the different algorithms with respect to their ability to accurately reproduce the original simulated vibration. Such an assessment is usually not possible with real engine data since there is no practical alternative to BTT for blade vibration measurement. Most simulators used in the literature are based on a simple spring-mass-damper model to determine the vibration. In this work, a more realistic, experimentally validated simulator based on a Finite Element (FE) model of a bladed disc (blisk) is first presented. It is then used to generate the necessary data for the assessment of different BTT algorithms. The FE modelling is validated using both a hammer test and two firewire cameras for the mode shapes. A number of autoregressive methods, fitting methods and state-of-the-art inverse methods (i.e., the Russhard method) are compared. All methods are compared with respect to both synchronous and asynchronous excitations with both single and simultaneous frequencies. The study assesses the applicability of each method for different conditions of vibration, amount of sampling data, and testing facilities, according to its performance and efficiency under these conditions.
Keywords: blade tip timing, blisk, finite element, vibration measurement
Procedia PDF Downloads 311
35863 Influence of Maternal Factors on Growth Patterns of Schoolchildren in a Rural Health and Demographic Surveillance Site in South Africa: A Mixed Method Study
Authors: Perpetua Modjadji, Sphiwe Madiba
Abstract:
Background: The growth patterns of children are good indicators of their nutritional status, health, and socioeconomic level. However, maternal factors and societal belief systems affect the growth of children, promoting undernutrition. This study determined the influence of maternal factors on the growth patterns of schoolchildren in a rural site. Methods: A convergent mixed method study was conducted among 508 schoolchildren and their mothers in the Dikgale Health and Demographic Surveillance System Site, South Africa. Multistage sampling was used to select schools (purposive) and learners (random), who were paired with their mothers. Anthropometry was measured, and socio-demographic, obstetrical, and household information, as well as maternal influence on children's nutrition and growth, were assessed using an interviewer-administered questionnaire (quantitative). The influence of the cultural beliefs and practices of mothers on the nutrition and growth of their children was explored using focus group discussions (qualitative). Narratives of mothers were used to best understand the growth patterns of schoolchildren (mixed method). Data were analyzed using STATA 14 (quantitative) and Nvivo 11 (qualitative). Quantitative and qualitative data were merged for integrated mixed method analysis using a joint display analysis. Results: The mean age of the children was 10 ± 2 years, ranging from 6 to 15 years. Substantial percentages of thinness (25%), underweight (24%), and stunting (22%) were observed among the children. Mothers had a mean age of 37 ± 7 years, and 75% were overweight or obese. A depressed socio-economic status was observed, indicated by a high rate of unemployment with no income (82.3%) and dependency on social grants (86.8%). Determinants of poor growth patterns were the child's age and gender, maternal age, height and BMI, access to water supply, and refrigerator use. The narratives of mothers suggested that the children in most of their households were exposed to poverty and inadequate intake of quality food. Conclusion: Poor growth patterns were observed among schoolchildren while their mothers were overweight or obese. The child's gender, school grade, maternal body mass index, and access to water were the main determinants. Congruence was observed between most qualitative themes and quantitative constructs. The need for a multi-sectoral approach, considering evidence-based and feasible nutrition programs for schoolchildren (especially those in rural settings) and the education of mothers, cannot be over-emphasized.
Keywords: growth patterns, maternal factors, rural context, schoolchildren, South Africa
Procedia PDF Downloads 181
35862 Traffic Light Detection Using Image Segmentation
Authors: Vaishnavi Shivde, Shrishti Sinha, Trapti Mishra
Abstract:
Traffic light detection from a moving vehicle is an important technology both for driver safety assistance functions and for autonomous driving in the city. This paper proposes a deep-learning-based traffic light recognition method that consists of a pixel-wise image segmentation technique and a fully convolutional network, i.e., the UNET architecture. A method for detecting the position and recognizing the state of traffic lights in video sequences is presented and evaluated using a Traffic Light Dataset which contains masked traffic light image data. The first stage is the detection, which is accomplished through image processing (image segmentation) techniques such as image cropping, color transformation, and segmentation of possible traffic lights. The second stage is the recognition, which means identifying the color, and hence the state, of the traffic light; this is achieved using a Convolutional Neural Network (UNET architecture).
Keywords: traffic light detection, image segmentation, machine learning, classification, convolutional neural networks
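The first (detection) stage relies on classical image processing: converting the frame to a color space in which traffic light hues separate cleanly and thresholding for candidate regions. The snippet below sketches that stage only, using OpenCV; the file name and HSV ranges are rough illustrative assumptions, and the learned UNET classifier of the second stage is not shown.

```python
import cv2

frame = cv2.imread("frame.png")                      # one video frame (placeholder path)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)         # color transformation

# Rough HSV ranges for red and green lamps (assumed values, tuned per dataset).
red_1 = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
red_2 = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
green = cv2.inRange(hsv, (40, 80, 80), (90, 255, 255))
candidates = cv2.bitwise_or(cv2.bitwise_or(red_1, red_2), green)

# Crop candidate regions; each crop would then be passed to the UNET-based
# recognition stage to identify the traffic light state.
contours, _ = cv2.findContours(candidates, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
crops = [frame[y:y + h, x:x + w]
         for x, y, w, h in map(cv2.boundingRect, contours) if w * h > 50]
```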
Procedia PDF Downloads 174
35861 Design of Knowledge Management System with Geographic Information System
Authors: Angga Hidayah Ramadhan, Luciana Andrawina, M. Azani Hasibuan
Abstract:
Data becomes the core of decision making only if it is treated or processed well, that is, data is processed into information and information into knowledge, from which a decision, or wisdom, is derived. Today, many organizations have not realized this, including the XYZ University Admission Directorate, the executor of the national admission scheme Seleksi Masuk Bersama (SMB), where until now staff have relied only on intuition to make decisions. If the data were properly processed, the directorate could analyze it to make the right decisions and maximize PIN sales to the student candidates or registrants who follow SMB. Therefore, a Knowledge Management System (KMS) with a Geographic Information System (GIS), using the 5C4C approach, is needed to make the organization's data more useful and to support decision making. This information system processes PIN sales data into information with the 5C steps (Contextualized, Categorized, Calculated, Corrected, Condensed) and converts information into knowledge with the 4C steps (Comparison, Consequence, Connection, Conversation); through these steps the data becomes useful for making decisions more easily, resolving problems, communicating, and shortening the learning curve of inexperienced employees. The GIS functionality also eases viewing and visualization based on spatial data, indicating events in each province with the indicators provided in the system. The system additionally stores tacit knowledge, which is then converted into explicit knowledge in an expert system based on the problems identified from the consequences of the information. With the system, each team can make decisions in the same structured way and, most importantly, based on actual events and data.
Keywords: 5C4C, data, information, knowledge
Procedia PDF Downloads 462
35860 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers
Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta
Abstract:
The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect with water. The ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins where there are many tributaries and meanders, often adjacent to each other. One challenge is to split the photon segments along a beam so that they are accurately partitioned and only the true representative water height is extracted for each individual water body. As far as we can establish, there is no automated procedure to make this distinction. Earlier studies have relied on human intervention or river masks. Both approaches are unsatisfactory solutions where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called "auto-segmentation". The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESAT-2 ATL13 altimetry to rivers compared to previously reported studies. Therefore, the findings of our study will make a significant contribution towards the retrieval of hydraulic parameters, such as water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation
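Once photon segments have been auto-partitioned to a single virtual station, the remaining work is to reject outlier heights and average the rest. The fragment below sketches one such post-processing step using an interquartile-range filter; the variable names and the IQR rule are illustrative assumptions, since the abstract compares four outlier removal methods without prescribing this particular one.

```python
import numpy as np

def mean_wse(heights, k=1.5):
    """Mean water surface elevation at a virtual station after IQR outlier removal.

    heights : 1-D array of ATL13 water surface heights assigned to one VS
    k       : IQR multiplier (assumed value; 1.5 is the conventional default)
    """
    q1, q3 = np.percentile(heights, [25, 75])
    iqr = q3 - q1
    keep = (heights >= q1 - k * iqr) & (heights <= q3 + k * iqr)
    return float(np.mean(heights[keep]))

def rmse_mae(estimated, observed):
    """Agreement statistics against gauge observations, as reported in the study."""
    diff = np.asarray(estimated) - np.asarray(observed)
    return float(np.sqrt(np.mean(diff ** 2))), float(np.mean(np.abs(diff)))
```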
Procedia PDF Downloads 62
35859 Event Extraction, Analysis, and Event Linking
Authors: Anam Alam, Rahim Jamaluddin Kanji
Abstract:
With the rapid growth of event data everywhere, event extraction has become an important means of retrieving information from unstructured data. One of the challenging problems is to extract the events from such data. An event is an observable occurrence of interaction among entities. The paper investigates the event extraction capabilities of three software tools: Wandora, Nitro and SPSS. We applied the standard text mining techniques of these tools to the data sets of (i) the Afghan War Diaries (AWD collection), (ii) MUC4 and (iii) WebKB. Information retrieval measures such as precision and recall are computed under an extensive set of experiments for event extraction. The experimental study analyzes the difference between events extracted by the software and by humans. This approach helps to construct an algorithm that can be applied to different machine learning methods.
Keywords: event extraction, Wandora, nitro, SPSS, event analysis, extraction method, AFG, Afghan War Diaries, MUC4, 4 universities, dataset, algorithm, precision, recall, evaluation
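Precision and recall compare the set of events a tool extracts against a human-annotated gold standard. A minimal sketch, assuming events can be compared as normalized strings (a simplification of real event matching), is given below; the example events are invented placeholders.

```python
def precision_recall(extracted, gold):
    """Evaluate extracted events against human-annotated gold events.

    Both arguments are iterables of event descriptions; matching here is exact
    string matching after normalization, which is a simplifying assumption.
    """
    extracted = {e.strip().lower() for e in extracted}
    gold = {g.strip().lower() for g in gold}
    true_positives = len(extracted & gold)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

p, r = precision_recall(
    ["IED attack in Kandahar", "patrol ambushed near Kabul"],   # tool output (placeholder)
    ["ied attack in kandahar", "convoy attacked in Helmand"],   # human annotations (placeholder)
)
print(f"precision={p:.2f}, recall={r:.2f}")   # precision=0.50, recall=0.50
```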
Procedia PDF Downloads 596
35858 Ancient Malay and Spice Trade Routes: A Study of Ancient Malay from the Perspectives of Linguistics and Archaeology
Authors: Totok Suhardijanto, Ninie Susanti Tedjowasono
Abstract:
This paper discusses the relationship between the distribution of Ancient Malay inscriptions and the Spice Trade Route, especially in relation to the material cultures that accompany them, to understand how Malay could spread around the archipelago beyond its original native speakers' region. The archipelago was known as the Spice Islands from the very beginning of the first century due to the mace, cloves, and nutmeg that were originally exclusively found there. According to Indian records, contact between Indian and Indonesian people was established as early as the 2nd century. A Chinese document from the 3rd century mentions Wangka (now widely known as Bangka), an island near Sumatra that some Chinese expeditions had visited. All of these records support the existence of a maritime trade system and route between the archipelago and other countries during the first millennium. This paper first discusses the Ancient Malay inscriptions spread around the archipelago from the perspectives of language variation and writing system style. Analyzing the language variation of inscriptions is certainly not as easy as studying current spoken language variation in modern sociolinguistics, for which a huge amount of data is available. On the contrary, in language variation research with inscription texts as the object, data is insufficient, and other resources are needed to support the linguistic analysis. For this reason, this research made use of epigraphical evidence in the surrounding areas of the inscriptions to explain the variation of language and writing style. The research then expands the analysis to figure out the relationship between language variation and inscription distribution along the Spice Trade Route, which stretches from the Molucca Sea to the Mediterranean Sea. Data in this research consist of six different inscriptions: Kedukan Bukit, Koto Kapur, Dapunta Salendra, Sang Hyang Wintang, Ligor, and Laguna, from the 7th-9th centuries and found in Sumatra, Jawa, and the Philippines. In addition, as a comparative resource, this research also used Hikayat Tanjung Tanah, the earliest-found Ancient Malay manuscript. In the language analysis, we apply a sociolinguistic method to explore the language variation and writing style of the inscriptions. For the archaeological data, we apply a hermeneutic method to analyze their possible meanings and social uses. Language variation and writing system style in this research can be classified into two main groups. The linguistic, epigraphical, and archaeological evidence shows that Ancient Malay was widely used in the eastern area of the Spice Trade Route because it played an important role in the region as a lingua franca among people from different ethnic groups with different languages.
Keywords: Ancient Malay, Spice trade route, language variation, writing system variation
Procedia PDF Downloads 193
35857 Detecting Model Financial Statement Fraud by Auditor Industry Specialization with Fraud Triangle Analysis
Authors: Reskino Resky
Abstract:
This research aims to create a model for detecting financial statement fraud. It examines fraud triangle variables and auditor industry specialization in relation to financial statement fraud. The sample consists of companies listed on the Indonesia Stock Exchange that had sanctions and cases with the Financial Services Authority in 2011-2013, comprising 30 fraud companies and 30 non-fraud companies. The sample was determined using purposive sampling with judgement sampling, while the data were processed using the Mann-Whitney U test and discriminant analysis. Two of the five variables could be processed with discriminant analysis. The results show that financial targets can detect financial statement fraud, while financial stability cannot.
Keywords: fraud triangle analysis, financial targets, financial stability, auditor industry specialization, financial statement fraud
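The two statistical steps named in the abstract are readily reproduced with standard scientific Python libraries: a Mann-Whitney U test to compare a candidate variable across the fraud and non-fraud groups, and a discriminant analysis to classify firms. The sketch below is a generic illustration of that workflow on synthetic data, not the authors' dataset or variable definitions.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-ins for a candidate ratio (e.g., a financial-target proxy)
# in 30 fraud and 30 non-fraud firms.
fraud = rng.normal(0.12, 0.04, 30)
non_fraud = rng.normal(0.08, 0.04, 30)

# Step 1: Mann-Whitney U test for a group difference in the variable.
stat, p_value = mannwhitneyu(fraud, non_fraud, alternative="two-sided")
print(f"U={stat:.1f}, p={p_value:.4f}")

# Step 2: discriminant analysis on the variables that pass the screening.
X = np.concatenate([fraud, non_fraud]).reshape(-1, 1)
y = np.array([1] * 30 + [0] * 30)           # 1 = fraud, 0 = non-fraud
lda = LinearDiscriminantAnalysis().fit(X, y)
print("classification accuracy on the sample:", lda.score(X, y))
```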
Procedia PDF Downloads 457
35856 Series Solutions to Boundary Value Differential Equations
Authors: Armin Ardekani, Mohammad Akbari
Abstract:
We present a method of generating series solutions to large classes of nonlinear differential equations. The method is well suited to implementation in mathematical software, and unlike the available commercial solvers, it is capable of generating solutions to boundary value ODEs and PDEs. Many of the generated solutions converge to closed-form solutions. Our method can also be applied to systems of ODEs or PDEs, providing all the solutions efficiently. As examples, we present results for many difficult differential equations in engineering fields.
Keywords: computational mathematics, differential equations, engineering, series
Procedia PDF Downloads 336
35855 Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images
Authors: Emhimed Saffor
Abstract:
In this paper, the problem of edge detection in digital images is considered. Three edge detection methods based on a mathematical morphology algorithm were applied to two sets of CT images (brain and chest): a 3x3 filter for the first method, a 5x5 filter for the second method, and a 7x7 filter for the third method, under the MATLAB programming environment. The results of the above-mentioned methods are subjectively evaluated. The results show these methods are efficient and suitable for medical images, and they can be used for various other applications.
Keywords: CT images, Matlab, medical images, edge detection
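A common morphological edge detector is the morphological gradient: the difference between a grey-scale dilation and erosion of the image, with the structuring element size (3x3, 5x5, 7x7) controlling edge thickness and noise sensitivity. The abstract does not specify which morphological operator was used, so the sketch below (in Python rather than MATLAB) shows the gradient variant as an assumed illustration on a synthetic slice.

```python
import numpy as np
from scipy import ndimage

def morphological_edges(image, size=3):
    """Morphological gradient edge map: dilation minus erosion with a
    size x size square structuring element (size = 3, 5, or 7)."""
    dilated = ndimage.grey_dilation(image, size=(size, size))
    eroded = ndimage.grey_erosion(image, size=(size, size))
    return dilated - eroded

# Example on a synthetic "CT slice": a bright disc on a dark background.
y, x = np.ogrid[:128, :128]
slice_ct = ((x - 64) ** 2 + (y - 64) ** 2 < 40 ** 2).astype(float)
for k in (3, 5, 7):
    edges = morphological_edges(slice_ct, size=k)
    print(f"{k}x{k} structuring element: edge pixels =", int((edges > 0).sum()))
```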
Procedia PDF Downloads 338
35854 Seismic Response of Structure Using a Three Degree of Freedom Shake Table
Authors: Ketan N. Bajad, Manisha V. Waghmare
Abstract:
Earthquakes are among the biggest threats to civil engineering structures, as every year they cause billions of dollars of damage and thousands of deaths around the world. Various experimental techniques, such as the pseudo-dynamic test (a nonlinear structural dynamic technique), the real-time pseudo-dynamic test, and the shaking table test method, can be employed to verify the seismic performance of structures. A shake table is a device that is used for shaking structural models or building components which are mounted on it. It simulates a seismic event using existing seismic data, nearly truly reproducing earthquake inputs. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. The various types of shake table are the vertical shake table, horizontal shake table, servo-hydraulic shake table, and servo-electric shake table. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three-degree-of-freedom (i.e., X, Y, and Z directions) shake table. A three-DOF shaking table is a useful experimental apparatus as it imitates a desired real-time acceleration vibration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of a 3 DOF shake table by a trial and error method. The table is designed to have a capacity of up to 981 newtons. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer is mounted on the model and used for recording the data. The experimental results obtained are further validated with the results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but the model cannot be used for direct numerical conclusions (such as stiffness, deflection, etc.) as many uncertainties are involved in scaling a small-scale model. The model shows the modal forms and gives rough deflection values. The experimental results demonstrate that the shake table is the most effective of all available methods for the seismic assessment of structures.
Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed
Procedia PDF Downloads 140
35853 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method
Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan
Abstract:
The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-waste and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-waste. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid waste is disposed of in a crude way at dumping sites. The addition of e-waste, which often contains toxic heavy metals, into its waste stream has made the situation more difficult and challenging. Assessment of the generation of e-waste is an important step towards addressing the challenges posed by e-waste, setting targets, and identifying best practices for its management. Understanding and proper management of e-waste is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and the availability of reliable baseline data on e-waste will help prevent illegal dumping, promote recycling, and create jobs in the recycling sector, and thus facilitate sustainable e-waste management. With this objective in mind, the present study has attempted to estimate the amount of e-waste and its future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, refrigerator, fan, mobile phone, computer, IT equipment, CFL (compact fluorescent lamp) bulbs, and air conditioner) have been collected from different sources. Primary and secondary data on the collection, recycling, and disposal of e-waste have also been gathered by questionnaire survey, field visits, interviews, and formal and informal meetings with stakeholders. The Material Flow Analysis (MFA) method has been applied, and mathematical models have been developed to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are the products' annual sale/import data, past and future sales data, and average life span. From the model outputs, it is estimated that the generation of e-waste in Bangladesh in 2018 is 0.40 million tons and that by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amount of e-waste generated from seven products is increasing, whereas only one product, the CFL bulb, shows a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-waste enters the recycling business in Bangladesh, which may increase in the near future.
Keywords: Bangladesh, end of life, e-waste, material flow analysis
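In the end-of-life approach, the e-waste generated in a given year is the sum of past sales weighted by the probability that a product sold in an earlier year reaches the end of its life in the target year. A minimal sketch is shown below, assuming either a single fixed average lifespan or a simple discrete lifespan distribution; the sales figures are placeholders, not the study's data.

```python
def eol_ewaste(sales_by_year, lifespan, target_year):
    """E-waste (same unit as sales) generated in target_year under a fixed lifespan.

    sales_by_year : dict mapping sale year -> units (or tonnes) sold
    lifespan      : average product lifespan in years (assumed single value)
    """
    return sales_by_year.get(target_year - lifespan, 0.0)

def eol_ewaste_distributed(sales_by_year, lifespan_shares, target_year):
    """Variant with a discrete lifespan distribution, e.g. {7: 0.3, 8: 0.4, 9: 0.3}."""
    return sum(share * sales_by_year.get(target_year - life, 0.0)
               for life, share in lifespan_shares.items())

# Placeholder example: TV sales (thousand units) and an assumed 8-year lifespan.
tv_sales = {2008: 900, 2009: 1100, 2010: 1300, 2011: 1600}
print(eol_ewaste(tv_sales, lifespan=8, target_year=2018))                 # 1300
print(eol_ewaste_distributed(tv_sales, {7: 0.3, 8: 0.4, 9: 0.3}, 2018))   # lifespan-weighted mix
```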
Procedia PDF Downloads 199
35852 Assessing Image Quality in Mobile Radiography: A Phantom-Based Evaluation of a New Lightweight Mobile X-Ray Equipment
Authors: May Bazzi, Shafik Tokmaj, Younes Saberi, Mats Geijer, Tony Jurkiewicz, Patrik Sund, Anna Bjällmark
Abstract:
Mobile radiography, employing portable X-ray equipment, has become a routine procedure within hospital settings, with chest X-rays in intensive care units standing out as the most prevalent mobile X-ray examinations. This approach is not limited to hospitals alone, as it extends its benefits to imaging patients in various settings, particularly those too frail to be transported, such as elderly care residents in nursing homes. Moreover, the utility of mobile X-ray is not confined solely to traditional healthcare recipients; it has proven to be a valuable resource for vulnerable populations, including the homeless, drug users, asylum seekers, and patients with multiple co-morbidities. Mobile X-rays reduce patient stress, minimize costly hospitalizations, and offer cost-effective imaging. While studies confirm their reliability, further research is needed, especially regarding image quality. Recent advancements in lightweight equipment with enhanced battery and detector technology provide the potential for nearly handheld radiography. The main aim of this study was to evaluate a new lightweight mobile X-ray system with two different detectors and compare the image quality with a modern stationary system. Methods: A total of 74 images of the chest (chest anterior-posterior (AP) views and chest lateral views) and the pelvic/hip region (AP pelvis views, hip AP views, and hip cross-table lateral views) were acquired on a whole-body phantom (Kyotokagaku, Japan), utilizing varying image parameters. These images were obtained using a stationary system - 18 images (Mediel, Sweden), a mobile X-ray system with a second-generation detector - 28 images (FDR D-EVO II; Fujifilm, Japan), and a mobile X-ray system with a third-generation detector - 28 images (FDR D-EVO III; Fujifilm, Japan). Image quality was assessed by visual grading analysis (VGA), a method that measures image quality by assessing the visibility and accurate reproduction of anatomical structures within the images. A total of 33 image criteria were used in the analysis. A panel of two experienced radiologists, two experienced radiographers, and two final-term radiographer students evaluated the image quality on a 5-grade ordinal scale using the software Viewdex 3.0 (Viewer for Digital Evaluation of X-ray images, Sweden). Data were analyzed using visual grading characteristics analysis. The dose was measured by the dose-area product (DAP) reported by the respective systems. Results: The mobile X-ray equipment (both detectors) showed significantly better image quality than the stationary equipment for the pelvis, hip AP, and hip cross-table lateral images, with AUC-VGA values ranging from 0.64 to 0.92, while chest images showed mixed results. The number of images rated as having sufficient quality for diagnostic use was significantly higher for mobile X-ray generations 2 and 3 compared with the stationary X-ray system. The DAP values were higher for the stationary system compared to the mobile system. Conclusions: The new lightweight radiographic equipment had an image quality at least as good as the fixed system at a lower radiation dose. Future studies should focus on clinical images and consider radiographers' viewpoints for a comprehensive assessment.
Keywords: mobile x-ray, visual grading analysis, radiographer, radiation dose
Procedia PDF Downloads 66
35851 Thermoluminescence Study of Cu Doped Lithium Tetra Borate Samples Synthesized by Water/Solution Assisted Method
Authors: Swarnapriya Thiyagarajan, Modesto Antonio Sosa Aquino, Miguel Vallejo Hernandez, Senthilkumar Kalaiselvan Dhivyaraj, Jayaramakrishnan Velusamy
Abstract:
In this paper, lithium tetraborate (Li2B4O7) was prepared by a water/solution-assisted synthesis method. After the synthesis, copper (Cu) was used to dope the Li2B4O7 in order to enhance its thermoluminescent properties. The heating temperature parameters were 750°C for 2 h and 150°C for 2 h. The samples produced by the water-assisted method were doped at different percentages of Cu (0.02%, 0.04%, 0.06%, 0.08%, 0.12%, 0.5%, 0.1%, and 1%). The characteristics and identification of Li2B4O7 (undoped and doped) were determined by four tests: X-ray diffraction (XRD), scanning electron microscopy (SEM), photoluminescence (PL), and ultraviolet-visible spectroscopy (UV-Vis). As evidenced by the XRD and SEM results, the formation of Li2B4O7 and Cu-doped Li2B4O7 was confirmed, as were their chemical compositions and morphologies. The obtained lithium tetraborate XRD pattern was verified against the JCPDS reference data for lithium tetraborate with a tetragonal structure. The glow curves of Li2B4O7 and Li2B4O7:Cu were obtained with a thermoluminescence (TLD) reader (Harshaw 3500). The pellets were irradiated with different doses (58 mGy, 100 mGy, 500 mGy, and 945 mGy) using an X-ray source. Finally, this energy response was also compared with TLD-100. The order of kinetics (b), frequency factor (S), and activation energy (E), i.e., the trapping parameters, were calculated using the peak shape method. In particular, Li2B4O7:Cu (0.1%) presents a good glow curve at all doses. The experimental results show that this Li2B4O7:Cu could have good potential applications in radiation dosimetry. The main purpose of this paper is to determine the effect of the synthesis on the TL properties of doped lithium tetraborate (Li2B4O7).
Keywords: dosimetry, irradiation, lithium tetraborate, thermoluminescence
Procedia PDF Downloads 277
35850 A Policy Strategy for Building Energy Data Management in India
Authors: Shravani Itkelwar, Deepak Tewari, Bhaskar Natarajan
Abstract:
Energy consumption data play a vital role in energy efficiency policy design, implementation, and impact assessment. Any demand-side energy management intervention's success relies on the availability of accurate, comprehensive, granular, and up-to-date data on energy consumption. The building sector, including residential and commercial, is one of the largest consumers of energy in India after the industrial sector. With economic growth and increasing urbanization, the building sector is projected to grow at an unprecedented rate, resulting in a 5.6-fold escalation in energy consumption by 2047 compared to 2017. Therefore, energy efficiency interventions will play a vital role in decoupling floor area growth from the associated energy demand, thereby increasing the need for robust data. In India, multiple institutions are involved in the collection and dissemination of data. This paper focuses on energy consumption data management in the building sector in India for both the residential and commercial segments. It evaluates the robustness of data available through administrative and survey routes to estimate the key performance indicators and identify critical data gaps for making informed decisions. The paper explores several issues in the data, such as a lack of comprehensiveness, non-availability of disaggregated data, discrepancies between different data sources, inconsistent building categorization, and others. The identified data gaps are justified with appropriate examples. Moreover, the paper prioritizes the required data in order of relevance to policymaking and groups it into "available," "easy to get," and "hard to get" categories. The paper concludes with recommendations to address the data gaps by leveraging digital initiatives, strengthening institutional capacity, institutionalizing exclusive building energy surveys, and standardizing building categorization, among others, to strengthen the management of building sector energy consumption data.
Keywords: energy data, energy policy, energy efficiency, buildings
Procedia PDF Downloads 185
35849 Microwave-Assisted Synthesis of RuO2-TiO2 Electrodes with Improved Chlorine and Oxygen Evolutions
Authors: Tran Le Luu, Jeyong Yoon
Abstract:
RuO2-TiO2 electrodes are now popular in the chlor-alkali industry because of their high electrocatalytic activity and stability for chlorine and oxygen evolution. An alternative green method for preparing RuO2-TiO2 electrodes is needed to reduce cost and time. It is also needed to increase the electrocatalyst's performance, stability, and environmental compatibility. In this study, Ti/RuO2-TiO2 electrodes were synthesized using the sol-gel method under microwave irradiation and investigated for anodic chlorine and oxygen evolution. This method produced small, uniformly distributed RuO2-TiO2 nanoparticles with a mean diameter of 8-10 nm on a heavily cracked surface, which contributes to an increase in the outer active surface area. Comparisons of chlorine and oxygen evolution efficiency and stability show considerably higher values for the microwave-assisted coated electrodes than for those obtained by the conventional heating method. The microwave-assisted sol-gel route has been identified as a novel and powerful method for the quick synthesis of RuO2-TiO2 electrodes with excellent chlorine and oxygen evolution performance.
Keywords: RuO2, electro-catalyst, sol-gel, microwave, chlorine, oxygen evolution
Procedia PDF Downloads 254
35848 Determination of Suction of Arid Region Soil Using Filter Paper Method
Authors: Bhavita S. Dave, Chandresh H. Solanki, Atul K. Desai
Abstract:
Soils of the Greater Himalayas mostly pertain to Leh & Ladakh, Lahaul & Spiti, and the high reaches of Uttarakhand. The moisture regime is aridic. The arid zone starts from the Baralacha pass in Lahaul and covers the entire Spiti valley in the district of Lahaul & Spiti, Himachal Pradesh, India. The present study is an attempt to determine the suction value of soil collected from the arid zone of the Spiti valley for different freezing-thawing cycles, considering the climate ranges of the valley. Suction is the basic and most important parameter influencing the behavior of unsaturated soil. It is essential to determine the suction value of unsaturated soil before other tests such as shear and permeability tests. Basically, it is the negative pore water pressure in partially saturated soil, measured in terms of the height of a water column. The filter paper method has been used for the study as an economical approach to evaluating suction. It is the only method from which both contact (matric) and non-contact (total) suction can be deduced. In this study, soil specimens were subjected to 0, 1, 3, and 5 freezing-thawing (F-T) cycles at different degrees of saturation to obtain a wide range of suction, and soil freezing characteristic curves (SFCC) were formulated for all F-T cycles. The data collected from the experiments were best fitted using the Fredlund & Xing model for each SFCC.
Keywords: suction, arid region soil, soil freezing characteristic curve, freezing-thawing cycle
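In the filter paper method, suction is not read directly: the equilibrium water content of the filter paper is converted to suction through a calibration curve. The abstract does not state which calibration the study used; a commonly cited bilinear calibration for Whatman No. 42 paper (ASTM D5298) is shown below purely for illustration, where w_f is the equilibrium gravimetric water content of the filter paper in percent.

```latex
% Commonly cited Whatman No. 42 calibration (ASTM D5298); illustrative only,
% the study's own calibration is not stated in the abstract.
\log_{10}\psi\,(\mathrm{kPa}) =
\begin{cases}
5.327 - 0.0779\,w_f, & w_f \le 45.3\% \\
2.412 - 0.0135\,w_f, & w_f > 45.3\%
\end{cases}
```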
Procedia PDF Downloads 228
35847 Cubic Trigonometric B-Spline Approach to Numerical Solution of Wave Equation
Authors: Shazalina Mat Zin, Ahmad Abd. Majid, Ahmad Izani Md. Ismail, Muhammad Abbas
Abstract:
The generalized wave equation models various problems in the sciences and engineering. In this paper, a new three-time-level implicit approach based on the cubic trigonometric B-spline for the approximate solution of the wave equation is developed. The usual finite difference approach is used to discretize the time derivative, while the cubic trigonometric B-spline is applied as an interpolating function in the space dimension. Von Neumann stability analysis is used to analyze the proposed method. Two problems are discussed to exhibit the feasibility and capability of the method. The absolute errors and maximum error are computed to assess the performance of the proposed method. The results were found to be in good agreement with known solutions and with existing schemes in the literature.
Keywords: collocation method, cubic trigonometric B-spline, finite difference, wave equation
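In a three-time-level scheme of this kind, the second time derivative of the wave equation u_tt = c²u_xx is replaced by the usual central difference, while the spatial profile at each time level is written as a combination of cubic trigonometric B-spline basis functions whose coefficients are found by collocation. The relations below sketch this discretization; how the spatial term is weighted across the three time levels is the authors' design choice and is not reproduced here.

```latex
% Central difference in time over the three levels n-1, n, n+1:
\frac{u_i^{\,n+1} - 2u_i^{\,n} + u_i^{\,n-1}}{\Delta t^{2}} = c^{2}\,(u_{xx})_i^{\,n}
% Spatial representation at each time level by cubic trigonometric B-splines:
u^{\,n}(x) \approx \sum_{j} \alpha_j^{\,n}\, TB_j^{4}(x)
```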
Procedia PDF Downloads 542
35846 Detecting Heartbeat Architectural Tactic in Source Code Using Program Analysis
Authors: Ananta Kumar Das, Sujit Kumar Chakrabarti
Abstract:
Architectural tactics such as heartbeat, ping-echo, encapsulation, and data encryption are techniques that are used to achieve the quality attributes of a system. Detecting architectural tactics has several benefits: it can aid system comprehension (e.g., of legacy systems) and the estimation of quality attributes such as safety, security, and maintainability. Architectural tactics are typically spread over the source code and are implicit. For large codebases, manual detection is often not feasible. Therefore, there is a need for automated methods of detecting architectural tactics. This paper presents a formalization of the heartbeat architectural tactic and a program-analytic approach to detect this tactic in source code. The proposed method is evaluated on a set of Java applications. The outcome of the experiment strongly suggests that the method compares well with a manual approach in terms of its sensitivity and specificity, and far surpasses a manual exercise in terms of its scalability.
Keywords: software architecture, architectural tactics, detecting architectural tactics, program analysis, AST, alias analysis
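The heartbeat tactic pairs a component that emits a periodic "alive" signal with a monitor that raises a fault when the signal stops arriving; a detector therefore has to recognize exactly this pairing of a timed emitter loop and a timeout check in the code. A minimal sketch of the tactic itself (written in Python for illustration, rather than the Java analyzed in the paper) makes the structure such a program analysis must recognize concrete.

```python
import threading
import time

class Heartbeat:
    """Minimal heartbeat tactic: a sender thread refreshes a timestamp periodically,
    and a monitor declares the component failed if the timestamp grows stale."""

    def __init__(self, period=1.0, timeout=3.0):
        self.period, self.timeout = period, timeout
        self.last_beat = time.monotonic()
        threading.Thread(target=self._send, daemon=True).start()

    def _send(self):
        while True:                          # the periodic emitter a detector looks for
            self.last_beat = time.monotonic()
            time.sleep(self.period)

    def is_alive(self):
        # the timeout comparison is the monitor half of the tactic
        return (time.monotonic() - self.last_beat) < self.timeout

hb = Heartbeat(period=0.5, timeout=2.0)
time.sleep(1.0)
print("component alive:", hb.is_alive())
```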
Procedia PDF Downloads 160
35845 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters. For example, most attempts to implement the classical conjugate gradient method were, at best, completed in the same amount of time as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of the two of standard CG. The standard and pipelined CG methods need the vector entries generated by the current GPU and the other GPUs for the matrix-vector products, so the communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between the parallel parts of the algorithm. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. The pipelined version of the CG method with its single synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU allow large linear systems to be solved with scalability. The algorithm is implemented with the combined use of the MPI, OpenMP, and CUDA technologies. We show that an almost optimal speedup may be reached on 8 CPUs/2 GPUs (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm
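The synchronization points of CG are its global dot products: standard CG has two per iteration, which on a cluster means two blocking all-reduce operations, whereas the pipelined formulation rearranges the recurrences so that a single reduction (which can additionally be overlapped with the matrix-vector product) suffices. The sketch below shows plain CG in Python with those two reductions marked; it is the textbook algorithm, not the authors' pipelined MPI/OpenMP/CUDA implementation.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Textbook CG for a symmetric positive definite A. The two dot products marked
    below are the per-iteration global reductions that the pipelined variant fuses
    into one (and overlaps with the sparse matrix-vector product)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r                       # one reduction before the loop
    for _ in range(max_iter):
        Ap = A @ p                       # (sparse) matrix-vector product
        alpha = rs_old / (p @ Ap)        # global reduction 1 of 2 per iteration
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r                   # global reduction 2 of 2 per iteration
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # approx. [0.0909, 0.6364]
```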
Procedia PDF Downloads 165
35844 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped into four main categories: application development, task scheduling, in-memory computing, and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.
Keywords: data locality, data-centric computing, large scale infrastructures, cloud computing
Procedia PDF Downloads 259
35843 Research on Construction of Subject Knowledge Base Based on Literature Knowledge Extraction
Authors: Yumeng Ma, Fang Wang, Jinxia Huang
Abstract:
Researchers put forward higher requirements for the efficient acquisition and utilization of domain knowledge in the big data era. As literature is an effective way for researchers to quickly and accurately understand the research situation in their field, knowledge discovery based on literature has become a new research method. As a tool to organize and manage knowledge in a specific domain, a subject knowledge base can be used to mine and present the knowledge behind the literature to meet users' personalized needs. This study designs the construction route of a subject knowledge base for specific research problems. An information extraction method based on knowledge engineering is adopted. Firstly, the subject knowledge model is built through the abstraction of the research elements. Then, under the guidance of the knowledge model, extraction rules for knowledge points are compiled to analyze, extract, and correlate entities, relations, and attributes in the literature. Finally, a database platform based on this structured knowledge is developed that can provide a variety of services such as knowledge retrieval, knowledge browsing, knowledge Q&A, and correlation visualization. Taking the construction practices in the field of activating blood circulation and removing stasis as an example, this study analyzes how to construct a subject knowledge base based on literature knowledge extraction. As the system functional test shows, this subject knowledge base can realize the expected service scenarios, such as quick knowledge queries, related discovery of knowledge and literature, and knowledge organization. As this study enables the subject knowledge base to help researchers locate and acquire deep domain knowledge quickly and accurately, it provides a transformation model for knowledge resource construction and personalized precision knowledge services in a data-intensive research environment.
Keywords: knowledge model, literature knowledge extraction, precision knowledge services, subject knowledge base
Procedia PDF Downloads 163