Search results for: climatological weather data measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26527

25027 A Comparison between Five Indices of Overweight and Their Association with Myocardial Infarction and Death, 28-Year Follow-Up of 1000 Middle-Aged Swedish Employed Men

Authors: Lennart Dimberg, Lala Joulha Ian

Abstract:

Introduction: Overweight (BMI 25-30) and obesity (BMI 30+) have consistently been associated with cardiovascular (CV) risk and death since the Framingham heart study in 1948, and BMI was included in the original Framingham risk score (FRS). Background: Myocardial infarction (MI) poses a serious threat to the patient's life. In addition to BMI, several other indices of overweight have been presented and argued to replace FRS as more relevant measures of CV risk. These indices include waist circumference (WC), waist/hip ratio (WHR), sagittal abdominal diameter (SAD), and sagittal abdominal diameter to height (SADHtR). Specific research question: The research question of this study is to evaluate the interrelationship between the various body measurements, BMI, WC, WHR, SAD, and SADHtR, and which measurement is most strongly associated with MI and death. Methods: In 1993, 1,000 middle-aged Caucasian, randomly selected working men of the Swedish Volvo-Renault cohort were surveyed at a nurse-led health examination with a questionnaire, EKG, laboratory tests, blood pressure, height, weight, waist, and sagittal abdominal diameter measurements. Outcome data on myocardial infarction over 28 years come from Swedeheart (the Swedish national myocardial infarction registry) and the Swedish death registry. The Aalen-Johansen and Kaplan-Meier methods were used to estimate the cumulative incidences of MI and death. Multiple logistic regression analyses were conducted to compare BMI with the other four body measurements. The risk for the various measures of obesity was calculated as odds ratios (OR) in quartiles, with accumulated first-time myocardial infarction and death as outcomes. The ORs between the 4th and the 1st quartile of each measure were calculated to estimate the association between the body measurement variables and the probability of cumulative incidences of myocardial infarction (MI) over time. Two-sided P values below 0.05 were considered statistically significant. Unadjusted odds ratios were calculated for obesity indicators, MI, and death. Adjustments for age, diabetes, SBP, the ratio of total cholesterol/HDL-C, and blue-/white-collar status were performed. Results: Of the 1,000 men, 959 subjects had full information on the five different body measurements. Of those, 90 participants had a first MI, and 194 persons died. The study showed a high and significant correlation between the five different body measurements, and all were associated with CVD risk factors. All body measurements were significantly associated with MI, with the highest OR (3.6) seen for SADHtR and WC. After adjustment, all but SADHtR remained significant, with weaker ORs. As for all-cause mortality, WHR (OR=1.7), SAD (OR=1.9), and SADHtR (OR=1.6) were significantly associated, but not WC and BMI. However, after adjustment, only WHR and SAD remained significantly associated with death, with attenuated ORs.
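
As an illustration of the quartile-based odds-ratio comparison described above, the following sketch computes the OR of MI for the 4th versus the 1st quartile of one body measurement. The synthetic data, cut-offs, and risk function are illustrative assumptions, not the study's dataset or fitted model.

```python
# Illustrative sketch (synthetic data): odds ratio of MI for the 4th vs 1st
# quartile of a body measurement, in the spirit of the quartile comparison above.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 959                                        # subjects with complete measurements
waist = rng.normal(94, 10, n)                  # hypothetical waist circumference (cm)

# Hypothetical outcome whose risk increases with waist circumference
log_odds = (waist - 94) / 10 - 2.3
p_mi = 1 / (1 + np.exp(-log_odds))
mi = rng.random(n) < p_mi

df = pd.DataFrame({"waist": waist, "mi": mi})
df["quartile"] = pd.qcut(df["waist"], 4, labels=[1, 2, 3, 4])

q1 = df[df["quartile"] == 1]
q4 = df[df["quartile"] == 4]

# 2x2 table: events / non-events in the 4th and 1st quartiles
a, b = q4["mi"].sum(), (~q4["mi"]).sum()
c, d = q1["mi"].sum(), (~q1["mi"]).sum()
odds_ratio = (a * d) / (b * c)

# Approximate 95% CI on the log scale (Woolf method)
se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
print(f"OR (Q4 vs Q1) = {odds_ratio:.2f}, 95% CI = {ci.round(2)}")
```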

Keywords: BMI, death, epidemiology, myocardial infarction, risk factor, sagittal abdominal diameter, sagittal abdominal diameter to height, waist circumference, waist-hip ratio

Procedia PDF Downloads 75
25026 Data Recording for Remote Monitoring of Autonomous Vehicles

Authors: Rong-Terng Juang

Abstract:

Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars are unlikely to arrive in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks for remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space. Upon arrival, for controller area networks (CAN), the new data are mapped onto a time-data two-dimensional space associated with the specific CAN identity. Secondly, the data are sampled based on differential sampling. Finally, the whole data set is encoded using existing algorithms such as Huffman, arithmetic, and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
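
The differential-sampling-plus-entropy-coding step can be illustrated with a short sketch that delta-encodes a CAN-like time series and Huffman-codes the residuals. The signal values and the resulting compression ratio are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch: delta (differential) encoding of a CAN-like signal,
# followed by Huffman coding of the residuals. Data values are made up.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a list of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate case: single symbol
        return {next(iter(freq)): "0"}
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], counter] + lo[2:] + hi[2:])
        counter += 1
    return {s: code for s, code in heap[0][2:]}

# Hypothetical slowly varying CAN signal (e.g., wheel speed samples)
signal = [1200, 1201, 1201, 1203, 1204, 1204, 1205, 1207, 1207, 1208] * 50
deltas = [signal[0]] + [b - a for a, b in zip(signal, signal[1:])]

code = huffman_code(deltas)
encoded_bits = sum(len(code[d]) for d in deltas)
raw_bits = 16 * len(signal)                 # assume 16-bit raw samples
print(f"compression ratio ~ {raw_bits / encoded_bits:.1f}:1")
```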

Keywords: autonomous vehicle, data compression, remote monitoring, controller area networks (CAN), Lidar

Procedia PDF Downloads 147
25025 Modelling Flood Events in Botswana (Palapye) for Protecting Roads Structure against Floods

Authors: Thabo M. Bafitlhile, Adewole Oladele

Abstract:

Botswana has long been affected by floods and is still experiencing these tragic events. Flooding occurs mostly in the North-West, North-East, and parts of the Central district due to the heavy rainfalls experienced in these areas. Torrential rains have destroyed homes and roads, flooded dams and fields, and destroyed livestock and livelihoods. Palapye is one area in the Central district that has been experiencing floods ever since 1995, when its greatest flood on record occurred. Heavy storms result in floods and inundation; this has been exacerbated by poor or absent drainage structures. Since floods are a part of nature, they have existed and will continue to exist, hence causing more destruction. Furthermore, floods play a major role in the erosion and destruction of road structures. Already today, many culverts, trenches, and other drainage facilities lack the capacity to deal with the current frequency of extreme flows. Future changes in the pattern of hydro-climatic events will have implications for the design and maintenance costs of roads. Increases in rainfall and severe weather events can increase the demand for emergency responses. Therefore, flood forecasting and warning is a prerequisite for successful mitigation of flood damage. In flood-prone areas like Palapye, preventive measures should be taken to reduce the possible adverse effects of floods on the environment, including road structures. This paper therefore attempts to estimate the return periods associated with storms of different magnitudes from recorded historical rainfall depths using statistical methods. The method of annual maxima was used to select data sets for the rainfall analysis. In the statistical analysis, the Type 1 extreme value (Gumbel), Log-Normal, and Log-Pearson Type III distributions were applied to the annual maximum series for the Palapye area to produce IDF curves. The Kolmogorov-Smirnov and Chi-Squared tests were used to confirm the appropriateness of the fitted distributions for the location, and the data do fit the distributions used to predict expected frequencies. This will be a beneficial tool for flood forecasting and water resource administration, as proper drainage design will be based on the estimated flood events and will help to reclaim and protect road structures from the adverse impacts of floods.
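
As an illustration of the annual-maxima approach described above, the sketch below fits a Gumbel (Type 1 extreme value) distribution to a synthetic annual-maximum rainfall series and derives return levels. The rainfall values are placeholders, not the Palapye record.

```python
# Illustrative sketch: fit a Gumbel distribution to an annual-maximum rainfall
# series and estimate return levels. The values below are made up.
import numpy as np
from scipy import stats

annual_max_rainfall = np.array([62, 48, 71, 55, 90, 38, 66, 102, 58, 75,
                                81, 44, 69, 95, 52, 60, 87, 73, 49, 110.])  # mm/day

loc, scale = stats.gumbel_r.fit(annual_max_rainfall)

# Return level: rainfall depth exceeded on average once every T years,
# i.e. the (1 - 1/T) quantile of the fitted distribution.
for T in (2, 5, 10, 25, 50, 100):
    depth = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T:3d}-year return level ~ {depth:.1f} mm/day")

# Goodness of fit, as in the Kolmogorov-Smirnov check mentioned above
ks_stat, p_value = stats.kstest(annual_max_rainfall, "gumbel_r", args=(loc, scale))
print(f"K-S statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```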

Keywords: drainage, estimate, evaluation, floods, flood forecasting

Procedia PDF Downloads 355
25024 Structural Parameter Identification of Old Steel Truss Bridges

Authors: A. Bogdanovic, M. Vitanova, J. Bojadjieva, Z. Rakicevic, V. Sesov, K. Edip, N. Naumovski, F. Manojlovski, A. Popovska, A. Shoklarovski, T. Kitanovski, D. Ivanovski, I. Markovski, D. Filipovski

Abstract:

The conditions of existing structures change over time and can hardly be characterized, particularly if a bridge has long been in service and there is no design documentation related to it. To define the real condition of a structure, detailed static and dynamic analyses have to be carried out and its modal parameters have to be defined accurately. Modal analysis enables quite accurate identification of the natural frequencies and mode shapes. Presented in this paper are the results of detailed analyses of a steel truss bridge that has been in use for more than seven decades by the military services of R.N. Macedonia and for which there is no documentation at all. Static and dynamic investigations and ambient vibration measurements were performed. The acquired data were used to identify the mode shapes, which were compared with the numerical model. Dynamic tests were performed to define the bridge behaviour and the damping index. Finally, based on all the conducted analyses and investigations, conclusions on the condition of the bridge structure were drawn.
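
As an illustration of how natural frequencies can be identified from ambient vibration records, the sketch below applies basic spectral peak-picking to a synthetic acceleration signal. It is a simplified stand-in for the modal identification performed in the study; the sampling rate, modes, and noise level are assumptions.

```python
# Illustrative sketch: picking natural frequencies from an ambient vibration
# record via the power spectral density (basic peak-picking). Data are synthetic.
import numpy as np
from scipy import signal

fs = 200.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(1)

# Hypothetical bridge response: three lightly damped modes buried in noise
modes = [(2.4, 1.0), (6.1, 0.6), (9.8, 0.4)]  # (frequency Hz, amplitude)
accel = sum(a * np.sin(2 * np.pi * f * t) for f, a in modes)
accel += 0.8 * rng.standard_normal(t.size)

f, pxx = signal.welch(accel, fs=fs, nperseg=4096)
peaks, _ = signal.find_peaks(pxx, height=0.05 * pxx.max())
print("identified natural frequencies (Hz):", np.round(f[peaks], 2))
```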

Keywords: ambient vibrations, dynamic identification, in-situ measurement, steel truss bridge

Procedia PDF Downloads 74
25023 Analysis of Grid Connected High Concentrated Photovoltaic Systems for Peak Load Shaving in Kuwait

Authors: Adel A. Ghoneim

Abstract:

Air conditioning devices are substantially utilized in the summer months; as a result, maximum loads in Kuwait take place in these intervals. Peak energy consumption is usually more expensive to satisfy compared to other standard power sources. The primary objective of the current work is to enhance the performance of high concentrated photovoltaic (HCPV) systems in an attempt to minimize peak power usage in Kuwait using HCPV modules. High concentrated PV multi-junction solar cells offer a promising route towards accomplishing the lowest price per kilowatt-hour. Nevertheless, these cells have various issues that should be resolved for them to be feasible for extensive power production. A single diode equivalent circuit model is formulated to analyze multi-junction solar cell efficiency under Kuwait weather conditions, taking into account the effects of both the temperature and the concentration ratio. The diode shunt resistance, which is commonly ignored in established models, is considered in the present numerical model. The current model results are successfully validated against measurements from published data to within 1.8% accuracy. Present calculations reveal that the single diode model considering the shunt resistance provides accurate and dependable results. The electrical efficiency (η) is observed to increase with concentration up to a specific concentration level, after which it decreases. Employing grid connected HCPV systems results in significant peak load reduction.
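
The single-diode formulation with shunt resistance can be illustrated with a short sketch that solves the implicit I-V equation numerically. The parameter values are generic placeholders, not the multi-junction cell parameters fitted in the paper.

```python
# Illustrative sketch: single-diode model of a solar cell including the shunt
# resistance, solved for current at a given voltage. Parameters are generic.
import numpy as np
from scipy.optimize import brentq

k, q = 1.380649e-23, 1.602176634e-19

def cell_current(V, Iph, I0, Rs, Rsh, n, T):
    """Solve I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh for I."""
    Vt = k * T / q
    f = lambda I: (Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1)
                   - (V + I * Rs) / Rsh - I)
    return brentq(f, -Iph, 2 * Iph)           # implicit equation, bracketed root

# Placeholder parameters for one cell at 25 degrees C
params = dict(Iph=6.0, I0=1e-9, Rs=0.01, Rsh=50.0, n=1.3, T=298.15)

voltages = np.linspace(0, 0.72, 50)
currents = np.array([cell_current(V, **params) for V in voltages])
power = voltages * currents
print(f"maximum power ~ {power.max():.2f} W at V ~ {voltages[power.argmax()]:.2f} V")
```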

Keywords: grid connected, high concentrated photovoltaic systems, peak load, solar cells

Procedia PDF Downloads 146
25022 New Stratigraphy Profile of Classic Nihewan Basin Beds, Hebei, Northern China

Authors: Arya Farjand

Abstract:

The Nihewan Basin is a critical region for understanding the Plio-Pleistocene paleoenvironment and its fauna in Northern China. The rich fossiliferous, fluvial-lacustrine sediments around the Nihewan Village hosted the specimens known as the Classic Nihewan Fauna. The primary excavations in the early 1920s-30s produced more than 2000 specimens, housed in the Tianjin and Paris museums. Nevertheless, the exact localities of the excavations, the fossil beds, and reliable ages remained ambiguous until recent paleomagnetic studies and extensive work at conjunction sites. In this study, for the first time, we successfully relocated some of the original excavation sites. We reexamined more than 1500 specimens held in the Tianjin Museum and recorded their locality numbers and properties. During the field seasons of 2017-2019, we visited the Xiashagou Valley. By reading the descriptions of the original sites, using satellite imagery, and comparing them with the current geomorphology of the area, we established the exact location of 26 of these sites and 17 fossil layers. Furthermore, by applying up-to-date tools such as GPS, compass, digital barometers, laser distance measurers, and an Abney level, we ensured the accuracy of the measurements. We surveyed 133 meters of thickness of the deposits. Ultimately, by applying the available paleomagnetic data for this section, we estimated the ages of the different horizons. The combination of our new data and previously published research provides a unique age control for the Classic Nihewan Fauna. These findings prove the hypothesis that the Classic Nihewan Fauna belongs to different horizons, ranging from before the Réunion to after the Olduvai geomagnetic excursion (2.2-1.7 Mya).

Keywords: classic Nihewan basin fauna, Olduvai excursion, Pleistocene, stratigraphy

Procedia PDF Downloads 122
25021 A Visual Inspection System for Automotive Sheet Metal Chassis Parts Produced with the Cold-Forming Method

Authors: İmren Öztürk Yılmaz, Abdullah Yasin Bilici, Yasin Atalay Candemir

Abstract:

The system consists of four main elements: a motion system, an image acquisition system, image processing software, and a control interface. Parts coming off the production line enter the image processing system via the conveyor belt at the end of the line. A 3D scan of the produced part is performed with the laser scanning system integrated at the system entry side. With the 3D scanning method, the position and angle at which the parts enter the system are determined, and from the data obtained, parameters such as part origin and conveyor speed are calculated with the designed software, and the robot is informed of the position where it will pick up the part. The robot, which receives this information, picks up the produced part from the belt conveyor and presents it to high-resolution cameras for quality control. Measurement processes are carried out with a maximum error of 20 microns, as determined by experiments.

Keywords: quality control, industry 4.0, image processing, automated fault detection, digital visual inspection

Procedia PDF Downloads 91
25020 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be the best way to extract useful information from multiple sources of data. It has been widely applied in various applications. This paper presents a data fusion approach for multimedia data for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text by using the bag-of-words method, calculated using the term frequency-inverse document frequency (TF-IDF). The second is the visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments have indicated that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy for event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 with texts only and 0.86 with images only.
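
Dempster's rule of combination, which underlies the fusion step described above, can be sketched as follows. The mass values standing in for the text and image classifiers are made-up placeholders, not the outputs reported in the paper.

```python
# Illustrative sketch: Dempster's rule of combination for two sources (text and
# image features) assigning belief masses to event hypotheses. Masses are made up.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions over frozenset hypotheses (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

EVENT, NO_EVENT = frozenset({"event"}), frozenset({"no_event"})
THETA = EVENT | NO_EVENT                       # the frame of discernment

m_text  = {EVENT: 0.6, NO_EVENT: 0.1, THETA: 0.3}   # stand-in for TF-IDF evidence
m_image = {EVENT: 0.5, NO_EVENT: 0.2, THETA: 0.3}   # stand-in for SIFT evidence

fused = combine(m_text, m_image)
for hypothesis, mass in fused.items():
    print(sorted(hypothesis), round(mass, 3))
```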

Keywords: data fusion, Dempster-Shafer theory, data mining, event detection

Procedia PDF Downloads 393
25019 Craving Intensity Measurements in Opiate Addicts to Objectify the Opioid Substitution Therapy Dose and Reduce the Relapse Risk

Authors: Igna Brajevic-Gizdic, Magda Pletikosa Pavić

Abstract:

Introduction: Research in opiate addiction is increasingly indicating the importance of substitution therapy in opiate addicts. Opiate addiction is a chronic relapsing disease that includes craving as a criterion. Craving, defined as a strong desire with an excessive need to take a substance, has been considered a predictor of relapse. The study aimed to measure the intensity of craving using the VAS (visual analog scale) in opioid addicts taking Opioid Substitution Therapy (OST). Method: The total sample comprised 30 participants in outpatient treatment. Two groups of opiate addicts were considered: methadone-maintenance and buprenorphine-maintenance addicts. The participants completed the survey questionnaire during outpatient treatment. Results: The results indicated high levels of craving in patients during OST treatment, which is considered an important destabilization factor in abstinence. Thus, the methadone/buprenorphine dose should be reconsidered. Conclusion: These findings provide an objective basis for determining methadone/buprenorphine dosage and therapy options. Underdosing of OST can put patients at high risk of relapse, resulting in high levels of craving. Thus, when determining the therapeutic dose of OST, it is crucial to consider patients' cravings. This would achieve stabilization more quickly and avoid relapse during abstinence. Subjective physician assessment and patients' statements are currently the main criteria used to determine OST dosage. Future studies should use larger sample sizes and focus on the importance of craving intensity measurement during OST to objectify methadone/buprenorphine dosage.

Keywords: buprenorphine, craving, heroin addicts, methadone, OST

Procedia PDF Downloads 72
25018 Legal Issues of Collecting and Processing Big Health Data in the Light of European Regulation 679/2016

Authors: Ioannis Iglezakis, Theodoros D. Trokanas, Panagiota Kiortsi

Abstract:

This paper aims to explore major legal issues arising from the collection and processing of Big Health Data in the light of the new European secondary legislation for the protection of personal data of natural persons, placing emphasis on the General Data Protection Regulation 679/2016. Whether Big Health Data can be characterised as ‘personal data’ or not is really the crux of the matter. The legal ambiguity is compounded by the fact that, even though the processing of Big Health Data is premised on the de-identification of the data subject, the possibility of a combination of Big Health Data with other data circulating freely on the web or from other data files cannot be excluded. Another key point is that the application of some provisions of the GDPR to Big Health Data may both absolve the data controller of his legal obligations and deprive the data subject of his rights (e.g., the right to be informed), ultimately undermining the fundamental right to the protection of personal data of natural persons. Moreover, data subject’s rights (e.g., the right not to be subject to a decision based solely on automated processing) are heavily impacted by the use of AI, algorithms, and technologies that reclaim health data for further use, resulting in sometimes ambiguous results that have a substantial impact on individuals. On the other hand, as the COVID-19 pandemic has revealed, Big Data analytics can offer crucial sources of information. In this respect, this paper identifies and systematises the legal provisions concerned, offering interpretative solutions that tackle dangers concerning data subject’s rights while embracing the opportunities that Big Health Data has to offer. In addition, particular attention is attached to the scope of ‘consent’ as a legal basis in the collection and processing of Big Health Data, as the application of data analytics in Big Health Data signals the construction of new data and subject’s profiles. Finally, the paper addresses the knotty problem of role assignment (i.e., distinguishing between controller and processor/joint controllers and joint processors) in an era of extensive Big Health Data sharing. The findings are the fruit of a current research project conducted by a three-member research team at the Faculty of Law of the Aristotle University of Thessaloniki and funded by the Greek Ministry of Education and Religious Affairs.

Keywords: big health data, data subject rights, GDPR, pandemic

Procedia PDF Downloads 115
25017 Adaptive Data Approximations Codec (ADAC) for AI/ML-based Cyber-Physical Systems

Authors: Yong-Kyu Jung

Abstract:

The fast growth in information technology has led to demands to access and process data. CPSs heavily depend on the timing of hardware/software operations and communication over the network (i.e., real-time/parallel operations in CPSs, e.g., autonomous vehicles). Data processing is an important means of overcoming this issue confronting data management by reducing the gap between technological growth and data complexity and channel bandwidth. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. An ADAC, implemented as an accelerator and/or apps for servers and smart-connected devices, adaptively rescales digital contents (avg. 62.8%), data processing/access time/energy, and encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition).

Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity

Procedia PDF Downloads 65
25016 The Culex Pipiens Niche: Assessment with Climatic and Physiographic Variables via a Geographic Information System

Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, João Casaca

Abstract:

Using a geographic information system (GIS), the relations between a georeferenced data set of Culex pipiens s.l. mosquitoes collected in mainland Portugal during seven years (2006-2012) and meteorological and physiographic parameters such as air relative humidity, air temperature (minimum, maximum and mean daily temperatures), daily total rainfall, altitude, land use/land cover, and proximity to water bodies are evaluated. The focus is on the mosquito females; the characterization of their habitat is the key for planning chirurgical, non-aggressive prophylactic countermeasures that avoid ambient degradation. The GIS allows for the spatial determination of the zones where mean mosquito captures have been above average; using the meteorological values at these coordinates, the limits of each parameter are identified/computed. The meteorological parameters measured at the network of weather stations all over the country are averaged by month and interpolated to produce raster maps that can be segmented according to the thresholds obtained for each parameter. The intersection of the maps obtained for each month shows the evolution of the area favorable to the species through the mosquito season, which runs from May to October at these latitudes. In parallel, mean and above-average captures were related to the physiographic parameters. Three levels of risk could be identified for each parameter, using above-average captures as an index. The results were applied to the suitability meteorological maps of each month. The Culex pipiens critical niche is delimited, reflecting the critical areas and the level of risk for transmission of the pathogens for which they are competent vectors (West Nile virus, iridoviruses, rheoviruses and parvoviruses).
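
The thresholding-and-intersection step described above can be illustrated with a short sketch. The raster shapes, parameter values, and limits below are placeholders, not the thresholds derived from the Portuguese capture data.

```python
# Illustrative sketch: segmenting interpolated monthly rasters by parameter limits
# and intersecting them into a suitability map. Values are placeholders.
import numpy as np

rng = np.random.default_rng(2)
shape = (200, 300)                               # raster grid (rows, cols)

# Hypothetical interpolated monthly rasters for one month
mean_temp = rng.normal(22, 4, shape)             # degrees C
rel_humidity = rng.normal(60, 15, shape)         # %
rainfall = rng.gamma(2.0, 1.5, shape)            # mm/day

# Parameter limits derived (hypothetically) from above-average capture sites
suitable = (
    (mean_temp >= 18) & (mean_temp <= 28)
    & (rel_humidity >= 45)
    & (rainfall <= 6)
)

print(f"favourable area this month: {100 * suitable.mean():.1f}% of the grid")

# Intersecting several monthly masks (e.g. may & june & july & ...) would show
# how the favourable area evolves through the May-October season.
```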

Keywords: Culex pipiens, ecological niche, risk assessment, risk management

Procedia PDF Downloads 522
25015 Inadequacy of Macronutrient and Micronutrient Intake in Children Aged 12-23 Months Old: An Urban Study in Central Jakarta, Indonesia

Authors: Dewi Fatmaningrum, Ade Wiradnyani

Abstract:

Background: Optimal feeding, including optimal micronutrient intake, is one of the ways to overcome the long-term consequences of undernutrition. Macronutrient and micronutrient intakes are important for the rapid growth and development of children. Objectives: To assess the macronutrient and micronutrient intake of children aged 12-23 months old and the nutrient inadequacy of their intake. Methods: This survey was a cross-sectional study; simple random sampling was performed to select respondents. The total sample of this study was 83 children aged 12-23 months old in Paseban Village, Senen Sub-district, Central Jakarta. The data were collected via interviews and hemoglobin measurement of the children. Results: The highest prevalence of inadequacy was for iron intake (52.4%) compared to other micronutrients, while 11.98% of children had inadequate energy intake. There were 62.6% anemic children in the study area, divided into anemic (37.3%) and severely anemic (25.3%). Conclusion: Micronutrient inadequacy occurred more frequently than macronutrient inadequacy in the study area. The higher the percentage of iron inadequacy, the higher the observed percentage of anemia among children.

Keywords: micronutrient, macronutrient, children under five, urban setting

Procedia PDF Downloads 319
25014 Potential Climate Change Impacts on the Hydrological System of the Harvey River Catchment

Authors: Hashim Isam Jameel Al-Safi, P. Ranjan Sarukkalige

Abstract:

Climate change is likely to impact the Australian continent by changing the trends of rainfall, increasing temperature, and affecting the availability of water in terms of quantity and quality. This study investigates the possible impacts of future climate change on the hydrological system of the Harvey River catchment in Western Australia by using a conceptual modelling approach (the HBV model). Daily observations of rainfall and temperature and the long-term monthly mean potential evapotranspiration, from six weather stations, were available for the period 1961-2015. The observed streamflow data at the Clifton Park gauging station for 33 years (1983-2015), together with the observed climate variables, were used to run, calibrate and validate the HBV model prior to the simulation process. The calibrated model was then forced with the downscaled future climate signals from a multi-model ensemble of fifteen GCMs of the CMIP3 project under three emission scenarios (A2, A1B and B1) to simulate the future runoff at the catchment outlet. Two periods were selected to represent future climate conditions: the mid (2046-2065) and late (2080-2099) 21st century. A control run, with the reference climate period (1981-2000), was used to represent the current climate status. The modelling outcomes show an evident reduction in the mean annual streamflow during the middle of this century, particularly for the A1B scenario, relative to the control run. Toward the end of the century, all scenarios show relatively strong reduction trends in the mean annual streamflow, especially the A1B scenario, compared to the control run. The decline in the mean annual streamflow ranged between 4-15% during the middle of the current century and 9-42% by the end of the century.

Keywords: climate change impact, Harvey catchment, HBV model, hydrological modelling, GCMs, LARS-WG

Procedia PDF Downloads 244
25013 Dust and Soiling Accumulation Effect on Photovoltaic Systems in Middle East and North Africa Region

Authors: Iyad Muslih, Azzah Alkhalailah, Ali Merdji

Abstract:

Photovoltaic efficiency is highly affected by dust accumulation; the dust particles prevent direct solar radiation from reaching the panel surface, and therefore a reduction in output power occurs. A study of the effect of dust and soiling accumulation on the output power of PV panels was conducted for different periods of time from May to October in three countries of the MENA region, Jordan, Egypt, and Algeria, under local weather conditions. The study leads to a more realistic equation for estimating the power reduction as a function of time. This logarithmic function shows a sharp reduction in power during the first days, with a 10% reduction in output power compared to the reference system, and it reaches a steady-state value after about 60 days at a maximum reduction of 30%.
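
A minimal sketch of fitting a logarithmic soiling-loss curve of the kind described above follows. The data points and the exact functional form (loss(t) = a·ln(1 + t)) are assumptions for illustration, not the paper's fitted equation.

```python
# Illustrative sketch: fitting a logarithmic soiling-loss curve to measured
# relative power reduction. The data points are fabricated placeholders.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([1, 3, 7, 14, 21, 30, 45, 60, 90, 120])
power_loss_pct = np.array([7, 10, 14, 18, 21, 24, 27, 29, 30, 30])  # % vs clean panel

def log_loss(t, a):
    return a * np.log1p(t)        # assumed functional form: a * ln(1 + t)

(a_fit,), _ = curve_fit(log_loss, days, power_loss_pct)
print(f"loss(t) ~ {a_fit:.2f} * ln(1 + t)  [%]")
print(f"predicted loss after 60 days ~ {log_loss(60, a_fit):.1f}%")
```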

Keywords: solar energy, PV system, soiling, MENA

Procedia PDF Downloads 205
25012 Wellbore Spiraling Induced through Systematic Micro-Sliding

Authors: Christopher Viens, Bosko Gajic, Steve Krase

Abstract:

Stick-slip is a term that is often overused and commonly diagnosed from the surface drilling parameters of torque and differential pressure, but the actual magnitude of the condition is rarely captured at the BHA level, as the necessary measurements are seldom deployed. Deployment of an accurate downhole stick-slip measurement has led to an interesting discovery that goes against long-held traditional drilling lore: a divide has been identified between stick-slip at the bit and at the BHA as independent conditions. This phenomenon is common in horizontal laterals, but few M/LWD systems have been able to capture it. Utilizing measurements of downhole RPM, bore pressure, high-speed magnetometer data, bending moment, and continuous inclination, the wellbore spiraling phenomenon is able to be captured, quantified, and intimately tied back to systematic effects of BHA stalling and micro-sliding. An operator in the Permian Basin has identified that this phenomenon is contributing to increased tortuosity and drag. Utilizing downhole torque measurements, the root causes of the stick-slip and spiraling phenomena were identified and engineered out of the system.

Keywords: bending moment, downhole dynamics measurements, micro sliding, wellbore spiraling

Procedia PDF Downloads 232
25011 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images where the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To face this problem, an approach to Non-Cooperative Target Identification based on the exploitation of Singular Value Decomposition applied to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using Singular Value Decomposition since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics, F1 and F2, based on Singular Value Decomposition are used in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance of the contribution to the formation of a target signal; in contrast, F1 simply shows the evolution of the unweighted angle. In order to have a wide database of radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Taking into account the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similar high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a possible real scenario. To confirm the validity of the methodology, identification experiments with profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance is improved when weighting is applied. Future experiments with larger sets are expected to be conducted with the aim of finally using actual profiles as test sets in a real hostile situation.
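
The subspace-angle classification step can be sketched as follows: each target class is modelled by the leading left singular vectors of its training profile matrix, and a test profile is assigned to the class with the smallest angle (an unweighted, F1-style criterion). The synthetic profile generator, dimensions, and class names are placeholders, not the simulated aircraft data of the study.

```python
# Illustrative sketch: SVD signal subspaces per target class and classification
# of a test range profile by the smallest angle to each subspace. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_bins, n_train, rank = 128, 40, 8

def make_target_profiles(seed, n):
    """Generate synthetic range profiles for one hypothetical target class."""
    r = np.random.default_rng(seed)
    basis = r.standard_normal((n_bins, rank))          # target-specific structure
    coeffs = r.standard_normal((rank, n))
    return basis @ coeffs + 0.3 * r.standard_normal((n_bins, n))

def signal_subspace(profiles, k):
    """Columns of U spanning the k-dimensional signal subspace."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :k]

def angle_to_subspace(U, x):
    """Angle between a test profile and the subspace spanned by U."""
    proj = U @ (U.T @ x)
    return np.arccos(np.clip(np.linalg.norm(proj) / np.linalg.norm(x), 0, 1))

# Training: one subspace per (hypothetical) aircraft class
subspaces = {name: signal_subspace(make_target_profiles(seed, n_train), rank)
             for name, seed in [("aircraft_A", 10), ("aircraft_B", 20), ("aircraft_C", 30)]}

# Test: a noisy profile drawn from class "aircraft_B"
test = make_target_profiles(20, 1)[:, 0] + 0.5 * rng.standard_normal(n_bins)

angles = {name: angle_to_subspace(U, test) for name, U in subspaces.items()}
print("identified as:", min(angles, key=angles.get))
```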

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 339
25010 Design and Analysis of a Combined Cooling, Heating and Power Plant for Maximum Operational Flexibility

Authors: Salah Hosseini, Hadi Ramezani, Bagher Shahbazi, Hossein Rabiei, Jafar Hooshmand, Hiwa Khaldi

Abstract:

Diversity of the energy portfolio and fluctuation of urban energy demand establish the need for more operational flexibility of combined cooling, heating, and power plants. Currently, the most common way to achieve these specifications is the use of heat storage devices or wet operation of gas turbines. The current work addresses using a variable extraction steam turbine in conjunction with a gas turbine inlet air cooling (TIAC) system as an alternative way to enhance the operating range of a CCHP cycle. A thermodynamic model is developed, and a typical apartment building in the PARDIS Technology Park (located in Tehran Province) is chosen as a case study. Due to the variable heat demand and the use of excess chiller capacity for turbine inlet cooling, the mentioned steam turbine and TIAC system provide an opportunity for flexible operation of the cycle and boost the independence of power and heat generation in the CCHP plant. It was found that the power-to-heat ratio of the CCHP cycle varies from 12.6 to 2.4 depending on the city heating and cooling demands and the ambient conditions, which indicates good independence between power and heat generation. Furthermore, the selection of the TIAC design temperature is based on the ratio of power gain to TIAC coil surface area; it was found that for the current cycle arrangement, a TIAC design temperature of 15 °C is most economical. All analyses are based on real data gathered from the local weather station at the PARDIS site.

Keywords: CCHP plant, GTG, HRSG, STG, TIAC, operational flexibility, power to heat ratio

Procedia PDF Downloads 268
25009 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength

Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma

Abstract:

The precise measurement of muscle activities is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve the management of muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles. Challenges arise from potential skin-electrode contact issues due to perspiration, leading to inaccurate signals or even signal loss. Additionally, resistance and phase are significantly impacted by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement techniques. These magnetometers possess high sensitivity and obviate the need for a cryogenic environment, unlike superconducting quantum interference devices (SQUIDs). They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT). The utilization of magnetometers for capturing muscle magnetic field signals remains unaffected by issues of perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring organ magnetic fields such as cardiac and brain magnetism. The optimal operation of these magnetometers necessitates an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually combine active magnetic compensation technology with passive magnetic shielding technology. Passive magnetic shielding uses a shielding device built with high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that can generate a reverse magnetic field to precisely compensate for the residual magnetic fields are cheaper and more flexible. To attain even lower magnetic fields, compensation coils designed using the Biot-Savart law are employed to generate a counteractive magnetic field that eliminates residual magnetic fields. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivatives of the stream function, which can be represented by a combination of trigonometric functions. Optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, obtaining a coefficient of determination R2 of 0.9349 with a p-value of 0. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to be a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
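
The regression between the peak-to-peak value and grip strength can be illustrated with a short sketch. The paired values below are fabricated placeholders, not the study's measurements, and the units are assumptions.

```python
# Illustrative sketch: one-dimensional linear regression between the peak-to-peak
# value (PPV) of the muscle magnetic signal and grip strength. Data are fabricated.
import numpy as np
from scipy import stats

grip_strength = np.array([18, 22, 25, 28, 30, 33, 36, 40, 44, 48])    # kg, assumed
ppv_pT = np.array([1.1, 1.4, 1.5, 1.8, 1.9, 2.2, 2.3, 2.7, 2.9, 3.2])  # pT, assumed

result = stats.linregress(grip_strength, ppv_pT)
print(f"PPV ~ {result.slope:.3f} * grip + {result.intercept:.3f}")
print(f"R^2 = {result.rvalue**2:.4f}, p = {result.pvalue:.2e}")
```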

Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions

Procedia PDF Downloads 47
25008 Correlation Results Based on In-Situ and Ex-Situ Magnetic Susceptibility Measurements as Indicators of Environmental Changes Due to the Fertilizer Industry

Authors: Nurin Amalina Widityani, Adinda Syifa Azhari, Twin Aji Kusumagiani, Eleonora Agustine

Abstract:

Fertilizer industry activities contribute to environmental changes. Changes to the environment have become one of the notable problems of this era of globalization. Parameters that can serve as criteria for identifying changes in the environment can be drawn from the aspects of physics, chemistry, and biology. One aspect that can be assessed quickly and efficiently to describe environmental change is physics, one measure of which is the magnetic susceptibility (χ) value. The rock magnetism method can be used as a proxy indicator of environmental changes, seen from the value of magnetic susceptibility. The rock magnetism method is based on magnetic susceptibility studies to measure and classify the degree of pollutant elements that cause changes in the environment. This research was conducted in the area around the fertilizer plant, with five coring points on each track, each coring point reaching a depth of 15 cm. Magnetic susceptibility measurements were performed both in situ and ex situ. In-situ measurements were carried out directly using the SM30 tool by placing it on the soil surface at each measurement point to obtain the magnetic susceptibility value. Meanwhile, ex-situ measurements were performed in the laboratory using the Bartington MS2B susceptibility meter on coring samples taken every 5 cm. The in-situ measurements show that the value of magnetic susceptibility at the surface varies, with the lowest values at the second and fifth points (-0.81) and the highest value at the third point (0.345). The ex-situ measurements reveal the variation of magnetic susceptibility values at each coring depth. At a depth of 0-5 cm, the highest XLF = 494.8 (x10-8 m³/kg) is at the third point, while the lowest XLF = 187.1 (x10-8 m³/kg) is at the first point. At a depth of 6-10 cm, the highest XLF, 832.7 (x10-8 m³/kg), is at the second point, while the lowest XLF, 211 (x10-8 m³/kg), is at the first point. At a depth of 11-15 cm, the highest XLF = 857.7 (x10-8 m³/kg) is at the second point, whereas the lowest XLF = 83.3 (x10-8 m³/kg) is at the fifth point. Based on the in-situ and ex-situ measurements, it can be seen that the highest magnetic susceptibility values from the surface samples are at the third point.

Keywords: magnetic susceptibility, fertilizer plant, Bartington MS2B, SM30

Procedia PDF Downloads 328
25007 Real-Time Visualization Using GPU-Accelerated Filtering of LiDAR Data

Authors: Sašo Pečnik, Borut Žalik

Abstract:

This paper presents a real-time visualization technique and filtering of classified LiDAR point clouds. The visualization is capable of displaying filtered information organized in layers by the classification attribute saved within LiDAR data sets. We explain the used data structure and data management, which enables real-time presentation of layered LiDAR data. Real-time visualization is achieved with LOD optimization based on the distance from the observer without loss of quality. The filtering process is done in two steps and is entirely executed on the GPU and implemented using programmable shaders.

Keywords: filtering, graphics, level-of-details, LiDAR, real-time visualization

Procedia PDF Downloads 290
25006 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Opinion dynamics models use an agent-based modeling approach to model people’s opinions. A model's properties are usually explored by testing the two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used for turning one model into another or for changing the model’s output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) can be turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model’s dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into the other, in the same way as we convert meters to feet. Thus, in our work, we analyze how this scale transformation may affect opinion dynamics models. We perform our analysis both using mathematical modeling and validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model’s dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way data are pre-processed. Indeed, we quantify that this effect may alter the model’s output by 100%. Using two models from the standard literature, we show that a scale transformation can transform one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of using real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered as a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
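
The effect of an opinion-to-number rescaling can be illustrated with a minimal sketch: the same bounded-confidence interaction rule is applied to raw opinions and to a monotonically rescaled version of the same opinions, which can yield a different number of opinion clusters. The Deffuant-style rule, the cubic rescaling, and all parameters are illustrative assumptions, not the specific pair of models analysed in the paper.

```python
# Illustrative sketch: a bounded-confidence (Deffuant-style) rule applied to raw
# opinions and to a monotone rescaling of the same opinions. Parameters are made up.
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=200_000, seed=4):
    rng = np.random.default_rng(seed)
    x = opinions.copy()
    for _ in range(steps):
        i, j = rng.integers(0, x.size, 2)
        if abs(x[i] - x[j]) < eps:            # interact only within the threshold
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def count_clusters(x, tol=0.05):
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

rng = np.random.default_rng(5)
raw = rng.uniform(0, 1, 200)                  # opinions on the original scale
rescaled = raw ** 3                           # same opinions, monotone rescaling

print("clusters on raw scale:     ", count_clusters(deffuant(raw)))
print("clusters on rescaled scale:", count_clusters(deffuant(rescaled)))
```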

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 140
25005 Surface Thermodynamics Approach to Mycobacterium tuberculosis (M-TB) – Human Sputum Interactions

Authors: J. L. Chukwuneke, C. H. Achebe, S. N. Omenyi

Abstract:

This research work presents a surface thermodynamics approach to M-TB/HIV-human sputum interactions. This involved the use of the Hamaker coefficient concept as a surface energetics tool in determining the interaction processes, with the surface interfacial energies explained using the van der Waals concept of particle interactions. The Lifshitz derivation for van der Waals forces was applied as an alternative to the contact angle approach, which has been widely used in other biological systems. The methodology involved taking sputum samples from twenty infected persons and from twenty uninfected persons for absorbance measurement using a digital ultraviolet-visible spectrophotometer. The variables required for the computations with the Lifshitz formula were derived from the absorbance data. Matlab software tools were used in the mathematical analysis of the data produced from the experiments (absorbance values). The Hamaker constants and the combined Hamaker coefficients were obtained using the values of the dielectric constant together with the Lifshitz equation. The absolute combined Hamaker coefficients A132abs and A131abs for both infected and uninfected sputum samples gave values of A132abs = 0.21631x10-21 Joule for M-TB infected sputum and Ā132abs = 0.18825x10-21 Joule for M-TB/HIV infected sputum. The significance of this result is the positive value of the absolute combined Hamaker coefficient, which suggests the existence of net positive van der Waals forces demonstrating an attraction between the bacteria and the macrophage. This, however, implies that infection can occur. It was also shown that in the presence of HIV, the interaction energy is reduced by 13%, confirming the adverse effects observed in HIV patients suffering from tuberculosis.

Keywords: absorbance, dielectric constant, hamaker coefficient, lifshitz formula, macrophage, mycobacterium tuberculosis, van der waals forces

Procedia PDF Downloads 255
25004 International Financial Reporting Standards and the Quality of Banks Financial Statement Information: Evidence from an Emerging Market-Nigeria

Authors: Ugbede Onalo, Mohd Lizam, Ahmad Kaseri, Otache Innocent

Abstract:

Given the paucity of studies on IFRS adoption and the quality of banks' accounting, particularly in emerging economies, this study is motivated to investigate whether Nigeria's decision to adopt IFRS beginning from 1 January 2012 is associated with high-quality accounting measures. Consistent with prior literature, this study measures the quality of financial statement information using earnings measurement, timeliness of loss recognition, and value relevance. A total of twenty Nigerian banks covering a period of six years (2008-2013), divided equally into a three-year pre-adoption period (2008, 2009, 2010) and a three-year post-adoption period (2011, 2012, 2013), were investigated. Following prior studies, eight models in all were employed to investigate the earnings management, timeliness of loss recognition, and value relevance of Nigerian banks' accounting quality for the different reporting regimes. Results suggest that IFRS adoption is associated with minimal earnings management, timely recognition of losses, and high value relevance of accounting information. In summary, IFRS adoption engenders higher quality of banks' financial statement information compared to local GAAP. Hence, this study recommends the global adoption of IFRS and that Nigerian banks should embrace good corporate governance practices.

Keywords: IFRS, SAS, quality of accounting information, earnings measurement, discretionary accruals, non-discretionary accruals, total accruals, Jones model, timeliness of loss recognition, value relevance

Procedia PDF Downloads 451
25003 Elaboration and Validation of a Survey about Research on the Characteristics of Mentoring of University Professors’ Lifelong Learning

Authors: Nagore Guerra Bilbao, Clemente Lobato Fraile

Abstract:

This paper outlines the design and development of the MENDEPRO questionnaire, designed to analyze mentoring performance within a professional development process carried out with professors at the University of the Basque Country, Spain. The study took into account the international research carried out over the past two decades into teachers' professional development, and was also based on a thorough review of the most common instruments used to identify and analyze mentoring styles, many of which fail to provide sufficient psychometric guarantees. The present study aimed to gather empirical data in order to verify the metric quality of the questionnaire developed. To this end, the process followed to validate the theoretical construct was as follows: The formulation of the items and indicators in accordance with the study variables; the analysis of the validity and reliability of the initial questionnaire; the review of the second version of the questionnaire and the definitive measurement instrument. Content was validated through the formal agreement and consensus of 12 university professor training experts. A reduced sample of professors who had participated in a lifelong learning program was then selected for a trial evaluation of the instrument developed. After the trial, 18 items were removed from the initial questionnaire. The final version of the instrument, comprising 33 items, was then administered to a sample group of 99 participants. The results revealed a five-dimensional structure matching theoretical expectations. Also, the reliability data for both the instrument as a whole (.98) and its various dimensions (between .91 and .97) were very high. The questionnaire was thus found to have satisfactory psychometric properties and can therefore be considered apt for studying the performance of mentoring in both induction programs for young professors and lifelong learning programs for senior faculty members.

Keywords: higher education, mentoring, professional development, university teaching

Procedia PDF Downloads 168
25002 Characterization of Fine Particles Emitted by the Inland and Maritime Shipping

Authors: Malika Souada, Juanita Rausch, Benjamin Guinot, Christine Bugajny

Abstract:

The increase of global commerce and tourism makes the shipping sector an important contributor to atmospheric pollution. Both airborne particles and gaseous pollutants have a negative impact on health and climate. This is especially the case in port cities, due to the proximity of the exposed population to the shipping emissions, in addition to multiple other sources of pollution linked to the surrounding urban activity. The objective of this study is to determine the concentrations of fine particles (immission), specifically PM2.5, PM1, PM0.3, BC and sulphates, in a context where maritime passenger traffic plays an important role (port area of Bordeaux centre). The methodology is based on high temporal resolution measurements of pollutants, correlated with meteorological and ship movement data. Particles and gaseous pollutants from seven maritime passenger ships were sampled and analysed during the docking, manoeuvring and berthing phases. The particle mass measurements were supplemented by measurements of the number concentration of ultrafine particles (<300 nm diameter). The different measurement points were chosen by taking into account the local meteorological conditions and by pre-modelling the dispersion of the smoke plumes. The results of the measurement campaign carried out during the summer of 2021 in the port of Bordeaux show that the concentrations of particles emitted by ships proved to be localized and short-lived. Short-lived peaks of ultrafine particle number concentration (P#/m3) and BC (ng/m3) were measured during the docking phases of the ships, but the concentrations returned to their background level within minutes. However, it appears that the docking phases do not significantly affect the air quality of Bordeaux centre in terms of mass concentration. Additionally, no clear differences in PM2.5 concentrations between the periods with and without ships at berth were observed. The urban background pollution seems to be mainly dominated by exhaust and non-exhaust road traffic emissions. However, temporal high-resolution measurements suggest a probable emission of gaseous precursors, related to ship activities, responsible for the formation of secondary aerosols. This was evidenced by the high values of the PM1/BC and PN/BC ratios, tracers of non-primary particle formation, during periods of ship berthing vs. periods without ships at berth. The research findings from this study provide robust support for port area air quality assessment and source apportionment.

Keywords: characterization, fine particulate matter, harbour air quality, shipping impacts

Procedia PDF Downloads 88
25001 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement

Authors: Hu Zhenxing, Gao Jianxin

Abstract:

Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled in a limited range. The distortion is non-linear, particularly in a complex image acquisition system. Thus, distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate in a complex image acquisition system using conventional models in cases where microscopes and other complex lenses are involved. The errors of the distortion correction will propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior process to convert the distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of both the conventional method and the proposed approach.
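
A minimal sketch of the idea of mapping distorted coordinates onto an ideal plane with 2D B-spline functions follows, assuming scipy's SmoothBivariateSpline as the spline fitter and a synthetic radial distortion purely to generate calibration data. It is not the authors' implementation.

```python
# Illustrative sketch: fitting 2D B-spline mapping functions that send distorted
# image coordinates to their ideal (distortion-free) positions via a calibration grid.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Ideal calibration grid points (e.g., dot-target centres), in pixels
xx, yy = np.meshgrid(np.linspace(-300, 300, 15), np.linspace(-200, 200, 11))
x_ideal, y_ideal = xx.ravel(), yy.ravel()

# Synthetic radial distortion standing in for the (unknown) imaging system
r2 = x_ideal**2 + y_ideal**2
k1 = 1.2e-7
x_dist = x_ideal * (1 + k1 * r2)
y_dist = y_ideal * (1 + k1 * r2)

# Fit one B-spline surface per output coordinate: (u, v) distorted -> ideal
map_x = SmoothBivariateSpline(x_dist, y_dist, x_ideal, kx=3, ky=3)
map_y = SmoothBivariateSpline(x_dist, y_dist, y_ideal, kx=3, ky=3)

# Correct an arbitrary distorted point
u, v = 150.0, 100.0
print("ideal position ~", map_x(u, v).item(), map_y(u, v).item())
```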

Keywords: distortion, stereo-based digital image correlation, b-spline, 3D, 2D

Procedia PDF Downloads 483
25000 Estimating Destinations of Bus Passengers Using Smart Card Data

Authors: Hasik Lee, Seung-Young Kho

Abstract:

Nowadays, automatic fare collection (AFC) systems are widely used in many countries. However, smart card data from many cities do not contain alighting information, which is necessary to build OD matrices. Therefore, in order to utilize smart card data, the destinations of passengers should be estimated. In this paper, kernel density estimation was used to forecast the probabilities of the alighting stations of bus passengers and was applied to smart card data in Seoul, Korea, which contains both boarding and alighting information. This method was also validated with actual data. In some cases, the stochastic method was more accurate than the deterministic method. Therefore, it is sufficiently accurate to be used to build OD matrices.
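
A minimal sketch of kernel-density-based alighting estimation follows: a KDE is built over a passenger's historical alighting positions and used to score candidate stops for a trip with no tag-off record. The stop positions and trip history are fabricated placeholders; the actual study works with the Seoul AFC data.

```python
# Illustrative sketch: kernel density estimation over historical alighting stops,
# used to score candidate alighting stops for a trip without alighting data.
import numpy as np
from scipy.stats import gaussian_kde

# Historical alighting positions for one card holder, as distance along the
# route (km). In practice these come from trips that do contain alighting data.
history_km = np.array([4.8, 5.1, 5.0, 12.3, 5.2, 4.9, 12.1, 5.0, 5.3, 12.4])

kde = gaussian_kde(history_km)

# Candidate stops downstream of the boarding stop for the trip to be estimated
candidate_stops_km = np.array([2.0, 3.5, 5.0, 7.5, 10.0, 12.2, 14.0])
scores = kde(candidate_stops_km)
probs = scores / scores.sum()

best = candidate_stops_km[np.argmax(probs)]
for stop, p in zip(candidate_stops_km, probs):
    print(f"stop at {stop:5.1f} km: P ~ {p:.2f}")
print(f"estimated alighting stop: {best} km")
```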

Keywords: destination estimation, Kernel density estimation, smart card data, validation

Procedia PDF Downloads 335
24999 Compact LWIR Borescope Sensor for Thermal Imaging of 2D Surface Temperature in Gas-Turbine Engines

Authors: Andy Zhang, Awnik Roy, Trevor B. Chen, Bibik Oleksandar, Subodh Adhikari, Paul S. Hsu

Abstract:

The durability of a combustor in gas-turbine engines is a strong function of its component temperatures and requires good control of these temperatures. Since the temperature of the combustion gases frequently exceeds the melting point of the combustion liner walls, an efficient air-cooling system with optimized flow rates of cooling air is significantly important to prolong the lifetime of the liner walls. To determine the effectiveness of the air-cooling system, accurate two-dimensional (2D) surface temperature measurement of the combustor liner walls is crucial for advanced engine development. Traditional diagnostic techniques for temperature measurement in this application include thermocouples, thermal wall paints, pyrometry, and phosphors. They have shown some disadvantages, including being intrusive and affecting local flame/flow dynamics, potential flame quenching, physical damage to instrumentation due to the harsh environment inside the combustor, and strong optical interference from combustion emission in the UV-mid-IR wavelengths. To overcome these drawbacks, a compact borescope long-wave-infrared (LWIR) sensor is developed to achieve high-spatial-resolution, high-fidelity thermal imaging of 2D surface temperature in gas-turbine engines, providing the desired engine component temperature distribution. The compact LWIR borescope sensor makes it feasible to improve the durability of a combustor in gas-turbine engines and, furthermore, to develop more advanced gas-turbine engines.

Keywords: borescope, engine, long-wave-infrared, sensor

Procedia PDF Downloads 108
24998 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system using BT to drive SCS transparency, security, durability, and process integrity, as SCS data are not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated as they can impact existing cost control strategies. To account for system and deployment costs, it is necessary to overcome the following hurdle: the costs of developing and running BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in an SCS may help managers decide whether BT is to be an economic advantage. The purpose of the research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine the suitable Supervised Learning technique to predict the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running BT cost can be incorporated into the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method to set up the data frame, treat the data, and train the chosen model. It is a learning model directed at making predictions of an outcome measurement based on a set of unseen input data. The following steps must be conducted to achieve the objectives of our study. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, we choose some Supervised Learning methods which are suitable for BT installation cost prediction in an SCS. According to the literature review, some Supervised Learning algorithms which provide us with a powerful tool to classify BT installation components and predict BT installation cost are the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in an SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in an SCS with the help of Supervised Learning algorithms. As a first attempt, we will select a case study in the field of BT-enabled SCS and then use some Supervised Learning algorithms to predict BT installation cost in the SCS. We continue to find the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.
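
Since SVR is named above as one of the candidate algorithms, a generic SVR cost-prediction pipeline can be sketched as follows. The feature names, synthetic data, and hyperparameters are placeholders, not results or choices from the study.

```python
# Illustrative sketch: Support Vector Regression predicting BT installation cost
# from a few hypothetical cost-component features. All data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(6)
n = 300

# Hypothetical cost components per deployment (arbitrary units)
nodes = rng.integers(5, 200, n)              # number of blockchain nodes
integrations = rng.integers(1, 30, n)        # legacy-system integrations
dev_months = rng.uniform(3, 36, n)           # development effort
training_hours = rng.uniform(10, 500, n)     # staff training

X = np.column_stack([nodes, integrations, dev_months, training_hours])
cost = (2.0 * nodes + 8.0 * integrations + 15.0 * dev_months
        + 0.3 * training_hours + rng.normal(0, 25, n))   # synthetic ground truth

X_train, X_test, y_train, y_test = train_test_split(X, cost, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"mean absolute error ~ {mean_absolute_error(y_test, pred):.1f} cost units")
```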

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 109