Search results for: 3D measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2617

2107 Modeling Route Selection Using Real-Time Information and GPS Data

Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento

Abstract:

Understanding the behavior of individuals and the different human factors that influence choice in a complex system such as transportation is one of the most difficult aspects of route choice modeling, because various behaviors and driving styles directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual transport service applications, among others. This has generated interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision making. This paper proposes and estimates a hybrid model that integrates route choice models and latent variables, based on observation of the routes of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving style. The set of choice options includes the routes generated by the individual transport service applications versus the driver's choice. The hybrid model consists of measurement equations that relate latent variables with measurement indicators and utilities with choice indicators, along with structural equations that link the observable characteristics of drivers with latent variables and explanatory variables with utilities.
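
As a minimal sketch of the hybrid (integrated choice and latent variable) structure described above, with hypothetical notation not taken from the paper:

```latex
\begin{align*}
\eta_n &= \Gamma z_n + \omega_n
  && \text{structural: latent traits from observable driver characteristics } z_n\\
I_n &= \Lambda \eta_n + \varepsilon_n
  && \text{measurement: psychometric indicators } I_n \text{ explained by latent traits}\\
U_{in} &= \beta^\top x_{in} + \lambda^\top \eta_n + \epsilon_{in}
  && \text{utility of route } i \text{ (app-generated or driver-chosen) for driver } n\\
y_{in} &= \mathbb{1}\{U_{in} \ge U_{jn}\ \forall j\}
  && \text{choice indicator}
\end{align*}
```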

Keywords: behavior choice model, human factors, hybrid model, real time data

Procedia PDF Downloads 126
2106 Shariah Guideline on Value-Based Intermediation Implementation in the Light of Maqasid Shariah Analysis

Authors: Muhammad Izzam Bin Mohd Khazar, Ruqayyah Binti Mohamad Ali, Nurul Atiqah Binti Yusri

Abstract:

Value-based intermediation (VBI) has been introduced by Bank Negara Malaysia (BNM) as the next strategic direction and growth driver for Islamic banking institutions. The aim of VBI is to deliver the intended outcome of Shariah through practices, conducts, and offerings that generate a positive and sustainable impact on the economy, community, and environment, in alignment with Maqasid Shariah in preserving the common interest of society by preventing harm and maximizing benefit. Hence, upon its implementation, VBI will test the current Shariah compliance treatment and bring about new policies and systems that can effectively entrench and convey the objectives of Shariah. However, discussion of VBI in the light of Maqasid analysis is still scarce; hence, further research needs to be undertaken. The translation of the VBI vision into a quantifiable Maqasid Shariah measurement is yet to be explored because of the variable nature of Maqasid, and contemporary scholars also hold different views on the implementation of VBI. This paper aims to discuss the importance of Maqasid Shariah in current Islamic finance transactions by providing a Shariah index measurement for the application of VBI. This study also intends to explore basic Shariah guidelines and parameters based on the objectives of Shariah, namely the preservation of the five pillars (religion, life, progeny, intellect and wealth), with further elaboration on the preservation of wealth under five headings: rawaj (circulation and marketability); wuduh (transparency); hifz (preservation); thabat (durability and tranquillity); and ‘adl (equity and justice). In alignment with these headings, Islamic finance can be innovated for VBI implementation, particularly in Maybank Islamic, a significant leader in the IFI market.

Keywords: Islamic Financial Institutions, Maqasid Index, Maqasid Shariah, sustainability, value-based intermediation

Procedia PDF Downloads 146
2105 Dosimetric Comparison among Different Head and Neck Radiotherapy Techniques Using PRESAGE™ Dosimeter

Authors: Jalil ur Rehman, Ramesh C. Tailor, Muhammad Isa Khan, Jahnzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott

Abstract:

Purpose: The purpose of this analysis was to investigate the dose distributions of different head and neck cancer radiotherapy techniques (3D-CRT, IMRT, and VMAT) using a 3-dimensional dosimeter called the PRESAGE™ dosimeter. Materials and Methods: Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck anthropomorphic phantom, with both the RPC standard insert and the PRESAGE™ insert, were acquired separately with a Philips CT scanner, and both CT scans were exported via DICOM to the Pinnacle version 9.4 treatment planning system (TPS). Each plan was delivered twice to the RPC phantom, first containing the RPC standard insert with TLD and film dosimeters, and then containing the PRESAGE™ insert with the 3D dosimeter, using a Varian TrueBeam linear accelerator. After irradiation, the standard insert, including the point dose measurements (TLD) and the planar Gafchromic® EBT film measurement, was read using the RPC standard procedure. The 3D dose distribution from PRESAGE™ was read out with the Duke Midsized Optical Scanner dedicated to the RPC (DMOS-RPC). Dose volume histograms (DVH) and mean and maximal doses for organs at risk were calculated and compared among the head and neck techniques. The prescription dose was the same for all techniques, 6.60 Gy/fraction. Beam profile comparison and gamma analysis were used to quantify agreement among the film measurement, the PRESAGE™ measurement, and the calculated dose distribution. Quality assurance of all plans was performed using the ArcCHECK method. Results: VMAT delivered the lowest mean and maximum doses to the organs at risk (spinal cord, parotid) compared with IMRT and 3D-CRT. This dose distribution was verified by absolute point-dose measurements using the thermoluminescent dosimeter (TLD) system. The central axial, sagittal, and coronal planes were evaluated using 2D gamma map criteria (±5%/3 mm), and the passing rates were 99.82% (axial), 99.78% (sagittal), and 98.38% (coronal) for the VMAT plan; the agreement between PRESAGE™ and Pinnacle was better than for the IMRT and 3D-CRT plans, excluding a 7 mm rim at the edge of the dosimeter. Profiles showed good agreement for all plans between film, PRESAGE™, and Pinnacle, and 3D gamma analysis performed for the PTV and OARs showed that VMAT and 3D-CRT gave better agreement than IMRT. Conclusion: VMAT delivered lower mean and maximal doses to organs at risk and better PTV coverage during head and neck radiotherapy. The TLD, EBT film, and PRESAGE™ dosimeters suggest that VMAT is better for the treatment of head and neck cancer than IMRT and 3D-CRT.
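
For reference, the gamma comparison quoted above (±5%/3 mm) is commonly computed with the standard gamma-index formalism; a point passes when γ ≤ 1:

```latex
\gamma(\mathbf{r}_m) \;=\; \min_{\mathbf{r}_c}
\sqrt{\frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^{2}}{\Delta d^{2}}
      \;+\;
      \frac{\bigl(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\bigr)^{2}}{\Delta D^{2}}},
\qquad \Delta d = 3~\text{mm},\ \ \Delta D = 5\%,
```

where D_m and D_c are the measured and calculated dose distributions.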

Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD, PRESAGE™

Procedia PDF Downloads 366
2104 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy

Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai

Abstract:

Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential when structure-function knowledge is insufficient. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, the capabilities of current screening methods have limitations. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label's binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turn-around time is 30 minutes per sample by HPLC, which limits the enzyme improvement achievable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly demanded. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify label-free chemical components specifically from their inherent molecular vibrations. These characteristic vibrational signals are generated from the different vibrational modes of chemical bonds. With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, it can magnify signal levels by several orders of magnitude compared with spontaneous Raman systems and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which is generated from the nicotinamide in NADH, was used to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH. This CARS measurement was consistent with the result of a conventional method, UV-Vis spectroscopy. CARS is expected to find application in high-throughput enzyme screening and to enable more reliable enzyme improvement within a reasonable time.
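
As an illustration of the kind of analysis described (not the authors' code; all numbers and the calibration constant below are hypothetical), an enzymatic rate can be estimated from a calibrated NADH time course:

```python
import numpy as np

# Hypothetical NADH time course: one CARS evaluation every 0.33 s (values illustrative only)
t = np.array([0.0, 0.33, 0.66, 0.99, 1.32, 1.65])   # time, s
signal = np.array([0.0, 0.9, 1.8, 2.6, 3.5, 4.3])   # NADH band intensity at ~1660 cm^-1, a.u.

# Calibration against standards of known NADH concentration: signal = k * [NADH]
k = 1.7                                              # a.u. per mM (assumed)
conc = signal / k                                    # NADH concentration, mM

# Initial reaction rate = slope of a linear fit over the early, linear part of the curve
slope, intercept = np.polyfit(t, conc, 1)
print(f"initial rate ≈ {slope:.3f} mM/s")
```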

Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy

Procedia PDF Downloads 122
2103 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence

Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej

Abstract:

In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Because of the inability to manage and monitor these systems, many HVAC systems show a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors as an alternative means of energy measurement. Most residential buildings and offices are not equipped with advanced sensors, and adding, operating, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is therefore to provide an energy consumption rate based on the available sensors, without any physical energy meters, and to demonstrate the performance of virtual meters in HVAC systems as reliable measurement devices. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are created and integrated with the system's historical data and physical spot measurements, and the actual measurements are used to verify the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully reproduce the energy consumption patterns, and it is concluded with confidence that the results of the virtual meter will be close to those that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the HVAC systems of the building to improve energy management and efficiency. Using a data mining approach, virtual meter data are recorded as historical data, and HVAC system energy consumption prediction is implemented in order to realize substantial energy savings and manage demand and supply effectively. Energy prediction can lead to energy-saving strategies and opens a window toward predictive control aimed at lower energy consumption. To address these challenges, energy prediction can optimize the HVAC system and automate energy consumption management to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency, allowing a quick and efficient response to energy consumption and cost spikes in the energy market.
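
As a simple illustration of the virtual-meter idea (a sketch under assumed sensor names and constant air properties, not the authors' implementation), the sensible thermal power of an AHU coil can be computed from trend-log data that is commonly already available:

```python
def coil_power_kw(airflow_m3_s: float, t_supply_c: float, t_return_c: float,
                  rho: float = 1.2, cp: float = 1.005) -> float:
    """Sensible thermal power delivered by an AHU coil, in kW (sign: negative = cooling).

    airflow_m3_s : supply airflow from the fan/airflow sensor (m^3/s)
    t_supply_c   : supply-air temperature (degC)
    t_return_c   : return/mixed-air temperature (degC)
    rho, cp      : air density (kg/m^3) and specific heat (kJ/kg.K), assumed constant
    """
    m_dot = rho * airflow_m3_s                    # mass flow rate, kg/s
    return m_dot * cp * (t_supply_c - t_return_c)

# Hypothetical trend-log sample: 2.4 m^3/s, 13 degC supply, 24 degC return
print(coil_power_kw(2.4, 13.0, 24.0))             # negative value: cooling delivered
```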

Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction

Procedia PDF Downloads 81
2102 Labor Productivity and Organization Performance in Specialty Trade Construction: The Moderating Effect of Safety

Authors: Shalini Priyadarshini

Abstract:

The notion of performance measurement has held great appeal for industry and research communities alike. This is also true for the construction sector, and some propose that performance measurement and productivity analysis are two separate management functions, where productivity is a subset of performance, the latter requiring comprehensive analysis of comparable factors. Labor productivity is considered one of the best indicators of production efficiency. The construction industry continues to account for a disproportionate share of injuries and illnesses despite adopting several technological and organizational interventions that promote worker safety. Specialty trades contractors typically complete a large fraction of the work on any construction project, but an insufficient body of work exists that addresses subcontractor safety and productivity issues. A literature review has revealed the possibility of a relationship between productivity, safety, and other factors and their links to project, organizational, task, and industry performance. This research posits that there is an association between productivity and performance at the project as well as the organizational level in the construction industry. Moreover, prior exploration of the importance of safety within the performance-productivity framework has been anecdotal at best. Using a structured questionnaire survey and organization- and project-level data, this study, which combines cross-sectional and longitudinal research designs, addresses the identified research gap and models the relationship between productivity, safety, and performance with a focus on specialty trades in the construction sector. Statistical analysis is used to establish correlations between the variables of interest. This research identifies the need for developing and maintaining productivity and safety logs for smaller businesses. Future studies can design and develop research to establish causal relationships between these variables.

Keywords: construction, safety, productivity, performance, specialty trades

Procedia PDF Downloads 255
2101 Application of Electro-Optical Hybrid Cables in Horizontal Well Production Logging

Authors: Daofan Guo, Dong Yang

Abstract:

For decades, well logging with coiled tubing has relied solely on surface data such as pump pressure, wellhead pressure, depth counter, and weight indicator readings. While this data has served the oil industry well, modern smart logging utilizes real-time downhole information, which automatically increases operational efficiency and optimizes intervention quality. For example, downhole pressure, temperature, and depth measurement data can be transmitted through an electro-optical hybrid cable in the coiled tubing to surface operators in real time. This paper mainly introduces the unique structural features and various applications of electro-optical hybrid cables deployed downhole with the help of coiled tubing technology. The fiber optic elements in the cable enable optical communications and distributed measurements, such as distributed temperature and acoustic sensing. The electrical elements provide continuous surface power for downhole tools, eliminating the limitations of traditional batteries, such as temperature, operating time, and safety concerns. The electrical elements also enable cable telemetry operation of downhole tools. Both power supply and signal transmission are integrated into one electro-optical hybrid cable: downhole information is captured by downhole electrical sensors and distributed optical sensing technologies and then travels up through an optical fiber to the surface, which greatly improves the accuracy of measurement data transmission.

Keywords: electro-optical hybrid cable, underground photoelectric composite cable, seismic cable, coiled tubing, real-time monitoring

Procedia PDF Downloads 117
2100 Factors Affecting Special Core Analysis Resistivity Parameters

Authors: Hassan Sbiga

Abstract:

Laboratory measurements were undertaken on core samples selected from three different fields (A, B, and C) of the Nubian Sandstone Formation of the central graben reservoirs in Libya. These measurements were conducted in order to determine the factors which affect resistivity parameters and to investigate the effect of rock heterogeneity and wettability on these parameters. This included determining the saturation exponent (n) in the laboratory at two stages: the first before wettability measurements were conducted on the samples, and the second after the wettability measurements, in order to find any effect on the saturation exponent. Another objective of this work was to quantify experimentally the pore and porosity types (macro- and micro-porosity) which have an effect on the electrical properties, by integrating capillary pressure curves with other routine and special core analyses. These experiments were made for the first time to obtain a relation between pore size distribution and the saturation exponent n. Changes were observed in the formation resistivity factor and cementation exponent due to ambient conditions and changes of overburden pressure. The cementation exponent also decreased from GHE-5 to GHE-8. Changes were also observed in the saturation exponent (n) and water saturation (Sw) before and after the wettability measurement. Samples with an oil-wet tendency have higher irreducible brine saturation and higher Archie saturation exponent values than samples with a uniform water-wet surface. The experimental results indicate that there is a good relation between resistivity and pore type, depending on the pore size. When oil begins to penetrate micro-pore systems in measurements of resistivity index versus brine saturation (after the wettability measurement), a significant change in the slope of the resistivity index relationship occurs.
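
For context, the resistivity parameters discussed above are defined by the standard Archie relations (notation as commonly used, not specific to this study):

```latex
F \;=\; \frac{R_o}{R_w} \;=\; \frac{a}{\phi^{\,m}},
\qquad
I \;=\; \frac{R_t}{R_o} \;=\; S_w^{-n},
```

where F is the formation resistivity factor, m the cementation exponent, I the resistivity index, S_w the brine saturation, and n the saturation exponent; a change of slope in log I versus log S_w therefore corresponds to a change in n.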

Keywords: part of thesis, cementation, wettability, resistivity

Procedia PDF Downloads 225
2099 Impact of Human Resources Accounting on Employees' Performance in Organization

Authors: Hamid Saremi, Shida Hanafi

Abstract:

In an age of technology and economics, human capital plays an important and pivotal role in the organization, and human resources accounting offers a broad perspective on the organization's key resource, i.e., its human resources. Human resources accounting is a young branch of accounting that deals with a range of policies and measures related to various aspects of human resources. It emphasizes that an organization's most important asset is its human resources and that human resource management is the key to success in an organization; to serve this purpose, human resources data must be reviewed and evaluated with accounting knowledge, based on empirical studies and on methods for measuring and reporting human resources accounting information. Undoubtedly, human resource management cannot be carried out, nor decisions taken, without information, and human resources accounting is a practical way to inform decision makers who are committed to harnessing human resources. It applies accounting principles in the organization and rests on basic research into the effect of human resources accounting information on employees' personal performance. In human resources accounting, the analysis, criteria, and valuation of the cost of manpower treat the workforce as the main resource of each institution. The protection of human resources is a process that, according to human resources accounting, serves the organization's profitability. In fact, this type of accounting can be regarded as a major source for measuring the trends of costs and the valuation of human resources in each institution. What is the economic value of such assets? What amount of expenditure on the education and training of professionals should be valued in an asset account? What amount of funds spent should be considered as a lost opportunity cost? In this paper, based on the literature of human resources accounting, we study human resources and its objectives, the importance of human resource valuation for employee performance, and methods of reporting human resources according to different models.

Keywords: human resources, human resources accounting, human capital, human resource management, valuation and cost of human resources, employees, performance, organization

Procedia PDF Downloads 523
2098 Craving Intensity Measurements in Opiate Addicts to Objectify the Opioid Substitution Therapy Dose and Reduce the Relapse Risk

Authors: Igna Brajevic-Gizdic, Magda Pletikosa Pavić

Abstract:

Introduction: Research in opiate addiction increasingly indicates the importance of substitution therapy in opiate addicts. Opiate addiction is a chronic relapsing disease that includes craving as a criterion. Craving, defined as a strong desire or an excessive need to take a substance, has been considered a predictor of relapse. The study aimed to measure the intensity of craving using the VAS (visual analog scale) in opioid addicts taking opioid substitution therapy (OST). Method: The total sample comprised 30 participants in outpatient treatment. Two groups of opiate addicts were considered: methadone-maintained and buprenorphine-maintained addicts. The participants completed the survey questionnaire during outpatient treatment. Results: The results indicated high levels of craving in patients during OST treatment, which is considered an important destabilization factor in abstinence; thus, the methadone/buprenorphine dose should be reviewed. Conclusion: These findings provide an objective basis for determining methadone/buprenorphine dosage and therapy options. Underdosing of OST can put patients at high risk of relapse, resulting in high levels of craving. Thus, when determining the therapeutic dose of OST, it is crucial to consider patients' cravings. This would achieve stabilization more quickly and avoid relapse during abstinence. Subjective physician assessment and patients' statements are currently the main criteria for determining OST dosage. Future studies should use larger sample sizes and focus on the importance of craving intensity measurement during OST to objectify methadone/buprenorphine dosage.

Keywords: buprenorphine, craving, heroin addicts, methadone, OST

Procedia PDF Downloads 66
2097 Window Analysis and Malmquist Index for Assessing Efficiency and Productivity Growth in a Pharmaceutical Industry

Authors: Abbas Al-Refaie, Ruba Najdawi, Nour Bata, Mohammad D. AL-Tahat

Abstract:

The pharmaceutical industry is an important component of health care systems throughout the world. Measurement of a production unit's performance is crucial in determining whether it has achieved its objectives or not. This paper applies data envelopment analysis (DEA) window analysis to assess the efficiencies of two packaging lines, Allfill (new) and DP6, in the penicillin plant of a Jordanian medical company in 2010. The CCR and BCC models are used to estimate technical efficiency, pure technical efficiency, and scale efficiency. Further, the Malmquist productivity index is computed to assess productivity growth relative to a reference technology. Two primary issues are addressed in the computation of Malmquist indices of productivity growth. The first is the measurement of productivity change over the period, while the second is the decomposition of changes in productivity into what are generally referred to as a ‘catching-up’ effect (efficiency change) and a ‘frontier shift’ effect (technological change). Results showed that the DP6 line outperforms the Allfill line in technical and pure technical efficiency, whereas the Allfill line outperforms the DP6 line in scale efficiency. The obtained efficiency values can guide production managers in making effective decisions related to operation, management, and plant size. Moreover, both machines exhibit clear fluctuations in technological change, which is the main reason for the positive total factor productivity change; that is, installing a new Allfill production line can be of great benefit in increasing productivity. In conclusion, DEA window analysis combined with the Malmquist index provides supportive measures for assessing efficiency and productivity in the pharmaceutical industry.
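
For reference, the output-based Malmquist index and its usual decomposition into efficiency change (catching-up) and technological change (frontier shift), using distance functions D estimated by DEA, can be written as:

```latex
M_o\!\left(x^{t+1},y^{t+1},x^{t},y^{t}\right)
= \underbrace{\frac{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D^{t}\!\left(x^{t},y^{t}\right)}}_{\text{efficiency change}}
\;\times\;
\underbrace{\left[
\frac{D^{t}\!\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
\cdot
\frac{D^{t}\!\left(x^{t},y^{t}\right)}{D^{t+1}\!\left(x^{t},y^{t}\right)}
\right]^{1/2}}_{\text{technological change}}
```

with M_o > 1 indicating productivity growth between periods t and t+1.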

Keywords: window analysis, Malmquist index, efficiency, productivity

Procedia PDF Downloads 591
2096 The Relationships between Market Orientation and Competitiveness of Companies in Banking Sector

Authors: Patrik Jangl, Milan Mikuláštík

Abstract:

The objective of the paper is to measure and compare the market orientation of Swiss and Czech banks, as well as to examine statistically the degree of influence it has on the competitiveness of these institutions. The analysis of market orientation is based on the collection, analysis, and correct interpretation of data. A descriptive analysis of market orientation describes the current situation. Research on the relationship between competitiveness and market orientation in the sector of large international banks is proposed, with the expectation that a strong relationship exists. In part, the work also serves to reconfirm the suitability of classic methodologies for measuring banks' market orientation. Two types of data were gathered: first, by measuring the subjectively perceived market orientation of a company and, second, by quantifying its competitiveness. All data were collected from a sample of small, mid-sized, and large banks, using secondary numerical data from Bureau van Dijk's international BANKSCOPE financial database. The statistical analysis led to the following results. Assuming classic market orientation measures to be scientifically justified, Czech banks are statistically less market-oriented than Swiss banks. Secondly, among small Swiss banks, which are not broadly internationally active, only a weak relationship exists between market orientation measures and market-share-based competitiveness measures. Thirdly, among all Swiss banks, a strong relationship exists between market orientation measures and market-share-based competitiveness measures. These results imply the existence of a strong relationship for this measure in the sector of large international banks. A strong statistical relationship has also been shown to exist between market orientation measures and the equity/total assets ratio in Switzerland.

Keywords: market orientation, competitiveness, marketing strategy, measurement of market orientation, relation between market orientation and competitiveness, banking sector

Procedia PDF Downloads 450
2095 Compact Dual-band 4-MIMO Antenna Elements for 5G Mobile Applications

Authors: Fayad Ghawbar

Abstract:

Multiple Input Multiple Output (MIMO) systems are essential in 5G wireless communication to enhance channel capacity and provide high data rates, resulting in a need for dual polarization in the vertical and horizontal planes. Furthermore, size reduction is critical in a MIMO system to deploy more antenna elements, requiring a compact, low-profile design. A compact dual-band 4-MIMO antenna system with pattern and polarization diversity is presented in this paper. The proposed single-antenna structure is designed using two antenna layers, with a C shape in the front layer and a partial slot with a U-shaped cut in the ground to enhance isolation. The single antenna is printed on an FR4 dielectric substrate with an overall size of 18 mm × 18 mm × 1.6 mm. The 4-MIMO antenna elements were printed orthogonally on an FR4 substrate with dimensions of 36 × 36 × 1.6 mm³ and zero edge-to-edge separation distance. The proposed compact 4-MIMO antenna elements resonate at 3.4-3.6 GHz and 4.8-5 GHz. The measured and simulated S-parameters agree well, especially in the lower band, with a slight frequency shift of the measured results in the upper band due to fabrication imperfections. The proposed design shows isolation better than 15 dB and 22 dB across the 4-MIMO elements. The MIMO diversity performance has been evaluated in terms of efficiency, ECC, DG, TARC, and CCL. The total and radiation efficiencies were above 50% in both frequency bands. The ECC values were lower than 0.10, and the DG results were about 9.95 dB for all antenna elements. TARC results exhibited values lower than 0 dB, and lower than -25 dB in all MIMO elements at the dual bands. Moreover, the channel capacity losses in the MIMO system, characterized by the CCL, were lower than 0.4 bits/s/Hz.
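
The diversity metrics quoted above are commonly approximated from the measured S-parameters of an element pair (under a lossless-antenna assumption); for elements 1 and 2:

```latex
\rho_e \;=\;
\frac{\bigl|S_{11}^{*}S_{12} + S_{21}^{*}S_{22}\bigr|^{2}}
     {\bigl(1-|S_{11}|^{2}-|S_{21}|^{2}\bigr)\bigl(1-|S_{22}|^{2}-|S_{12}|^{2}\bigr)},
\qquad
\mathrm{DG} \;=\; 10\sqrt{1-\rho_e^{2}}\ \ \text{dB},
```

which is consistent with the reported ECC < 0.10 and DG ≈ 9.95 dB.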

Keywords: compact antennas, MIMO antenna system, 5G communication, dual band, ECC, DG, TARC

Procedia PDF Downloads 126
2094 Correlation Results Based on Magnetic Susceptibility Measurements by in-situ and Ex-Situ Measurements as Indicators of Environmental Changes Due to the Fertilizer Industry

Authors: Nurin Amalina Widityani, Adinda Syifa Azhari, Twin Aji Kusumagiani, Eleonora Agustine

Abstract:

Fertilizer industry activities contribute to environmental changes, and such changes have become a significant problem in this era of globalization. The parameters that can serve as criteria to identify changes in the environment can be drawn from physics, chemistry, and biology. One aspect that can be assessed quickly and efficiently to describe environmental change is physics, in particular the value of the magnetic susceptibility (χ). The rock magnetism method can be used as a proxy indicator of environmental change based on the magnetic susceptibility value; it relies on magnetic susceptibility studies to measure and classify the degree of pollutant elements that cause changes in the environment. This research was conducted in the area around the fertilizer plant, with five coring points on each track, each coring point reaching a depth of 15 cm. Magnetic susceptibility measurements were performed both in situ and ex situ. The in-situ measurements were carried out directly with the SM30 instrument by placing it on the soil surface at each measurement point and reading off the magnetic susceptibility value. The ex-situ measurements were performed in the laboratory with the Bartington MS2B susceptibility meter on coring samples taken every 5 cm. The in-situ measurements show that the surface magnetic susceptibility varies, with the lowest value (-0.81) at the second and fifth points and the highest value (0.345) at the third point. The ex-situ measurements reveal the variation of magnetic susceptibility at each depth of the coring. At a depth of 0-5 cm, the highest XLF = 494.8 (×10⁻⁸ m³/kg) is at the third point, while the lowest XLF = 187.1 (×10⁻⁸ m³/kg) is at the first point. At a depth of 6-10 cm, the highest XLF, 832.7 (×10⁻⁸ m³/kg), is at the second point, while the lowest, 211 (×10⁻⁸ m³/kg), is at the first point. At a depth of 11-15 cm, the highest XLF = 857.7 (×10⁻⁸ m³/kg) is at the second point, whereas the lowest XLF = 83.3 (×10⁻⁸ m³/kg) is at the fifth point. Based on the in-situ and ex-situ measurements, the highest magnetic susceptibility values of the surface samples are at the third point.

Keywords: magnetic susceptibility, fertilizer plant, Bartington MS2B, SM30

Procedia PDF Downloads 322
2093 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy

Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon

Abstract:

Purpose: Inorganic scintillating dosimetry is a recent and promising technique for solving several dosimetric issues and providing quality assurance in radiation therapy. Despite several advantages, the major issue with scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment, to ensure a Cerenkov-free signal and the best match between delivered and prescribed doses during treatment. Methods: A simple, small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2×10⁻⁶ mm³ was developed. A prototype of the dose verification system was built around the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). All measurements were performed in IBA™ water tank phantoms following the international Technical Reports Series recommendations (TRS 381) for radiotherapy and the TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters: PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is presented using a reference microdiamond dosimeter, Monte Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. The detector provides a fully linear response with dose over the 4 cGy to 800 cGy range, independently of the field size selected, from 5 × 5 cm² down to 0.5 × 0.5 cm². Excellent repeatability (0.2% variation from average) and day-to-day reproducibility (0.3% variation) were observed. Measurements demonstrated that the ISD response is linear with dose rate (R² = 1) from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm for the different small-field irradiations. Field profiles as small as 0.5 × 0.5 cm² have been characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02%, with a very low convolution effect thanks to the small sensitive volume. Finally, for brachytherapy, a comparison with MC simulations shows that, taking energy dependence into account, the measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The proposed scintillating detector shows no Cerenkov radiation and performs efficiently for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can be advanced to validation in direct clinical investigations, such as appropriate dose verification and quality control in the treatment planning system (TPS).

Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect

Procedia PDF Downloads 161
2092 International Financial Reporting Standards and the Quality of Banks Financial Statement Information: Evidence from an Emerging Market-Nigeria

Authors: Ugbede Onalo, Mohd Lizam, Ahmad Kaseri, Otache Innocent

Abstract:

Given the paucity of studies on IFRS adoption and the quality of banks' accounting information, particularly in emerging economies, this study investigates whether Nigeria's decision to adopt IFRS, beginning from 1 January 2012, is associated with high-quality accounting measures. Consistent with prior literature, this study measures the quality of financial statement information using earnings management, timeliness of loss recognition, and value relevance. A total of twenty Nigerian banks covering a period of six years (2008-2013), divided equally into a three-year pre-adoption period (2008, 2009, 2010) and a three-year post-adoption period (2011, 2012, 2013), were investigated. Following prior studies, eight models in all were employed to investigate the earnings management, timeliness of loss recognition, and value relevance of Nigerian banks' accounting quality under the different reporting regimes. The results suggest that IFRS adoption is associated with minimal earnings management, timely recognition of losses, and high value relevance of accounting information. In summary, IFRS adoption engenders higher-quality bank financial statement information compared with local GAAP. Hence, this study recommends the global adoption of IFRS and that Nigerian banks embrace good corporate governance practices.
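
As background for the accrual-based keywords below, discretionary accruals are typically estimated as the residual of a (modified) Jones regression of total accruals; a generic specification, not necessarily the exact one used in the paper, is:

```latex
\frac{TA_{it}}{A_{i,t-1}}
= \alpha_1 \frac{1}{A_{i,t-1}}
+ \alpha_2 \frac{\Delta REV_{it} - \Delta REC_{it}}{A_{i,t-1}}
+ \alpha_3 \frac{PPE_{it}}{A_{i,t-1}}
+ \varepsilon_{it},
\qquad
DA_{it} = \hat{\varepsilon}_{it},
```

where TA is total accruals, A lagged total assets, ΔREV and ΔREC the changes in revenue and receivables, PPE gross property, plant and equipment, and DA the discretionary accruals.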

Keywords: IFRS, SAS, quality of accounting information, earnings measurement, discretionary accruals, non-discretionary accruals, total accruals, Jones model, timeliness of loss recognition, value relevance

Procedia PDF Downloads 448
2091 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement

Authors: Hu Zhenxing, Gao Jianxin

Abstract:

Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo matching, and distortion. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled within a limited range. The distortion, however, is non-linear, particularly in a complex image acquisition system, so the distortion correction should be considered carefully. Moreover, the distortion function is difficult to formulate with conventional models in complex image acquisition systems where microscopes and other complex lenses are involved, and the errors of the distortion correction propagate to the reconstructed 3D coordinates. To address this problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane free of distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior step to convert the distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure for applying this approach to stereo-based DIC is presented. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
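
A generic tensor-product B-spline mapping of the kind described (the exact parameterization used in the paper may differ) converts distorted sensor coordinates (x_d, y_d) to ideal pin-hole coordinates (x_u, y_u):

```latex
x_u(x_d,y_d) \;=\; \sum_{i}\sum_{j} N_{i,p}(x_d)\,N_{j,q}(y_d)\,c^{x}_{ij},
\qquad
y_u(x_d,y_d) \;=\; \sum_{i}\sum_{j} N_{i,p}(x_d)\,N_{j,q}(y_d)\,c^{y}_{ij},
```

where N are B-spline basis functions of degrees p and q, and the control coefficients c are fitted by least squares from calibration-target correspondences.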

Keywords: distortion, stereo-based digital image correlation, b-spline, 3D, 2D

Procedia PDF Downloads 478
2090 Compact LWIR Borescope Sensor for Thermal Imaging of 2D Surface Temperature in Gas-Turbine Engines

Authors: Andy Zhang, Awnik Roy, Trevor B. Chen, Bibik Oleksandar, Subodh Adhikari, Paul S. Hsu

Abstract:

The durability of a combustor in gas-turbine engines is a strong function of its component temperatures and requires good control of these temperatures. Since the temperature of the combustion gases frequently exceeds the melting point of the combustion liner walls, an efficient air-cooling system with optimized cooling air flow rates is critically important to prolong the lifetime of the liner walls. To determine the effectiveness of the air-cooling system, accurate two-dimensional (2D) surface temperature measurement of the combustor liner walls is crucial for advanced engine development. Traditional diagnostic techniques for temperature measurement in this application include thermocouples, thermal wall paints, pyrometry, and phosphors. They have shown several disadvantages, including being intrusive and affecting local flame/flow dynamics, potential flame quenching, physical damage to instrumentation due to the harsh environment inside the combustor, and strong optical interference from combustion emission in the UV to mid-IR wavelengths. To overcome these drawbacks, a compact, small borescope long-wave-infrared (LWIR) sensor is developed to achieve high-spatial-resolution, high-fidelity thermal imaging of the 2D surface temperature in gas-turbine engines, providing the desired engine component temperature distribution. The compact LWIR borescope sensor makes it feasible to improve the durability of combustors in gas-turbine engines and, furthermore, to develop more advanced gas-turbine engines.

Keywords: borescope, engine, long-wave-infrared, sensor

Procedia PDF Downloads 103
2089 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study

Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier

Abstract:

An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor subjected to day-night-cycle heat exchanges in the southwestern part of France and to identify the heat transfer phenomena that take place in both processes, charge and discharge. The main features of this PEH relevant to this study are the following: (i) a non-active slab covering the major part of the entire floor surface of the house, which includes a 68 mm thick concrete layer as its upper layer; (ii) solar window shades located on the north and south facades, along with a large eave facing south; (iii) large double-glazed windows covering the majority of the south facade; (iv) a natural ventilation system (NVS) composed of ten automated openings of different dimensions: four on the south facade, four on the north facade, and two on the shed roof (north-oriented). To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten measurement poles (MP) were distributed over the concrete floor surface. Each MP represented a measurement zone, where air and surface temperatures and convection and radiation heat fluxes were measured. The airspeed was measured at only two points over the slab surface, near the south facade. To identify the heat transfer phenomena that take part in the charge and discharge processes, relevant dimensionless parameters were used along with statistical analysis, and the heat transfer phenomena were identified based on this analysis. The processed experimental data showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat loss, negative values). During the charge period, radiation heat exchange on the floor surface was significantly higher than convection, whereas during the discharge period convection heat exchange was significantly higher than radiation. Spatially, both convection and radiation heat exchanges are higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations were determined using a linear regression model, relating the Nusselt number to relevant parameters: the Peclet, Rayleigh, and Richardson numbers. This led to the determination of the convective heat transfer coefficient and its comparison with the convective heat transfer coefficient resulting from the measurements. The results show that forced and natural convection coexist during the discharge period; more accurate correlations were found with the Peclet number than with the Rayleigh number, which may suggest that forced convection is stronger than natural convection. Yet the airspeed levels encountered suggest that natural convection, rather than forced convection, should take place, although the Richardson number values encountered indicate otherwise. During the charge period, the air velocity levels might indicate that no air motion occurs, which might lead to heat transfer by diffusion instead of convection.
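
For clarity, the dimensionless groups used in the correlations above are defined in the usual way as:

```latex
\mathrm{Nu}=\frac{hL}{k},\qquad
\mathrm{Pe}=\mathrm{Re}\,\mathrm{Pr}=\frac{UL}{\alpha},\qquad
\mathrm{Ra}=\frac{g\beta\,\Delta T\,L^{3}}{\nu\alpha},\qquad
\mathrm{Ri}=\frac{\mathrm{Gr}}{\mathrm{Re}^{2}}=\frac{g\beta\,\Delta T\,L}{U^{2}},
```

so that Ri >> 1 points to natural convection dominating and Ri << 1 to forced convection.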

Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house

Procedia PDF Downloads 397
2088 About Some Results of the Determination of Alcohol in Moroccan Gasoline-Alcohol Mixtures

Authors: Mahacine Amrani

Abstract:

A simple and rapid method for the determination of alcohol in gasoline-alcohol mixtures using density measurements is described. The method can determine a minimum of 1% of alcohol by volume, and its precision is ± 3%. The method is most useful for field tests in the quality assessment of alcohol-blended fuels.
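
As a rough illustration only, assuming ideal, volume-additive mixing (real gasoline-alcohol blends show some excess volume, so this is an approximation), the alcohol volume fraction φ follows from the measured blend density:

```latex
\rho_{mix} \;\approx\; \varphi\,\rho_{alc} + (1-\varphi)\,\rho_{gas}
\quad\Longrightarrow\quad
\varphi \;\approx\; \frac{\rho_{mix}-\rho_{gas}}{\rho_{alc}-\rho_{gas}},
```

which shows how a small change in alcohol content translates into a measurable density change.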

Keywords: gasoline-alcohol, mixture, alcohol determination, density, measurement, Morocco

Procedia PDF Downloads 301
2087 In-Plume H₂O, CO₂, H₂S and SO₂ in the Fumarolic Field of La Fossa Cone (Vulcano Island, Aeolian Archipelago)

Authors: Cinzia Federico, Gaetano Giudice, Salvatore Inguaggiato, Marco Liuzzo, Maria Pedone, Fabio Vita, Christoph Kern, Leonardo La Pica, Giovannella Pecoraino, Lorenzo Calderone, Vincenzo Francofonte

Abstract:

The periods of increased fumarolic activity at La Fossa volcano have been characterized, since the early 1980s, by changes in the gas chemistry and in the output rate of the fumaroles. Except for the direct measurements of the steam output from fumaroles performed from 1983 to 1995, the mass output of the individual gas species has recently been measured, with various methods, only sporadically or for short periods. Since 2008, a scanning DOAS system has been operating in the Palizzi area for the remote measurement of the in-plume SO₂ flux. On these grounds, a cross-comparison of different methods for the in situ measurement of the output rates of the different gas species is needed. In 2015, two field campaigns were carried out, aimed at: 1. mapping the concentrations of CO₂, H₂S, and SO₂ in the fumarolic plume at 1 m from the surface, using specific open-path tunable diode lasers (GasFinder, Boreal Europe Ltd.) and an active DOAS for SO₂, respectively; these measurements, coupled with simultaneous ultrasonic wind speed and meteorological data, were processed to obtain the dispersion map and the output rate of each species over the whole fumarolic field; 2. mapping the concentrations of CO₂, H₂S, SO₂, and H₂O in the fumarolic plume at 0.5 m from the soil, using an integrated system including IR spectrometers and specific electrochemical sensors; this provided the concentration ratios of the analysed gas species and their distribution in the fumarolic field; 3. in-fumarole sampling of vapour and measurement of the steam output, to validate the remote measurements. The dispersion map of CO₂, obtained from the tunable laser measurements, shows a maximum CO₂ concentration at 1 m from the soil of 1000 ppmv along the rim and 1800 ppmv on the inner slopes. The largest contribution derives from a wide fumarole of the inner slope, despite its present outlet temperature of 230°C, almost 200°C lower than that measured at the rim fumaroles. In fact, fumaroles on the inner slopes are among those emitting the largest amount of magmatic vapour and, during the 1989-1991 crisis, reached a temperature of 690°C. The estimated CO₂ and H₂S fluxes are 400 t/d and 4.4 t/d, respectively. The coeval SO₂ flux, measured by the scanning DOAS system, is 9 ± 1 t/d. The steam output, recomputed from the CO₂ flux measurements, is about 2000 t/d. The various direct and remote methods (points 1-3) produced coherent results, which encourages the use of daily and automatic DOAS SO₂ data, coupled with periodic in-plume measurements of the different acidic gases, to obtain the total mass rates.
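
In general terms (the notation here is generic, not specific to this study), the gas output rates quoted above are obtained by integrating the measured in-plume concentrations over a cross-section of the plume and multiplying by the wind speed:

```latex
\Phi \;=\; u_{wind}\,\int_{A} C(y,z)\;\mathrm{d}y\,\mathrm{d}z,
```

with C the gas concentration over the plume cross-section A; fluxes of companion species can then also be derived from the measured concentration ratios multiplied by the flux of a reference species such as SO₂.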

Keywords: DOAS, fumaroles, plume, tunable laser

Procedia PDF Downloads 376
2086 Measurement of Innovation Performance

Authors: M. Chobotová, Ž. Rylková

Abstract:

Times of change, associated with globalization, tougher competition, changes in market structures, and economic downturn, force companies to think about their competitive advantages. These changes can bring a company a competitive advantage that helps improve its competitive position in the market. European Union policy focuses on fast-growing innovative companies that respond quickly to market demands and consequently increase their competitiveness. To meet these objectives, companies need the right conditions and the support of their state.

Keywords: innovation, performance, measurement metrics, indices

Procedia PDF Downloads 357
2085 Efficient Study of Substrate Integrated Waveguide Devices

Authors: J. Hajri, H. Hrizi, N. Sboui, H. Baudrand

Abstract:

This paper presents a study of Substrate Integrated Waveguide (SIW) circuits using a rigorous and fast original approach based on an iterative process (WCIP). The proposed theoretical study is validated by the simulation of two different examples of SIW circuits. The obtained results are in good agreement with measurements and with the HFSS software.

Keywords: convergence study, HFSS, modal decomposition, SIW circuits, WCIP method

Procedia PDF Downloads 485
2084 Imaging 255nm Tungsten Thin Film Adhesion with Picosecond Ultrasonics

Authors: A. Abbas, X. Tridon, J. Michelon

Abstract:

In the electronics and photovoltaic industries, components are made from wafers, which are stacks of thin film layers from a few nanometers to several micrometers in thickness. Early evaluation of the bonding quality between the different layers of a wafer is one of the challenges of these industries, in order to avoid malfunction of their final products. Traditional pump-probe experiments, developed in the 1970s, give a partial solution to this problem but with a non-negligible drawback: on one hand, these setups can generate and detect ultra-high ultrasound frequencies, which can be used to evaluate the adhesion quality of wafer layers; on the other hand, because of the quite long acquisition time they need to perform one measurement, these setups remain confined to point measurements for evaluating global sample quality. This can lead to misinterpretation of the sample quality parameters, especially in the case of inhomogeneous samples. Asynchronous Optical Sampling (ASOPS) systems can perform sample characterization with picosecond acoustics up to 10⁶ times faster than traditional pump-probe setups. This allows picosecond ultrasonics to unlock acoustic imaging at the nanometric scale and to detect inhomogeneities in the sample's mechanical properties. This is illustrated by presenting an image of the measured acoustic reflection coefficients obtained by mapping, with an ASOPS setup, a 255 nm thin-film tungsten layer deposited on a silicon substrate. The interpretation of the reflection coefficient in terms of bonding quality is also presented, and the origin of zones exhibiting good and bad bonding quality is discussed.
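
As background, the ideal normal-incidence acoustic reflection coefficient at a perfectly bonded interface depends only on the acoustic impedances Z = ρc of the two layers; measured deviations from this ideal value are what is interpreted as imperfect adhesion:

```latex
r_{12} \;=\; \frac{Z_2 - Z_1}{Z_2 + Z_1},
\qquad Z_i = \rho_i\, c_i,
```

where ρ is the density and c the longitudinal sound velocity of each layer.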

Keywords: adhesion, picosecond ultrasonics, pump-probe, thin film

Procedia PDF Downloads 141
2083 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model

Authors: Didier Auroux, Vladimir Groza

Abstract:

This work is part of the STEEP Marie-Curie ITN project and focuses on the identification of the unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose the identification of the model parameters by minimizing a cost function measuring the difference between the experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to find the unknowns of the AWJM model and the optimal values that can be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strictly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases and to consider different types of model and measurement errors, as well as a 3D time-dependent model with variations of the jet feed speed.
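
A generic form of the regularized identification problem described above, with hypothetical notation, is the minimization of

```latex
J(\theta) \;=\; \tfrac{1}{2}\,\bigl\lVert u(\theta) - u^{\mathrm{obs}} \bigr\rVert^{2}
\;+\; \tfrac{\alpha}{2}\,\bigl\lVert \theta - \theta_{0} \bigr\rVert^{2},
```

where u(θ) is the AWJM model solution (the predicted trench or surface profile), u^obs the measured profile, and α the regularization weight; the gradient of J is obtained from the adjoint state associated with the Lagrangian, here computed with automatic differentiation (TAPENADE).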

Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization

Procedia PDF Downloads 294
2082 A Two Arm Double Parallel Randomized Controlled Trial of the Effects of Health Education Intervention on Insecticide Treated Nets Use and Its Practices among Pregnant Women Attending Antenatal Clinic: Study Protocol

Authors: Opara Monica, Suriani Ismail, Ahmad Iqmer Nashriq Mohd Nazan

Abstract:

The true magnitude of the mortality and morbidity attributable to malaria worldwide is, at best, a scientific guess, although it is not disputable that the greatest burden is in sub-Saharan Africa. Those at highest risk are children younger than 5 years and pregnant women, particularly primigravidae. Nationally, malaria remains the third leading cause of death and is still considered a major public health problem. Therefore, this study aims to assess the effectiveness of a health education intervention on insecticide-treated net use and its practices among pregnant women attending antenatal clinics. Materials and Methods: This study will be a two-arm, double parallel, blinded randomized controlled trial conducted in three stages. The first stage will develop a health belief model (HBM)-based program, while in the second stage, pregnant women will be recruited, assessed (baseline data), randomized into the two arms of the study, and followed up for six months. The third stage will evaluate the impact of the intervention on the HBM and disseminate the findings. Data will be collected using a structured questionnaire containing validated tools. The main outcome measurement will be the treatment effect based on the HBM, and data will be analysed using SPSS, version 22. Discussion: The study will contribute to the existing knowledge on hospital-based care programs for pregnant women in developing countries, where the literature is scanty. It will generally give insight into the importance of HBM measurement in interventional studies on malaria and other related infectious diseases in this setting.

Keywords: malaria, health education, insecticide-treated nets, sub-Saharan Africa

Procedia PDF Downloads 96
2081 A New Criterion for Removal of Fouling Deposit

Authors: D. Bäcker, H. Chaves

Abstract:

The key to improving the cleaning of fouled surfaces is understanding the mechanism by which the deposit separates from the surface. The authors present the basic principles for characterizing the separation process and introduce a corresponding criterion. The developed criterion is a measure of the moment at which the deposit separates from the surface. For this purpose, a new measurement technique is described.

Keywords: cleaning, fouling, separation, criterion

Procedia PDF Downloads 437
2080 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis

Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei

Abstract:

Single cell RNA sequencing (scRNA-seq) is one of the effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured with Euclidean distance or its derivatives. However, the scRNA-seq process follows a multivariate Bernoulli event model, so we hypothesized that quantifying the divergence between cells with relative entropy would be more efficient than using Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance, and relative entropy using scRNA-seq data from the early, medial, and late stages of limb development generated in our lab. Relative entropy outperformed the other methods according to a cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by replacing Euclidean distance with Kullback–Leibler divergence as the measure of divergence between cells. The results showed that KL-SNE was more effective than t-SNE at dissecting cell heterogeneity, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together by KL-SNE but not by t-SNE. Surprisingly, early-stage cells were surrounded by medial-stage cells in the KL-SNE embedding, whereas medial-stage cells neighbored late-stage cells in the t-SNE embedding. These results parallel the heatmap, which showed that cells in the medial stage were more heterogeneous than cells in the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which was also verified with scRNA-seq data from another study on human embryo development. Relative entropy is therefore also an effective way to convert a non-Gaussian distribution to a Gaussian distribution and to facilitate subsequent statistical processing. Thus, relative entropy is potentially a better way to determine the divergence of cells in scRNA-seq data analysis.
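The Python sketch below shows how relative entropy (Kullback–Leibler divergence) between two cells can be computed from their expression profiles and compared with Euclidean distance. The gene counts and the pseudocount used to avoid division by zero are illustrative assumptions, not data from the study.

```python
import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-10):
    """Relative entropy D_KL(P || Q) between two cells' expression profiles,
    after normalizing raw counts to probability distributions."""
    p = np.clip(p_counts / p_counts.sum(), eps, None)
    q = np.clip(q_counts / q_counts.sum(), eps, None)
    return float(np.sum(p * np.log(p / q)))

def euclidean(p_counts, q_counts):
    return float(np.linalg.norm(p_counts - q_counts))

# Hypothetical expression vectors for two cells over five genes.
cell_a = np.array([120.0, 3.0, 45.0, 0.0, 10.0])
cell_b = np.array([100.0, 5.0, 50.0, 1.0, 12.0])

print("KL divergence:", kl_divergence(cell_a, cell_b))
print("Euclidean distance:", euclidean(cell_a, cell_b))
```

One practical consideration (not specified in the abstract) is that KL divergence is asymmetric, so a symmetrized form is often used when a distance-like measure is required for an embedding.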

Keywords: single cell RNA sequence, similarity measurement, relative entropy, KL-SNE, t-SNE

Procedia PDF Downloads 321
2079 Variations in the 7th Lumbar (L7) Vertebra Length Associated with Sacrocaudal Fusion in Greyhounds

Authors: Sa`ad M. Ismail, Hung-Hsun Yen, Christina M. Murray, Helen M. S. Davies

Abstract:

The lumbosacral junction (where the 7th lumbar vertebra (L7) articulates with the sacrum) is a clinically important area in the dog. L7 is normally shorter than the other lumbar vertebrae, and it has been reported that variations in L7 length may be associated with other abnormal anatomical findings, including reduction or absence of part of the median sacral crest. In this study, 53 greyhound cadavers were placed in right lateral recumbency, and two lateral radiographs of the lumbosacral region were taken for each greyhound. The lengths of the 6th lumbar (L6) vertebra and L7 were measured using radiographic measurement software and were defined as the mean of three lines (a dorsal, a middle, and a ventral line) drawn from the caudal to the cranial edge of the L6 and L7 vertebrae between specific landmarks. Sacrocaudal fusion was found in 41.5% of the greyhounds. The mean L6 length, L7 length, and L6/L7 length ratio of the greyhounds with sacrocaudal fusion were all greater than those of greyhounds with standard sacrums (three sacral vertebrae). There was a significant difference (P < 0.05) in mean L7 length between greyhounds without sacrocaudal fusion (mean = 29.64, SD ± 2.07) and those with sacrocaudal fusion (mean = 30.86, SD ± 1.80), but there was no significant difference in mean L6 length. Among the different types of sacrocaudal fusion, the longest L7 was found in greyhounds with type D sacrums, an intermediate length in those with type B, and the shortest in those with type C; the mean L6/L7 ratios were 1.11 (SD ± 0.043), 1.15 (SD ± 0.025), and 1.15 (SD ± 0.011) for types B, C, and D, respectively. No significant differences in the mean lengths of L6 or L7 were found among the different types of sacrocaudal fusion. The occurrence of sacrocaudal fusion might affect directly connected anatomical structures such as L7. The variation in L7 length between greyhounds with and without sacrocaudal fusion may reflect the sequence of the fusion process. Variations in L7 length in greyhounds may be associated with the occurrence of sacrocaudal fusion. Variation in vertebral length may affect the alignment and biomechanical properties of the sacrum and may alter its loading. We conclude that any variation in the anatomical features of the sacrum might change the function of the sacrum or of the surrounding anatomical structures.
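The following Python sketch mirrors the style of measurement described above: averaging three radiographic lines per vertebra, forming the L6/L7 ratio, and comparing L7 lengths between fused and non-fused groups. All values are hypothetical (chosen to echo the reported means and standard deviations), and since the abstract does not state which statistical test was used, an independent-samples t-test is assumed here.

```python
import numpy as np
from scipy import stats

# Hypothetical radiographic line measurements (mm): dorsal, middle, ventral.
l7_lines = np.array([30.2, 30.9, 30.5])
l6_lines = np.array([34.1, 34.6, 34.3])

l7_length = l7_lines.mean()   # vertebral length = mean of the three lines
l6_length = l6_lines.mean()
print("L7 length:", l7_length, "L6/L7 ratio:", round(l6_length / l7_length, 3))

# Illustrative group comparison: L7 lengths with vs. without sacrocaudal fusion.
l7_fused = np.random.normal(30.86, 1.80, 22)     # synthetic values echoing reported mean/SD
l7_unfused = np.random.normal(29.64, 2.07, 31)
t_stat, p_value = stats.ttest_ind(l7_fused, l7_unfused)
print("t =", round(float(t_stat), 2), "p =", round(float(p_value), 4))
```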

Keywords: biomechanics, greyhound, sacrocaudal fusion, locomotion, 6th lumbar (L6) vertebra, 7th lumbar (L7) vertebra, L6/L7 length ratio

Procedia PDF Downloads 342
2078 Determination of Viscosity and Degree of Hydrogenation of Liquid Organic Hydrogen Carriers by Cavity Based Permittivity Measurement

Authors: I. Wiemann, N. Weiß, E. Schlücker, M. Wensing

Abstract:

A very promising alternative to compression or cryogenics is the chemical storage of hydrogen in liquid organic hydrogen carriers (LOHC). These carriers enable high energy density and, at the same time, allow efficient and safe storage under ambient conditions without leakage losses. Another benefit of this storage medium is that it can be transported using the infrastructure already available for fossil fuels. Efficient use of LOHC requires precise process control, which in turn requires a number of sensors to measure all relevant process parameters, for example, the level of hydrogen loading of the carrier. The degree of loading determines the energy content of the storage carrier and at the same time reflects the modification of the chemical structure of the carrier molecules. This variation can be detected in different physical properties such as permittivity, viscosity, or density; for example, each degree of loading corresponds to a different viscosity value. Conventional approaches currently use invasive viscosity measurements or near-line measurements to obtain quantitative information. This study investigates the permittivity changes resulting from changes in hydrogenation degree (chemical structure) and temperature. Based on calibration measurements, the degree of loading and the temperature of the LOHC can thus be determined by comparatively simple permittivity measurements in a cavity resonator; viscosity and density can subsequently be calculated. An experimental setup with a heating device and a flow test bench was designed. By varying the temperature in the range of 293.15 K to 393.15 K and the flow velocity up to 140 mm/s, corresponding changes in the resonance frequency were determined in the hundredths of the GHz range. This approach allows inline process monitoring of the hydrogenation of the liquid organic hydrogen carrier (LOHC).
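A minimal Python sketch of the calibration idea, assuming a fixed temperature: known degrees of loading are paired with measured cavity resonance frequencies, and an inline frequency reading is inverted by interpolation to estimate the loading. The frequency values, loading grid, and monotonic relationship are illustrative assumptions rather than data from the study.

```python
import numpy as np

# Hypothetical calibration at a fixed temperature: cavity resonance frequency (GHz)
# measured for known degrees of hydrogen loading (0 = unloaded, 1 = fully loaded).
calib_loading = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
calib_freq = np.array([2.4500, 2.4412, 2.4327, 2.4245, 2.4166])  # shifts in the hundredths of GHz

def loading_from_frequency(freq_ghz):
    """Invert the calibration curve by interpolation (frequency decreases
    monotonically with loading in this illustrative data set)."""
    return float(np.interp(freq_ghz, calib_freq[::-1], calib_loading[::-1]))

measured_freq = 2.4370   # GHz, a hypothetical inline measurement
print("estimated degree of loading:", round(loading_from_frequency(measured_freq), 3))
```

In practice a second calibration dimension (temperature) would be added, for example as a 2D lookup table or fitted surface, since the permittivity depends on both the degree of loading and the temperature.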

Keywords: hydrogen loading, LOHC, measurement, permittivity, viscosity

Procedia PDF Downloads 56