Search results for: benefit cost ratio
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10030

760 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach

Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes

Abstract:

Groundwater monitoring is classically performed in a network of piezometers in industrial sites. Groundwater flow parameters, such as direction, sense and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done on the whole column of water inside each borehole to provide concentration values for each piezometer location. These flow and concentration values give a global 'static' image of the potential evolution of a contaminant plume in the shallow aquifer, with huge uncertainties in time and space scales and in mass discharge dynamics. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing the tools and methodologies for (i) a direct and multilevel measurement of groundwater velocities in each piezometer and (ii) a calculation of the potential flux of dissolved contaminants in the shallow aquifer. Lab-scale experiments have been designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous media and the Navier-Stokes equations in the borehole. The first step of the current study focused on groundwater flow at the porous media/piezometer interface. Large uncertainties between direct flow rate measurements in the borehole and the Darcy flow rate in the porous media were characterized during experiments and modeling. The structure and location of the tools in the borehole also impacted the results and uncertainties of the velocity measurement. In parallel, a direct-push tool was tested and gave more accurate results. The second step of the study focused on the mass flux of dissolved contaminants in groundwater. Several active and passive commercial and R&D tools have been tested in the sandbox, and reactive transport modeling has been performed to validate the experiments at the lab-scale. Some tools will be selected and deployed in field assays to better assess the mass discharge of dissolved contaminants in an industrial site. The long-term subsurface environmental strategy is targeting an in-situ, real-time, remote and cost-effective monitoring of groundwater.
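
As background to the flow comparison described in this abstract, the following minimal sketch (not taken from the study; all parameter values are assumed) illustrates how a Darcy flux in the porous medium relates to the seepage velocity and to the apparent flux through an open borehole, where a flow-convergence factor alpha stands in for the borehole distortion effect the authors characterize experimentally.

```python
# Minimal illustration (not the authors' model): Darcy flux vs. apparent
# flux measured inside an open borehole. All parameter values are invented.

K = 1e-4        # hydraulic conductivity of the sand [m/s] (assumed)
grad_h = 0.01   # hydraulic gradient between piezometers [-] (assumed)
n_e = 0.30      # effective porosity [-] (assumed)
alpha = 2.0     # borehole flow-convergence factor [-] (placeholder; in the
                # study this distortion is characterized experimentally)

q = K * grad_h            # Darcy flux in the porous medium [m/s]
v_seepage = q / n_e       # average linear (seepage) velocity [m/s]
q_borehole = alpha * q    # apparent horizontal flux through the open borehole

print(f"Darcy flux       q = {q:.2e} m/s")
print(f"Seepage velocity v = {v_seepage:.2e} m/s")
print(f"In-borehole flux   = {q_borehole:.2e} m/s (alpha = {alpha})")
```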

Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux

Procedia PDF Downloads 153
759 Modeling and Design of a Solar Thermal Open Volumetric Air Receiver

Authors: Piyush Sharma, Laltu Chandra, P. S. Ghoshdastidar, Rajiv Shekhar

Abstract:

Metals processing operations such as melting and heat treatment of metals are energy-intensive, requiring temperatures greater than 500 °C. The desired temperature in these industrial furnaces is attained by circulating electrically-heated air. In most of these furnaces, electricity produced from captive coal-based thermal power plants is used. Solar thermal energy could be a viable heat source in these furnaces. A retrofitted solar convective furnace (SCF) concept, which uses solar thermal generated hot air, has been proposed. Critical to the success of an SCF is the design of an open volumetric air receiver (OVAR), which can heat air in excess of 800 °C. The OVAR is placed on top of a tower and receives concentrated solar radiation from a heliostat field. Absorbers, the mixer assembly, and the return air flow chamber (RAFC) are the major components of an OVAR. The absorber is a porous structure that transfers heat from concentrated solar radiation to ambient air, referred to as primary air. The mixer ensures a uniform air temperature at the receiver exit. Flow of the relatively cooler return air in the RAFC ensures that the absorbers do not fail by overheating. In an earlier publication, the detailed design basis, fabrication, and characterization of a 2 kWth open volumetric air receiver (OVAR) based laboratory solar air tower simulator were presented. Development of an experimentally-validated, CFD-based mathematical model which can ultimately be used for the design and scale-up of an OVAR has been the major objective of this investigation. In contrast to the published literature, where flow and heat transfer have been modeled primarily in a single absorber module, the present study has modeled the entire receiver assembly, including the RAFC. Flow and heat transfer calculations have been carried out in ANSYS using the LTNE model. The complex return air flow pattern in the RAFC requires complicated meshes and is computationally and time intensive. Hence a simple, realistic 1-D mathematical model, which circumvents the need for carrying out detailed flow and heat transfer calculations, has also been proposed. Several important results have emerged from this investigation. Circumferential electrical heating of absorbers can mimic frontal heating by concentrated solar radiation reasonably well in testing and characterizing the performance of an OVAR. Circumferential heating, therefore, obviates the need for expensive high solar concentration simulators. Predictions suggest that the ratio of power on aperture (POA) and mass flow rate of air (MFR) is a normalizing parameter for characterizing the thermal performance of an OVAR. Increasing POA/MFR increases the maximum temperature of the air but decreases the thermal efficiency of an OVAR. Predictions of the 1-D mathematical model are within 5% of the ANSYS predictions, and the computation time is reduced from about 5 hours to a few seconds.
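
As a side note, the following toy energy balance (not the authors' 1-D model; the efficiency and all values are assumed) illustrates why POA/MFR acts as a normalizing parameter: for a fixed thermal efficiency, the outlet air temperature depends on power on aperture and mass flow rate only through their ratio.

```python
# Simple steady-state energy balance for an air receiver (illustration only,
# not the authors' 1-D model). For a fixed thermal efficiency, the air
# temperature rise depends on POA and MFR only through their ratio.

cp = 1005.0      # specific heat of air [J/(kg K)]
eta = 0.7        # assumed receiver thermal efficiency [-]
T_in = 300.0     # inlet (ambient) air temperature [K]

def outlet_temperature(poa_w, mfr_kg_s):
    """Outlet air temperature from POA [W] and air mass flow rate [kg/s]."""
    return T_in + eta * poa_w / (mfr_kg_s * cp)

# Doubling both POA and MFR leaves POA/MFR, and hence T_out, unchanged.
print(outlet_temperature(2000.0, 0.005))   # POA/MFR = 4.0e5 J/kg
print(outlet_temperature(4000.0, 0.010))   # same ratio, same T_out
```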

Keywords: absorbers, mixer assembly, open volumetric air receiver, return air flow chamber, solar thermal energy

Procedia PDF Downloads 179
758 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, sales of electric vehicles (EVs) have increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, and five product attributes of both EVs and internal combustion engine vehicles (ICEVs) are selected. A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of the two vehicles are calculated for a respondent, which then determines his/her choice. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions. For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
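
The simulation logic described above can be sketched as follows. This is an illustrative toy model, not the authors' estimated conjoint model: the part-worth coefficients, the price distribution for 2030 and the two-attribute utility function are all assumptions.

```python
import numpy as np

# Toy conjoint + Monte Carlo sketch: each respondent has part-worth utilities,
# chooses the vehicle with the higher total utility, and the EV price is drawn
# from an assumed distribution for 2030.

rng = np.random.default_rng(0)
n_respondents, n_draws = 500, 1000

# Hypothetical part-worth coefficients per respondent.
beta_price = rng.normal(-1.0, 0.3, n_respondents)   # utility per price unit
beta_ev = rng.normal(0.5, 1.0, n_respondents)       # EV-specific preference

icev_price = 1.0                                     # normalized ICEV price
ev_price_draws = rng.normal(0.9, 0.15, n_draws)      # assumed 2030 EV prices

shares = []
for p_ev in ev_price_draws:
    u_ev = beta_ev + beta_price * p_ev               # total utility of the EV
    u_icev = beta_price * icev_price                 # total utility of the ICEV
    shares.append(np.mean(u_ev > u_icev))            # share choosing the EV

print(f"mean EV share {np.mean(shares):.2f}, "
      f"5th-95th pct [{np.percentile(shares, 5):.2f}, "
      f"{np.percentile(shares, 95):.2f}]")
```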

Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 53
757 Creating Renewable Energy Investment Portfolio in Turkey between 2018-2023: An Approach on Multi-Objective Linear Programming Method

Authors: Berker Bayazit, Gulgun Kayakutlu

Abstract:

The World Energy Outlook shows that energy markets will change substantially within a few forthcoming decades. First, determined action plans according to COP21 and the aim of CO₂ emission reduction already have an impact on the policies of countries. Secondly, rapidly changing technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behaviors of countries. Furthermore, the share of electricity in global energy consumption is expected to be as high as 40 percent in 2040. Electric vehicles, heat pumps, new electronic devices and digital improvements will be the outstanding technologies, and such innovations will be the hallmark of the market modifications. In order to meet the highly increasing electricity demand caused by these technologies, countries have to make new investments in electricity production, transmission and distribution. Specifically, the electricity generation mix becomes vital both for the prevention of CO₂ emissions and for the reduction of power prices. The majority of research and development investments are made in the field of electricity generation. Hence, primary source diversity and source planning of electricity generation are crucial for improving the wealth of citizens' lives. Approaches considering the CO₂ emissions and total cost of generation are necessary but not sufficient to evaluate and construct the product mix. On the other hand, employment and a positive contribution to macroeconomic values are important factors that have to be taken into consideration. This study aims to constitute new investments in renewable energies (solar, wind, geothermal, biogas and hydropower) between 2018 and 2023 under four different goals. Therefore, a multi-objective programming model is proposed to optimize the goals of minimizing the CO₂ emission, investment amount and electricity sales price while maximizing the total employment and the positive contribution to the current deficit. In order to avoid user preference among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The achievements are discussed in comparison with the current policies. Our study shows that new policies such as huge capacity allotments might be debatable, although the obligation for local production is positive. Improvements in grid infrastructure and a re-design of the support for biogas and geothermal can be recommended.
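
For illustration, a schematic multi-objective LP of this kind is sketched below using a simple weighted-sum scalarization (a simplification; the paper combines Dinkelbach's algorithm with Guzel's approach to avoid fixing user preferences, which is not reproduced here). All technology coefficients and the demand target are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Schematic renewable capacity portfolio LP (toy numbers, not the paper's data,
# and a weighted-sum scalarization rather than the Dinkelbach/Guzel approach).
# Decision variables: new capacities of solar, wind, geothermal, biogas, hydro.

techs = ["solar", "wind", "geothermal", "biogas", "hydro"]
co2   = np.array([0.05, 0.02, 0.04, 0.10, 0.01])   # emissions per unit (assumed)
cost  = np.array([0.7, 1.0, 2.5, 2.0, 1.5])        # investment per unit (assumed)
jobs  = np.array([1.2, 0.8, 0.6, 1.5, 0.9])        # employment per unit (assumed)

weights = {"co2": 1.0, "cost": 1.0, "jobs": 1.0}   # user-chosen trade-off
c = weights["co2"] * co2 + weights["cost"] * cost - weights["jobs"] * jobs

# Constraint: total new capacity must meet an assumed target of 20 units.
res = linprog(c, A_ub=[-np.ones(5)], b_ub=[-20.0], bounds=[(0, 10)] * 5)
print(dict(zip(techs, np.round(res.x, 2))))
```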

Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy

Procedia PDF Downloads 231
756 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing due to their capability of providing moderate security and relieving the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. Based on the above problems, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code originates from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance the overall network performance. In consideration of system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler for broadcasting the transmitted optical signals to the input port of each optical receiver. Note that the parameter N for the PMS code is the code length. In addition, the proposed SAC network uses superluminescent diodes (SLDs) as light sources, which can save a lot of system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. In order to simplify the analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same. Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' has equal probability. Subsequently, by taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network improves performance by about 25% compared with networks using other codes at BER = 10⁻⁹. This is because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes has an opportunity to be applied in next-generation optical networks.
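
The fixed in-phase cross-correlation property that motivates M-sequence-derived codes can be illustrated with a short sketch (this shows the generic M-sequence property, not the PMS construction itself):

```python
import numpy as np

# Illustration of the fixed in-phase cross-correlation (IPCC) of
# M-sequence-based codes. A length-7 maximal-length sequence and its cyclic
# shifts are used as codewords; any two distinct codewords overlap in exactly
# two '1' chips, i.e. the IPCC is constant.

m_seq = np.array([1, 1, 1, 0, 1, 0, 0])                  # length-7 m-sequence
codewords = [np.roll(m_seq, k) for k in range(len(m_seq))]

for i in range(len(codewords)):
    for j in range(i + 1, len(codewords)):
        ipcc = int(np.dot(codewords[i], codewords[j]))   # count of shared '1's
        assert ipcc == 2                                 # fixed IPCC value
print("All codeword pairs have IPCC =", 2)
```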

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 375
755 Laboratory Assessment of Electrical Vertical Drains in Composite Soils Using Kaolin and Bentonite Clays

Authors: Maher Z. Mohammed, Barry G. Clarke

Abstract:

As an alternative to stone columns in fine-grained soils, it is possible to create stiffened columns of soil using electroosmosis (electroosmotic piles). The purpose of this research program is to establish the effectiveness and efficiency of the process in different soils. The aim of this study is to assess the capability of electroosmosis treatment in a range of composite soils. The combined electroosmotic and preloading equipment developed by Nizar and Clarke (2013) was used, with an octagonal array of anodes surrounding a single cathode in a nominal 250 mm diameter, 300 mm deep cylinder of soil and an 80 mm anode-to-cathode distance. Coiled copper springs were used as electrodes to allow the soil to consolidate either due to an external vertically applied load or due to electroosmosis. The equipment was modified to allow the temperature to be monitored during the test. Electroosmotic tests were performed on China Clay Grade E kaolin and calcium bentonite (Bentonex CB) mixed with sand fraction C (BS 1881 part 131) at different ratios by weight (0, 23, 33, 50 and 67%), subjected to applied voltages of 5, 10, 15 and 20 V. The soil slurry was prepared by mixing the dry soil with water to 1.5 times the liquid limit of the soil mixture. The mineralogical and geotechnical properties of the tested soils were measured before the electroosmosis treatment began. In the electroosmosis cell tests, the settlement, expelled water, variation of electrical current and applied voltage, and the generated heat were monitored over the test time for 24 osmotic tests. Water content was measured at the end of each test. The electroosmotic tests are divided into three phases. In Phase 1, 15 kPa was applied to simulate a working platform and produce a uniform soil which had been deposited as a slurry. 50 kPa was used in Phase 3 to simulate a surcharge load. The electroosmotic treatment was only performed during Phase 2, where a constant voltage was applied through the electrodes in addition to the 15 kPa pressure. This phase was stopped when no further water was expelled from the cell, indicating that the electroosmotic process had stopped due either to the degradation of the anode or to the flow driven by the hydraulic gradient exactly balancing the electroosmotic flow, resulting in no flow. Control tests for each soil mixture were carried out to assess the behaviour of the soil samples subjected only to an increase of vertical pressure, which is 15 kPa in Phase 1 and 50 kPa in Phase 3. Analysis of the experimental results from this study showed a significant dewatering effect on the soil slurries. The water discharged by the electroosmotic treatment process decreased as the sand content increased. Soil temperature increased significantly when electrical power was applied and dropped when the applied DC power was turned off or when the electrode degraded. The highest increase in temperature was found in pure clays at higher applied voltages after about 8 hours of the electroosmosis test.

Keywords: electrokinetic treatment, electrical conductivity, electroosmotic consolidation, electroosmosis permeability ratio

Procedia PDF Downloads 150
754 Organic Light Emitting Devices Based on Low Symmetry Coordination Structured Lanthanide Complexes

Authors: Zubair Ahmed, Andrea Barbieri

Abstract:

The need to reduce energy consumption has prompted a considerable research effort for developing alternative energy-efficient lighting systems to replace conventional light sources (i.e., incandescent and fluorescent lamps). Organic light emitting device (OLED) technology offers the distinctive possibility to fabricate large-area flat devices by vacuum or solution processing. Lanthanide β-diketonate complexes, due to the unique photophysical properties of Ln(III) ions, have been explored as emitting layers in OLED displays and in solid-state lighting (SSL) in order to achieve high efficiency and color purity. For such applications, an excellent photoluminescence quantum yield (PLQY) and stability are the two key points, which can be achieved simply by selecting the proper organic ligands around the Ln ion in the coordination sphere. Regarding the strategies to enhance the PLQY, the most common is the suppression of radiationless deactivation pathways due to the presence of high-frequency oscillators (e.g., O-H and C-H groups) around the Ln centre. Recently, a different approach to maximize the PLQY of Ln(β-DKs) has been proposed (named 'Escalate Coordination Anisotropy', ECA). It is based on the assumption that coordinating the Ln ion with different ligands will break the centrosymmetry of the molecule, leading to less forbidden transitions (loosening the constraints of the Laporte rule). OLEDs based on such complexes are available, but with low efficiency and stability. In order to get efficient devices, there is a need to develop new Ln complexes with enhanced PLQYs and stabilities. For this purpose, Ln complexes, both visible- and NIR-emitting, with various coordination structures based on various fluorinated/non-fluorinated β-diketones and O/N-donor neutral ligands were synthesized using a one-step in situ method. In this method, the β-diketones, base, LnCl₃·nH₂O and neutral ligands were mixed in a 3:3:1:1 molar ratio in ethanol, which gave air- and moisture-stable complexes. Further, they were characterized by means of elemental analysis, NMR spectroscopy and single-crystal X-ray diffraction. Thereafter, their photophysical properties were studied to select the best complexes for the fabrication of stable and efficient OLEDs. Finally, the OLEDs were fabricated and investigated using these complexes as emitting layers along with other organic layers, namely NPB, i.e., N,N′-di(1-naphthyl)-N,N′-diphenyl-(1,1′-biphenyl)-4,4′-diamine (hole-transporting layer), BCP, i.e., 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline (hole blocker), and Alq₃ (electron-transporting layer). The layers were sequentially deposited under a high-vacuum environment by thermal evaporation onto ITO glass substrates. Moreover, co-deposition techniques were used to improve charge transport in the devices and to avoid quenching phenomena. The devices show strong electroluminescence at 612, 998, 1064 and 1534 nm, corresponding to the ⁵D₀ → ⁷F₂ (Eu), ²F₅/₂ → ²F₇/₂ (Yb), ⁴F₃/₂ → ⁴I₉/₂ (Nd) and ⁴I₁₃/₂ → ⁴I₁₅/₂ (Er) transitions. All the fabricated devices show good efficiency as well as stability.

Keywords: electroluminescence, lanthanides, paramagnetic NMR, photoluminescence

Procedia PDF Downloads 102
753 A Method to Identify the Critical Delay Factors for Building Maintenance Projects of Institutional Buildings: Case Study of Eastern India

Authors: Shankha Pratim Bhattacharya

Abstract:

In general, building repair and renovation projects are minor in nature. They require less attention as the primary cost involvement is relatively small. Although building repair and maintenance projects look simple, they involve much complexity during execution. Much of the existing research indicates that a few uncertain situations are usually linked with maintenance projects. These may not be read properly at the planning stage of the projects and finally lead to time overrun. Building repair and maintenance become essential and periodic after commissioning of the building. In institutional buildings, regular maintenance projects also include addition-alteration and modification activities. Increases in student admissions, new departments and sections, new laboratories and workshops, and upgrading of existing laboratories are very common in institutional buildings in developing nations like India. The projects become very critical because they involve space problems, architectural design issues, structural modification, etc. One of the prime factors in institutional building maintenance and modification projects is the time constraint. Mostly, they are required to be executed within a specific non-working period. The present research considered only the institutional buildings of the eastern part of India to analyse repair and maintenance project delay. A general survey was conducted among technical institutes to find the causes and corresponding nature of construction delay factors. Five technical institutes are considered in the present study, with repair, renovation, modification and extension types of projects. Construction delay factors are categorically subdivided into four groups, namely material, manpower (works), contract and site. The survey data are collected for the nature of delay responsible for a specific project and the absolute amount of delay through the proposed and actual duration of work. In the first stage of the paper, a relative importance index (RII) is proposed for the delay factors. The occurrence of the delay factors is also judged by its frequency-severity nature. Finally, the delay factors are rated and linked with the type of work. In the second stage, a regression analysis is executed to establish an empirical relationship between the actual time of a project and the percentage of delay. It also indicates the impact of the factors on delay responsibility. Ultimately, the present paper makes an effort to identify the critical delay factors for repair and renovation type projects in eastern Indian institutional buildings.
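
The relative importance index is commonly computed as RII = ΣW / (A × N), with W the ratings given by respondents, A the highest rating on the scale and N the number of respondents; the abstract does not state the exact scale used, so the sketch below assumes a 1-5 scale and hypothetical ratings.

```python
# Relative importance index (RII) as commonly used in construction-delay
# surveys: RII = sum(W) / (A * N). The paper's exact rating scale is not given,
# so a 1-5 Likert scale and invented ratings are assumed here.

def rii(ratings, highest=5):
    """Relative importance index of one delay factor from survey ratings."""
    return sum(ratings) / (highest * len(ratings))

# Hypothetical ratings from five institutes for two delay factors.
material_delay = [4, 5, 3, 4, 5]
contract_delay = [2, 3, 2, 3, 2]

print(f"RII material: {rii(material_delay):.2f}")   # closer to 1 = more important
print(f"RII contract: {rii(contract_delay):.2f}")
```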

Keywords: delay factor, institutional building, maintenance, relative importance index, regression analysis, repair

Procedia PDF Downloads 238
752 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model

Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond

Abstract:

The surveillance of infectious diseases is necessary to describe their occurrence and to help the planning, implementation and evaluation of risk mitigation activities. However, the exact number of detected cases may remain unknown when surveillance is based on serological tests, because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in the field of disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses which exhibited at least one positive result in serology using the viral neutralization test between 2006 and 2013 were used for analysis (n=1,645). Data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, epidemiologists) was reached to consider seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model with R software. The binomial denominator was the number of horses tested in each infected town. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013. Subsequently, the sensitivity of the surveillance system was estimated as the ratio of the number of detected outbreaks to the total number of outbreaks that occurred (including unreported outbreaks), estimated using the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research. Such rules, adjusted to the local environment, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting antibody titers could be useful regarding other human and animal diseases and zoonoses when there is a lack of accurate information in the literature about the serological response in naturally infected subjects. This study shows how capture-recapture methods may help to estimate the sensitivity of an imperfect surveillance system and to valorize surveillance data. The sensitivity of the surveillance system for equine viral arteritis is relatively high and supports its relevance for preventing the disease from spreading.
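
The reported sensitivity follows directly from the two outbreak counts given in the abstract; a quick check (the credible intervals come from the Bayesian model and are not reproduced here):

```python
# Surveillance sensitivity = detected outbreaks / estimated total number of
# outbreaks (ZTB capture-recapture estimate). Numbers are taken from the
# abstract; the credible interval itself is not reproduced here.

detected_outbreaks = 177   # outbreaks identified from 2006 to 2013
estimated_total = 215      # ZTB estimate (95% CrI 195-249)

sensitivity = detected_outbreaks / estimated_total
print(f"Surveillance sensitivity = {sensitivity:.0%}")   # about 82%
```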

Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance

Procedia PDF Downloads 282
751 Adaption to Climate Change as a Challenge for the Manufacturing Industry: Finding Business Strategies by Game-Based Learning

Authors: Jan Schmitt, Sophie Fischer

Abstract:

After the Corona pandemic, climate change is a further, long-lasting challenge that society must deal with. Ongoing climate change needs to be prevented. Nevertheless, adaptation to the already changed climate conditions has to be the focus in many sectors. Recently, the decisive role of economic sectors with high value added could be seen in the Corona crisis. Hence, the manufacturing industry, as such a sector, needs to be prepared for climate change and adaptation. Several examples from the manufacturing industry show the importance of a strategic effort in this field: outsourcing major parts of the value chain to suppliers in other countries and optimizing procurement logistics in a time-, storage- and cost-efficient manner within a network of global value creation can make companies vulnerable to climate-related disruptions. For example, the total damage costs after the 2011 flood disaster in Thailand, including costs for delivery failures, were estimated at 45 billion US dollars worldwide. German car manufacturers were also affected by supply bottlenecks and had to close their plants in Thailand for a short time. Another OEM had to reduce its production output. In this contribution, a game-based learning approach is presented, which should enable manufacturing companies to derive their own strategies for climate adaptation out of a mix of different actions. The game-based learning approach is designed based on data from a regional study of small, medium and large manufacturing companies in Mainfranken, a strongly industrialized region of northern Bavaria (Germany). From this, the actual state of efforts towards climate adaptation is evaluated. First, the results are used to collect single actions for manufacturing companies, and second, further actions can be identified. Then, a variety of climate adaptation activities can be clustered according to the scope of activity of the company. The combination of different actions, e.g. the renewal of the building envelope with regard to thermal insulation, together with their benefits and drawbacks, leads to a specific climate adaptation strategy for each company. Within the game-based approach, the players take on different roles in a fictional company and discuss the order and the characteristics of each action taken into their climate adaptation strategy. Different indicators, such as economic, ecological and stakeholder satisfaction, compare the success of the respective measures in a competitive format with other virtual companies deriving their own strategies. A 'play through' of climate change scenarios with targeted adaptation actions illustrates the impact of different actions and their combination on the fictional company.

Keywords: business strategy, climate change, climate adaption, game-based learning

Procedia PDF Downloads 189
750 Psycho-Social Associates of Deliberate Self-Harm in Rural Sri Lanka

Authors: P. H. G. J. Pushpakumara, A. M. P. Adikari, S. U. B. Tennakoon, Ranil Abeysinghe, Andrew Dawson

Abstract:

Introduction: Deliberate self-harm (DSH) is a global public health problem. Since 1950, suicide rates in Sri Lanka have been among the highest national rates in the world. DSH has become an increasingly common response to emotional distress in young adults. However, the reasons for this remain unclear. Objectives: The descriptive component of this study was conducted to identify the epidemiological pattern of DSH and suicide in Kurunegala District (KD). Assessment of the associations between DSH and socio-cultural, economic and psychological factors was the objective of the case-control component. Methods: Prospective data collection on DSH and suicide was conducted at all (46) hospitals and all (28) police stations in KD for thirty-six months from 1st January 2011, as the descriptive component. The case-control component was conducted at T.H. Kurunegala (THK) for eighteen months from 1st July 2011. Cases (n=439) were randomly selected from blocks of 7 consecutively admitted consenting DSP patients using a computer program. Individuals matched one-to-one for age, sex and residential divisional secretariat division were randomly selected as controls from patients presenting to the Outpatient Department. The Structured Clinical Interview for DSM-IV-TR Axis I and II Disorders was used to diagnose psychiatric disorders. Validated tools were used to measure other constructs. Results: Suicide incidences in KD were 21.6, 20.7 and 24.3 per 100,000 population in 2011-2013 (male:female ratios 5.7, 4.4 and 6.4). 60% of suicides were due to poisoning. DSP incidences were 205.4, 248.3 and 202.5 per 100,000 population in 2011-2013. The highest age-standardized male DSP incidence was reported at 20-24 years (769.6/100,000) and the highest female incidence at 15-19 years (1304.0/100,000). Being married (age >25 years), a monthly family income of less than Rs. 30,000, not achieving G.C.E (O/L) qualifications, being a school drop-out, not being in a permanent position in occupation, and being a manual or own-account worker were significantly associated with DSP. Perceiving the quality of the relationship with parents, spouse/girlfriend/boyfriend and siblings as bad or very bad was associated with 8, 40 and 10.5 times higher risk, respectively. Feelings and experiences of neglect, other emotional abuse, and a feeling of insecurity within the family in childhood, as well as having a contact history, carried an excess risk for DSP. Cases were less likely to seek help. Further, they had significantly lower scores for life skills and life-skills application ability. 25.6% of DSH patients had a DSM-IV-TR Axis I and/or Axis II disorder. The presence of a psychiatric disorder carried a 7.7 (95% CI 4.3-13.8) times higher risk for DSP. Conclusion: In general, the pattern of DSH and suicide is unique and different from that of developed, upper-middle-income and lower-middle-income countries. It is a learned way of expressing emotions in difficult situations by vulnerable people.

Keywords: deliberate self-harm, help-seeking, life-skills, mental health, psychological, social, suicide

Procedia PDF Downloads 201
749 Study of Interplanetary Transfer Trajectories via Vicinity of Libration Points

Authors: Zhe Xu, Jian Li, Lvping Li, Zezheng Dong

Abstract:

This work studies an optimized transfer strategy connecting Earth and Mars via the vicinity of libration points, which have been playing an increasingly important role in trajectory design for deep space missions and can be used as an effective alternative solution for an Earth-Mars direct transfer mission in some unusual cases. The vicinities of the libration points of sun-planet systems are becoming potential gateways for future interplanetary transfer missions. By adding fuel to cargo spaceships located at spaceports, the interplanetary round-trip exploration shuttle mission of such a system facility can also form a reusable transportation system. In addition, in some cases, when the spacecraft cruises along invariant manifolds, it can also save a large amount of fuel. Therefore, it is necessary to look for efficient transfer strategies using the invariant manifolds of the libration points. It was found that Earth L1/L2 Halo/Lyapunov orbits and Mars L2/L1 Halo/Lyapunov orbits can be connected with reasonable fuel consumption and flight duration with an appropriate design. In the paper, the halo hopping method and the coplanar circular method are briefly introduced. The former uses differential corrections to systematically generate low-ΔV transfer trajectories between interplanetary manifolds, while the latter considers escape and capture trajectories to and from halo orbits by using impulsive maneuvers at the periapsis of the manifolds about the libration points. In the following, designs of the transfer strategies for the two methods are shown. A comparative performance analysis of the interplanetary transfer strategies of the two methods is carried out accordingly. The comparison of strategies is based on two main criteria: the total fuel consumption required to perform the transfer and the time of flight, as mentioned above. The numerical results show that the coplanar circular method has certain advantages in cost or duration. Finally, an optimized transfer strategy with engineering constraints is searched out and examined to be an effective alternative solution for a given direct transfer mission. This paper investigated the main methods and gave an optimized solution for interplanetary transfer via the vicinity of libration points. Although most Earth-Mars mission planners prefer to build a direct transfer strategy due to its advantage of a relatively short time of flight, the strategies given in the paper can still be regarded as effective alternative solutions because of the advantages mentioned above and a longer departure window than direct transfer.
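
For reference, the dynamical model underlying the halo and Lyapunov orbits mentioned above is the circular restricted three-body problem; its standard non-dimensional equations of motion in the rotating frame (textbook form, not derived in the abstract) are:

```latex
% Circular restricted three-body problem, rotating frame, non-dimensional units:
\begin{aligned}
\ddot{x} - 2\dot{y} &= \frac{\partial \Omega}{\partial x}, \qquad
\ddot{y} + 2\dot{x} = \frac{\partial \Omega}{\partial y}, \qquad
\ddot{z} = \frac{\partial \Omega}{\partial z},\\
\Omega &= \frac{x^{2}+y^{2}}{2} + \frac{1-\mu}{r_{1}} + \frac{\mu}{r_{2}},
\end{aligned}
% where \mu is the mass ratio of the two primaries and r_1, r_2 are the
% distances from the spacecraft to the larger and smaller primary, respectively.
% The libration points are the equilibria of this system.
```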

Keywords: circular restricted three-body problem, halo/Lyapunov orbit, invariant manifolds, libration points

Procedia PDF Downloads 223
748 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks

Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba

Abstract:

The ability for vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc. – collectively known as V2X (Vehicle to Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and the standardization bodies (notably 3GPP) have finally approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface and direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it important to protect V2X messages from attacks that can result in catastrophically wrong decisions/actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM) and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control-plane signaling overloads, as well as privacy preservation issues, which cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow for security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements.

Keywords: authentication, long term evolution, security, vehicle-to-everything

Procedia PDF Downloads 154
747 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems

Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia

Abstract:

The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach to offering SaaS is basing the system's architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations might then differ according to specific requirements met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, transactions need to strictly adhere to the tenant's known business processes. This paper emphasises that security in cloud databases should not be considered an isolated issue. Rather, it should be included in the initial phases of the database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed. Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant-based SaaS. This would then assist cloud service providers to define, implement, and manage security policies in line with tenant customisation requirements whilst assuring the security of the customers' data.
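
One of the requirements implied above, that every data access be scoped to the authenticated tenant when tenants share the same tables, can be illustrated with a minimal sketch (a generic example, not a requirement or design taken from the paper):

```python
import sqlite3

# Minimal illustration of tenant-scoped data access in a shared-table schema.
# The tenant identifier comes from the authenticated session, never from user
# input, so tenants sharing one table cannot read each other's rows.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [("tenant_a", 100.0), ("tenant_b", 250.0)])

def fetch_invoices(connection, authenticated_tenant):
    """Return only the rows belonging to the authenticated tenant."""
    cur = connection.execute(
        "SELECT amount FROM invoices WHERE tenant_id = ?",
        (authenticated_tenant,))
    return cur.fetchall()

print(fetch_invoices(conn, "tenant_a"))   # [(100.0,)] - no cross-tenant rows
```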

Keywords: cloud computing, data management, multi-tenancy, requirements, security

Procedia PDF Downloads 139
746 Assessing the Contribution of Informal Buildings to Energy Inefficiency in Kenya: A Case of Mukuru Slums

Authors: Bessy Thuranira

Abstract:

Buildings, as they are designed and used, may contribute to serious environmental problems because of excessive consumption of energy and other natural resources. Buildings in informal settlements in particular, due to their unplanned physical structure and design, have contributed significantly to the global energy problem, typified by high-level inefficiencies. Energy used in buildings in Africa is estimated to account for the largest share of total national electricity consumption. Over the last decade, assessments of energy consumption and efficiency/inefficiency have focused on formal and modern buildings. This study seeks to go off the beaten path by focusing on energy use in informal settlements. Operationally, it sought to establish the contribution of informal buildings to overall energy consumption in the city and the country at large. This study was carried out in the Mukuru kwa Reuben informal settlement, where there is a distinct manifestation of different settlement morphologies within a small locality. The research narrowed down to three villages (Mombasa, Kosovo and Railway villages) within the settlement that were representative of the different slum housing typologies. Due to the unpredictability and informality of slums, this study takes a multi-methodology approach. Detailed energy audits and measurements are carried out to predict total building consumption and to document building design and envelope, typology, materials and occupancy levels. Moreover, the study uses semi-structured interviews to assess energy supply, cost, access and consumption patterns. Observations and photographs are also used to shed more light on these parameters. The study reveals high energy inefficiencies in slum buildings, mainly related to sub-standard equipment and appliances, building design and settlement layout, and poor access to and utilization/consumption patterns of energy. The impacts of this inefficiency are a high economic burden on the poor, high levels of pollution, lack of thermal comfort and emissions to the environment. The study highlights a set of urban planning and building design principles that can be used to retrofit slums into more energy-efficient settlements. It explores principles of responsive settlement layouts/plans and appropriate building designs that use the beneficial elements of nature to achieve natural lighting, natural ventilation and solar control, so as to create thermally comfortable, energy-efficient and environmentally responsive buildings/settlements. As energy efficiency in informal settlements is a relatively less explored area, it requires further research and policy recommendations, for which this paper will set a background.

Keywords: energy efficiency, informal settlements, renewable energy, settlement layout

Procedia PDF Downloads 113
745 Multi-Criteria Nautical Ports Capacity and Services Planning

Authors: N. Perko, N. Kavran, M. Bukljas, I. Berbic

Abstract:

This paper presents the results of research on a proposed methodology for nautical port capacity planning that introduces a multi-criteria approach with defined criteria and impacts for the Adriatic Sea. The purpose was to analyse the determinants, i.e., the characteristics of the infrastructure and services of allocated nautical port capacity, which, especially nowadays due to the COVID-19 pandemic, are crucial for the successful operation of nautical ports. Given the importance of defined priorities, short-term and long-term planning is essential not only in terms of the development of nautical tourism but also in terms of developing the maritime system; unfortunately, this is not always carried out. Evaluation of the use of resources should follow from a detailed analysis of all aspects of those resources, bearing in mind that nautical tourism should use resources in a sustainable manner and generate effects in the tourism and maritime sectors. Consequently, the identified multiplier effect of nautical tourism, which should be defined and quantified in detail, should be one of the major competitive products on the Croatian Adriatic and in the Mediterranean. Research on nautical tourism is necessary to quantify these effects and to develop the required planning system. In the future, the greatest threat to the long-term sustainable development of nautical tourism could be its further uncontrolled, unlimited and undirected development, especially under the pressure of demand for new moorings in the Mediterranean that is markedly higher than supply. The results of this research are applicable to nautical port management and to decision-makers involved in maritime transport system development. This paper presents the research and the obtained results, namely a developed methodology for nautical port capacity planning based on multi-criteria decision-making. The proposed methodological approach to multi-criteria capacity planning includes four criteria (spatial-transport, cost-infrastructure, ecological and organizational criteria, and additional services). The importance of the criteria and sub-criteria is evaluated and used as the basis for a sensitivity analysis of their importance. Based on the analysis of the identified and quantified importance of the criteria and sub-criteria, as well as the sensitivity analysis and the analysis of changes in the quantified importance, scientific and applicable results are presented. These results have practical applicability for the management of nautical ports in planning capacity increases and further development, and for the adaptation of existing nautical ports. The research is applicable and replicable in other seas, and the results are especially important and useful in this challenging COVID-19 pandemic maritime development framework.
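
A toy version of such a weighted multi-criteria evaluation with a simple weight-sensitivity check is sketched below; the criteria names follow the abstract, but the weights, option scores and the weighted-sum aggregation are illustrative assumptions rather than the paper's model.

```python
import numpy as np

# Toy multi-criteria scoring with a weight-sensitivity check. Weights, scores
# and the weighted-sum aggregation are assumptions for illustration only.

criteria = ["spatial-transport", "cost-infrastructure", "ecological", "services"]
weights = np.array([0.35, 0.30, 0.20, 0.15])        # assumed importance weights

# Scores of three hypothetical port development options against each criterion.
options = {
    "option_A": np.array([0.9, 0.5, 0.6, 0.7]),
    "option_B": np.array([0.6, 0.8, 0.5, 0.6]),
    "option_C": np.array([0.5, 0.6, 0.9, 0.8]),
}

def rank(w):
    scores = {name: float(np.dot(w, s)) for name, s in options.items()}
    return sorted(scores, key=scores.get, reverse=True)

print("baseline ranking:", rank(weights))

# Simple sensitivity analysis: perturb each weight by +/-10% and re-normalize.
for i, c in enumerate(criteria):
    for delta in (-0.1, 0.1):
        w = weights.copy()
        w[i] *= 1 + delta
        w /= w.sum()
        print(f"{c} {delta:+.0%}: {rank(w)}")
```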

Keywords: Adriatic Sea, capacity, infrastructures, maritime system, methodology, nautical ports, nautical tourism, service

Procedia PDF Downloads 169
744 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination

Authors: Gilberto Goracci, Fabio Curti

Abstract:

This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case, the magnetometer data and the EKF state estimations, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real-time and work autonomously during GPS outages. In this way, the provided module shows versatility, as it can be applied to any mission operating in SSO, while at the same time the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results provided by this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
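
The baseline that the RNN correction builds on is the standard EKF measurement update; a generic, textbook-form sketch is given below (the magnetometer measurement model and its Jacobian depend on the IGRF field model and are only stubbed out; this is not the authors' implementation, and the demo uses an invented linear model).

```python
import numpy as np

# Generic extended Kalman filter measurement update (textbook form). The
# magnetometer measurement model h(x) and its Jacobian H are passed in as
# callables because they depend on the geomagnetic field model.

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update with measurement z and noise covariance R."""
    H = H_jac(x)                                   # Jacobian of h at the prior
    y = z - h(x)                                   # innovation (residual)
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y                              # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P           # corrected covariance
    return x_new, P_new

# Tiny synthetic demo with an invented linear measurement model.
x0, P0 = np.zeros(6), np.eye(6)                    # position/velocity state
H_lin = np.hstack([np.eye(3), np.zeros((3, 3))])   # pretend B depends on position
x1, P1 = ekf_update(x0, P0, z=np.array([1.0, 2.0, 3.0]),
                    h=lambda x: H_lin @ x, H_jac=lambda x: H_lin,
                    R=0.1 * np.eye(3))
print(np.round(x1, 2))

# In the hybrid scheme described above, a recurrent network would additionally
# map sequences of (magnetometer measurement, EKF estimate) pairs to a refined
# position/velocity estimate; that learned correction is not reproduced here.
```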

Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field

Procedia PDF Downloads 88
743 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique

Authors: Harpal Singh, Sakshi Batra

Abstract:

The widespread and easy access to multimedia content and the possibility of making numerous copies without significant loss of fidelity have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology. This is the concept of embedding some sort of data or special pattern (watermark) in the multimedia content; this information will later prove ownership in case of a dispute, trace the marked document's dissemination, identify a misappropriating person or simply inform the user about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry increases. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos. It is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address some serious challenges compared to image watermarking schemes, such as real-time requirements in video broadcasting, a large volume of inherently redundant data between frames, and the imbalance between motion and motionless regions, and they are particularly vulnerable to attacks, for example, frame swapping, statistical analysis, rotation, noise, median and crop attacks. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) using a redundant wavelet. This scheme utilizes the various transforms for embedding watermarks on different layers by using hybrid systems. For this purpose, the video frames are partitioned into layers (RGB) and the watermark is embedded in two forms in the video frames, using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright protection as well as reliability. The FrFT orders are used as the encryption key, which allows the watermarking method to be more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key embedding watermarking scheme. Thus, for watermark embedding and extraction, the same key is required. Therefore the key must be shared between the owner and the verifier via some safe network. This paper demonstrates the performance by considering different qualitative metrics, namely peak signal-to-noise ratio, structural similarity index and correlation values, and also applies some attacks to prove the robustness. The experimental results are presented to demonstrate that the proposed scheme can withstand a variety of video processing attacks while preserving imperceptibility.
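
The DWT + SVD embedding step described above can be sketched as follows (the FrFT encryption stage and the RGB layer split are omitted, the embedding strength alpha is an assumed parameter, and this is a generic DWT-SVD scheme rather than the authors' exact algorithm):

```python
import numpy as np
import pywt

# Sketch of a DWT + SVD watermark embedding step: the singular values of the
# host frame's LL sub-band are perturbed by the watermark's singular values.

def embed(host, watermark, alpha=0.05, wavelet="haar"):
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), wavelet)   # 1-level DWT
    Uh, Sh, Vht = np.linalg.svd(LL, full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(watermark.astype(float), full_matrices=False)
    S_marked = Sh + alpha * Sw[: len(Sh)]                       # embed in S
    LL_marked = (Uh * S_marked) @ Vht                           # rebuild LL
    return pywt.idwt2((LL_marked, (LH, HL, HH)), wavelet)

host = np.random.randint(0, 256, (64, 64))        # stand-in for a video frame
mark = np.random.randint(0, 2, (32, 32)) * 255    # stand-in binary watermark
marked = embed(host, mark)
print("embedding distortion (max abs):", np.abs(marked - host).max())
```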

Keywords: discrete wavelet transform, robustness, video watermarking, watermark

Procedia PDF Downloads 213
742 Study of Mechanical Properties of Large Scale Flexible Silicon Solar Modules on Various Substrates

Authors: M. Maleczek, Leszek Bogdan, Kazimierz Drabczyk, Agnieszka Iwan

Abstract:

Crystalline silicon (Si) solar cells are the main product in the market among the various photovoltaic technologies, owing to such advantages as material abundance, high carrier mobilities, a broad spectral absorption range and an established technology. However, photovoltaic devices on stiff substrates are heavier, more fragile and less cost-effective than devices on flexible substrates intended for special applications. The main goal of our work was to incorporate silicon solar cells into various fabrics without any change in the electrical and mechanical parameters of the devices. This work was realized for the GEKON project (No. GEKON2/O4/268473/23/2016) sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management. In our work, polyamide or polyester fabrics were used as flexible substrates in the created devices. The applied fabrics differ in tensile and tear strength. All investigated polyamide fabrics are resistant to weathering and UV, while the polyester ones are resistant to ozone, water and ageing. The examined fabrics are watertight at 100 cm of water per 2 hours. In our work, commercial silicon solar cells with a size of 156 × 156 mm were cut into nine parts (called single solar cells) by diamond saw and laser. The gap and edge after cutting of the solar cells were checked by transmission electron microscopy (TEM) to study the morphology and quality of the prepared single solar cells. Modules with a size of 160 × 70 cm (containing about 80 single solar cells) were created and investigated by electrical and mechanical methods. The weight of the constructed module is about 1.9 kg. Three types of solar cell architectures, namely fabric/EVA/Si solar cell/EVA/film for lamination, backsheet PET/EVA/Si solar cell/EVA/film for lamination, and fabric/EVA/Si solar cell/EVA/tempered glass, were investigated, taking into consideration the type of fabric and the lamination process together with the size of the solar cells. In the investigated devices, EVA is ethylene-vinyl acetate, while PET is polyethylene terephthalate. Depending on the lamination process and the compatibility of the textile with the solar cell, the efficiency of the investigated flexible silicon solar cells was in the range of 9.44-16.64%. Multiple folding and unfolding of the flexible module had no impact on its efficiency, as determined with Instron equipment. The power of the constructed solar module is 30 W, while the voltage is about 36 V. Finally, a solar panel containing five modules with the polyamide fabric and tempered glass will be produced commercially for different applications (dual use).

Keywords: flexible devices, mechanical properties, silicon solar cells, textiles

Procedia PDF Downloads 161
741 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA

Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray

Abstract:

Background: There remains a lack of international standard control reference materials for next generation sequencing-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, avoiding sample comparisons against pooled historical population-based data assemblies or statistical modelling approaches. We have validated such an approach for BRCA copy number variation analysis using iQRS™-CNVSUITE versus multiplex ligation-dependent probe amplification (MLPA). Methods: Standard BRCA copy number variation analysis was compared between multiplex ligation-dependent probe amplification and next generation sequencing using a cohort of 198 breast/ovarian cancer patients. Next generation sequencing-based copy number variation analysis of samples spiked with iQRS™ dsDNA biomimetics was performed using the proprietary CNVSUITE software. Multiplex ligation-dependent probe amplification analyses were performed on an ABI-3130 sequencer and analysed with Coffalyser software. Results: Concordance of BRCA copy number variation events between multiplex ligation-dependent probe amplification and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between multiplex ligation-dependent probe amplification and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and a range from 100 copies. Traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss based upon a relative ratio threshold, following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks. BRCA copy number variation analysis using iQRS™-CNVSUITE was successfully validated and ISO15189 accredited and now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (the iQRS™ dsDNA mimetic) in next generation sequencing offers a more robust, sample-independent approach for the assessment of copy number variation events compared to multiplex ligation-dependent probe amplification. The approach simplifies data analysis, improves independent sample analysis, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single-sample copy number variation analytics and further decentralisation of diagnostics to single-patient sample assessment.
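For readers unfamiliar with the reported metrics, the short Python sketch below shows how sensitivity, specificity, PPV and NPV are computed from a confusion matrix. The counts used are hypothetical and chosen only to illustrate the definitions; they do not reproduce the study's concordance table.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Hypothetical counts, chosen only to illustrate the formulas
# (they do not reproduce the study's full concordance data).
sens, spec, ppv, npv = diagnostic_metrics(tp=840, fp=0, tn=400, fn=1)
print(f"sensitivity={sens:.2%}  specificity={spec:.2%}  "
      f"PPV={ppv:.2%}  NPV={npv:.2%}")
```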

Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration

Procedia PDF Downloads 56
740 Euthanasia Reconsidered: Voting and Multicriteria Decision-Making in Medical Ethics

Authors: J. Hakula

Abstract:

Discussion on euthanasia is a continuous process. Euthanasia is defined as 'deliberately ending a patient's life by administering life-ending drugs at the patient's explicit request'. With few exceptions, human societies worldwide have not been able to agree on some fundamental issues concerning ultimate decisions of life and death. Outranking methods from voting-oriented social choice theory and multicriteria decision-making (MCDM) can be applied to issues in medical ethics. There is a wide range of voting methods, and using different methods the same group of voters can end up with different outcomes. In the MCDM context, decision alternatives can be substituted for candidates, and criteria for voters. The view chosen here is that of a single decision-maker. Initially, three alternatives and three criteria are chosen. Pairwise comparisons and basic positional voting rules - plurality, anti-plurality and the Borda count - are applied. In the MCDM solution, criteria are weighted by giving a criterion more 'votes' the more important the decision-maker ranks it. A hypothetical example on evaluating properties of euthanasia consists of three alternatives A, B, and C, which are ranked according to three criteria - the patient’s willingness to cooperate, general action orientation (active/passive), and cost-effectiveness - with the criteria having weights 7, 5, and 4, respectively. Using the plurality rule and the weights given to the criteria, A is the best alternative, followed by B and C. In pairwise comparisons, both B and C defeat A by a weight score of 9 to 7, while B is defeated by C by 11 to 5. Thus, C (the so-called Condorcet winner) defeats both A and B. The best alternative under the plurality principle is not necessarily the best in the pairwise sense, and the conflict remains unsolved with or without additional weights. Positional rules are sensitive to variations in the alternative set. In the example above, the plurality rule gives the ranking ABC. If we leave out C, the plurality ranking between A and B is BA. Withdrawing B or A, the ranking is CA or CB, respectively. In pairwise comparisons an analogous problem emerges when the number of criteria is varied. Cyclic preferences may lead to a total tie, and no (rational) choice between the alternatives can be made. In conclusion, the choice of the best commitment to re-evaluate euthanasia, with the criteria left unchanged, depends entirely on the evaluation method used. The right strategies matter, too. Future studies might concern the problem of abstention - a situation where some voters do not vote and their best candidate may still win - or, vice versa, where actively giving the ballot to the first-ranked choice might lead to a total loss. In MCDM terms, a decision might occur where some central criteria are not actively involved in the best choice made.
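The hypothetical example can be reproduced with a few lines of Python. The per-criterion rankings used below are an assumption reconstructed to be consistent with the outcomes stated in the abstract (A wins the weighted plurality; B and C each beat A 9-7 pairwise; C beats B 11-5); the abstract itself does not list them explicitly.

```python
from itertools import combinations

# Criterion weights from the abstract: patient's willingness (7),
# action orientation (5), cost-effectiveness (4). The rankings are
# an assumed reconstruction consistent with the reported results.
profile = [
    (7, ['A', 'C', 'B']),   # willingness to cooperate
    (5, ['B', 'C', 'A']),   # action orientation
    (4, ['C', 'B', 'A']),   # cost-effectiveness
]
alternatives = ['A', 'B', 'C']

# Weighted plurality: each criterion gives its full weight to its top choice.
plurality = {a: 0 for a in alternatives}
for weight, ranking in profile:
    plurality[ranking[0]] += weight
print('plurality scores:', plurality)   # A=7, B=5, C=4 -> A wins

# Weighted pairwise (Condorcet) comparisons.
for x, y in combinations(alternatives, 2):
    x_score = sum(w for w, r in profile if r.index(x) < r.index(y))
    y_score = sum(w for w, r in profile if r.index(y) < r.index(x))
    print(f'{x} vs {y}: {x_score}-{y_score}')
# A vs B: 7-9, A vs C: 7-9, B vs C: 5-11 -> C is the Condorcet winner
```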

Keywords: medical ethics, euthanasia, voting methods, multicriteria decision-making

Procedia PDF Downloads 141
739 Concrete Compressive Strengths of Major Existing Buildings in Kuwait

Authors: Zafer Sakka, Husain Al-Khaiat

Abstract:

Due to social and economic considerations, owners all over the world desire to keep and use existing structures, including aging ones. However, these structures, especially valuable ones, need accurate condition assessment and proper safety evaluation. More than half of the budget spent on construction activities in developed countries is related to the repair and maintenance of reinforced concrete (R/C) structures. Periodic evaluation and assessment of relatively old concrete structures are therefore vital and imperative. If the evaluation and assessment of the structural components of a particular aging R/C structure reveal that repairs are essential, these repairs should not be delayed. Delaying repairs risks losing the serviceability of the whole structure and/or causing total failure and collapse, and the cost of maintenance will also skyrocket. It follows that the assessment of existing structures needs to receive more consideration from structural engineering societies and professionals. Ten major existing structures in Kuwait City that were constructed in the 1970s were assessed for structural reliability and integrity. Numerous concrete samples were extracted from the structural systems of the investigated buildings. This paper presents the results of the compressive strength tests conducted on the extracted cores. The results are compared between the buildings’ columns and beams and against the design strengths, and the collected data were statistically analyzed. The average compressive strengths of the concrete cores extracted from the ten buildings varied widely. The lowest average compressive strength for one of the buildings was 158 kg/cm²; this building was deemed unsafe and economically unfeasible to repair and was accordingly demolished. The other buildings had average compressive strengths in the range of 215-317 kg/cm². Poor construction practices were the main cause of the low strengths. Although most of the drawings and information for these buildings were lost during the invasion of Kuwait in 1990, the information gathered indicated that the design strengths of the beams and columns for most of these buildings were in the range of 280-400 kg/cm². Following the study, measures were taken to rehabilitate the buildings for safety. The mean compressive strength for all cores taken from beams and columns of the ten buildings was 256.7 kg/cm², with values ranging from 139 to 394 kg/cm². For columns, the mean was 250.4 kg/cm², and the values ranged from 137 to 394 kg/cm². The mean compressive strength for the beams was higher than that of the columns: 285.9 kg/cm², with a range of 181 to 383 kg/cm². In addition to the concrete cores extracted from the ten buildings, the 28-day compressive strengths of more than 24,660 concrete cubes were collected from a major ready-mixed concrete supplier in Kuwait. The data represented four different grades of ready-mix concrete (250, 300, 350, and 400 kg/cm²) produced between 2003 and 2018. The average concrete compressive strengths for these grades were found to be 318, 382, 453 and 504 kg/cm², respectively, and the coefficients of variation were 0.138, 0.140, 0.157 and 0.131, respectively.
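As a minimal illustration of the statistical summary reported above (means, ranges and coefficients of variation), the Python sketch below computes these quantities for a hypothetical set of core strengths; the numbers in the example are not the study's data.

```python
import statistics

def summarize(strengths_kg_cm2):
    """Mean, range and coefficient of variation of core compressive strengths."""
    mean = statistics.mean(strengths_kg_cm2)
    stdev = statistics.stdev(strengths_kg_cm2)
    return mean, min(strengths_kg_cm2), max(strengths_kg_cm2), stdev / mean

# Hypothetical core results for one building (kg/cm^2), for illustration only.
cores = [215, 246, 258, 271, 290, 305, 312]
mean, lo, hi, cov = summarize(cores)
print(f"mean={mean:.1f} kg/cm^2  range={lo}-{hi} kg/cm^2  CoV={cov:.3f}")
```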

Keywords: concrete compressive strength, concrete structures, existing building, statistical analysis

Procedia PDF Downloads 103
738 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems

Authors: Ramprasad Srinivasan

Abstract:

Engineers create inventions and put their ideas into concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness and stability are the important design drivers. A complex built-up structure is an assemblage of primitive structural forms of arbitrary shape, which include 1D structures like beams and frames, 2D structures like membranes, plates and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformation, frequencies and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels and helicopter and wind turbine rotor blades. The TWCB exhibits many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, heterogeneity, etc., which makes the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking (particularly in slender beams), lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC). For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing even to achieve an acceptable mesh (one satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, there is currently a need for separate (u/p) formulations to model incompressible materials, and a single unified formulation is missing in the literature. Hence the coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over other conventional methods shall be presented in this paper.

Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation

Procedia PDF Downloads 56
737 Strengthening by Assessment: A Case Study of Rail Bridges

Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas

Abstract:

The United Kingdom has one of the oldest railway networks in the world, dating back to 1825 when the world’s first passenger railway was opened. The network has some 40,000 bridges of various construction types using a wide range of materials including masonry, steel, cast iron, wrought iron, concrete and timber. It is commonly accepted that the successful operation of the network is vital for the economy of the United Kingdom; consequently, the cost-effective maintenance of the existing infrastructure is a high priority to maintain the operability of the network, prevent deterioration and extend the life of the assets. Every bridge on the railway network is required to be assessed every eighteen years, and a structured approach to assessments is adopted with three main types of progressively more detailed assessments. These assessment types include Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations) and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment, dictated to some extent by the relevant standards, which can lead to some structures not achieving the required load rating. In these situations, a Level 2 Assessment is often carried out using finite element analysis to uncover ‘latent strength’ and improve the load rating. If successful, the more sophisticated analysis can save on costly strengthening or replacement works and avoid disruption to the operational railway. This paper presents the ‘strengthening by assessment’ achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling and functions (material, geometric and support) to better understand buckling modes and the structural behaviour of historic construction details that are not specifically covered by assessment codes are outlined. Metallic bridges, which are susceptible to loss of section through corrosion, have the largest scope for improvement under the Level 2 Assessment methodology. Three case studies are presented, demonstrating the effectiveness of the sophisticated Level 2 Assessment methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 Assessments. One rail overbridge and two rail underbridges that did not achieve the required load rating by means of a Level 1 Assessment, due to the inadequate restraint provided by U-frame action, are examined, and the increase in assessed capacity given by the Level 2 Assessment is outlined.

Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening

Procedia PDF Downloads 295
736 Analyzing the Relationship between Physical Fitness and Academic Achievement in Chinese High School Students

Authors: Juan Li, Hui Tian, Min Wang

Abstract:

In China, under the considerable pressure of the 'Gaokao', the highly competitive college entrance examination, high school teachers and parents often worry that doing physical activity would take away the students’ precious study time and may have a negative impact on academic grades. There is a tendency to pursue high academic scores at the cost of physical exercise. Therefore, the purpose of this study was to examine the relationship between the physical fitness and academic achievement of Chinese high school students. The participants were 968 grade-one (N=457) and grade-two students (N=511), with an average age of 16 years, from three high schools of different levels in Beijing, China; 479 were boys and 489 were girls. One of the schools is a top high school in China, another is a key high school in Beijing, and the third is an ordinary high school. All analyses were weighted using SAS 9.4 to ensure the representativeness of the sample. The weights were based on 12 strata of schools, sex, and grades. Physical fitness data were collected using the scores of the National Physical Fitness Test, an annual official test administered by the Ministry of Education in China. It includes the 50 m run, sit-and-reach test, standing long jump, 1000 m run (for boys), 800 m run (for girls), pull-ups for 1 minute (for boys), and bent-knee sit-ups for 1 minute (for girls). The test is an overall evaluation of the students’ physical health on the major indexes of strength, endurance, flexibility, and cardiorespiratory function. Academic scores were obtained from the three schools with the students’ consent. The statistical analysis was conducted with SPSS 24. An independent-samples t-test was used to examine gender group differences, and Spearman’s rho bivariate correlation was adopted to test for associations between physical test results and academic performance. Statistical significance was set at p<.05. The study found that girls obtained higher fitness scores than boys (p=.000). The girls’ physical fitness test scores were positively associated with total academic grades (rs=.103, p=.029), English (rs=.096, p=.042), physics (rs=.202, p=.000) and chemistry scores (rs=.131, p=.009). No significant relationship was observed in boys. Cardiorespiratory fitness had a positive association with physics (rs=.196, p=.000) and biology scores (rs=.168, p=.023) in girls, and with the English score in boys (rs=.104, p=.029). A possible explanation for the greater association between physical fitness and academic achievement in girls rather than boys is that girls showed stronger motivation to achieve high scores in both academic and fitness tests. Being more driven by the test results, girls probably tended to invest more time and energy in training for the fitness test. Higher fitness levels were generally associated with an academic benefit among girls in Chinese high schools. Therefore, physical fitness needs to be given greater emphasis among Chinese adolescents, and gender differences need to be taken into consideration.
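A minimal sketch of the correlation step, assuming synthetic fitness and physics scores rather than the study's data (the original analysis was run in SPSS 24), is shown below using SciPy's Spearman rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical fitness-test and physics scores for 489 girls;
# the real study used National Physical Fitness Test scores and
# school-reported grades, not simulated values.
fitness = rng.normal(75, 8, size=489)
physics = 0.2 * fitness + rng.normal(60, 10, size=489)

# Spearman's rank correlation and its p-value.
rho, p_value = spearmanr(fitness, physics)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```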

Keywords: physical fitness, adolescents, academic achievement, high school

Procedia PDF Downloads 112
735 Effects of Implementing the Three-Level Quality Management System on the Construction Quality of Public Construction Projects in Taiwan

Authors: Hsin-Hung Lai, Wei Lo

Abstract:

The quality of a public construction project, whether good or poor, is one of the important indicators of national economic development and overall construction capability, and its impact on the quality of national life is profound. In recent years, a number of public construction scandals have occurred, and both government agencies and the public now demand stricter construction quality than ever; the three-level construction quality control system for public construction projects implemented by the government has therefore had a profound impact. This study reviews the evolution of the ISO 9000 quality control system and the differences between the way construction quality management is practised in many countries and the three-level quality control system of our country. We found that in foreign countries almost all initiatives to enhance construction quality are led by civil organizations, whereas in our country they are driven by the state: our three-level quality control system and audit mechanism were developed on the basis of the ISO system and implemented through legislation. We also explored the enhancement of, and relevance to, the construction quality of public construction projects brought about by the intervention of this system and of state power, and the audit results indeed show that construction quality has been enhanced. The three-level quality control system that our country promotes for public construction projects is broadly the same as the quality control systems of many developed countries; however, our country applies it to public construction projects only. We promote the three-level quality control system to enhance the quality of public construction projects and to establish an effective quality management system, so as to urge contractors to correct and prevent defects in quality management, whereas developed countries promote such systems comprehensively, for both public and private construction. This study is therefore limited in scope to public construction projects. Most important is the quality awareness of the executor: good or deteriorating quality is not a single event but follows a procedure that extends from demand and feasibility analysis, design, tendering, contracting, construction performance, inspection, continuous improvement, completion and acceptance, to transfer and meeting the needs of the users; all of these stages have a causal relationship, and quality is a systemic problem. The best construction quality can therefore be produced and managed at reasonable cost if it is approached with broad, preventive thinking. We aggregated the audit results of the past 10 years (2005 to 2015): the results for both central and local units showed a slight increase in A-grade ratings while those listed as B-grade decreased. Although the levels were not markedly upgraded, this result indicates that contractors’ conception of construction quality is improving and that construction quality is being established at the design stage, which is relatively beneficial to the enhancement of the construction quality of public construction projects overall.

Keywords: ISO 9000, three-level quality control system, audit and review mechanism for construction implementation, quality of construction implementation

Procedia PDF Downloads 326
734 Evaluation of Antarctic Bacteria as Potential Producers of Cellulolytic Enzymes of Industrial Interest

Authors: Claudio Lamilla, Andrés Santos, Vicente Llanquinao, Jocelyn Hermosilla, Leticia Barrientos

Abstract:

The industry in general is very interested in improving and optimizing industrial processes in order to reduce the costs involved in obtaining raw materials and in production. An interesting and cost-effective alternative is the incorporation of bioactive metabolites in such processes, an example being enzymes, which efficiently catalyze a large number of reactions of industrial and biotechnological interest. In the search for new sources of these active metabolites, Antarctica is one of the least explored places on our planet, where the most drastic conditions of cold, salinity, UVA-UVB radiation and limited liquid water are present. These features have shaped all life in this very harsh environment, especially the bacteria that live in the different Antarctic ecosystems, which have had to develop different strategies to adapt to these conditions, producing unique biochemical strategies. In this work, the production of cellulolytic enzymes by seven bacterial strains isolated from marine sediments at different sites in Antarctica was evaluated. Isolation of the strains was performed using serial dilutions in M1 culture medium at 15°C. The identification of the strains was performed using universal primers (27F and 1492R). The enzyme activity assays were performed on R2A medium, with carboxymethyl cellulose (CMC) added as substrate. Degradation of the substrate was revealed by adding Lugol's solution. The results show that four of the tested strains produce enzymes that degrade the CMC substrate. Molecular identification showed that these bacteria belong to the genera Streptomyces and Pseudoalteromonas, with the Streptomyces strain showing the highest activity. Only some bacteria in marine sediments have the ability to produce these enzymes, perhaps due to their greater ability to degrade, at temperatures bordering zero degrees Celsius, some algae that are abundant in this environment and have cellulose as their main structural component. The discovery of new cold-adapted enzymes is of great industrial interest, especially for the paper, textile, detergent, biofuel, food and agriculture industries. These enzymes represent 8% of worldwide industrial demand, and their demand is expected to increase in the coming years. They are required mainly in the paper and food industries, in processes for the extraction of starch, protein and juices, as well as in the animal feed industry, where treating vegetables and grains helps improve the nutritional value of the feed. All of this clearly positions Antarctic microorganisms, and specifically their enzymes, as a potential contribution to industry and to novel biotechnological applications.

Keywords: Antarctic, bacteria, biotechnological, cellulolytic enzymes

Procedia PDF Downloads 282
733 Estimation of Rock Strength from Diamond Drilling

Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi

Abstract:

The mining industry relies on estimates of rock strength at several stages of a mine's life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material's rock strength. Common laboratory tests such as the rod mill, ball mill and uniaxial compressive strength tests share shortcomings in terms of time, sample preparation, bias in plug selection, cost, repeatability, and the sample amount needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of rock strength from drilling data recorded while coring with a diamond core head. The work builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (polycrystalline diamond compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, i.e. the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which is found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate rock strength from field drilling data, considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study based on drilling data recorded while drilling an exploration well in Australia.
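As a rough illustration of the energy-per-unit-volume idea behind the intrinsic specific energy, the Python sketch below computes a first-order, Teale-type mechanical specific energy for a diamond core head from thrust, torque, rotation speed and rate of penetration. The bit dimensions and drilling parameters are assumed for illustration, and the sketch deliberately omits the frictional and bluntness corrections that distinguish the intrinsic specific energy in the authors' bit-rock interface model.

```python
import math

def mechanical_specific_energy(thrust_N, torque_Nm, rpm, rop_m_per_h,
                               outer_d_m, inner_d_m):
    """First-order mechanical specific energy (J/m^3) for a core head.

    E = F/A + 2*pi*N*T / (A * ROP), using the annular cutting area A of
    the diamond core head. This is the classic Teale-type estimate; the
    intrinsic specific energy in the bit-rock interface model additionally
    removes frictional and bluntness contributions, not modelled here.
    """
    area = math.pi / 4.0 * (outer_d_m**2 - inner_d_m**2)   # annular area, m^2
    rop = rop_m_per_h / 3600.0                             # m/s
    rotary_power = 2.0 * math.pi * (rpm / 60.0) * torque_Nm  # W
    return thrust_N / area + rotary_power / (area * rop)

# Illustrative (not measured) numbers for an NQ-size core head.
E = mechanical_specific_energy(thrust_N=5_000, torque_Nm=15, rpm=600,
                               rop_m_per_h=6.0, outer_d_m=0.0757,
                               inner_d_m=0.0476)
print(f"Mechanical specific energy ~ {E / 1e6:.0f} MJ/m^3")
```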

Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength

Procedia PDF Downloads 120
732 Prevalence, Antimicrobial Susceptibility Pattern and Public Health Significance of Staphylococcus aureus Isolated from Raw Red Meat at Butchery and Abattoir Houses in Mekelle, Northern Ethiopia

Authors: Haftay Abraha Tadesse

Abstract:

Background: Staphylococcus is a genus of worldwide-distributed bacteria associated with several infections at different sites in humans and animals. They are among the most important causes of infections associated with the consumption of contaminated food. Objective: The objective of this study was to isolate Staphylococcus aureus from raw meat at butchery and abattoir houses of Mekelle, Northern Ethiopia, and to determine its antimicrobial susceptibility patterns and public health significance. Methodology: A cross-sectional study was conducted from April to October 2019. Socio-demographic and public-health data were collected using a predesigned questionnaire. The raw meat samples were collected aseptically in the butchery and abattoir houses and transported in an ice box to Mekelle University, College of Veterinary Sciences, for isolation and identification of Staphylococcus aureus. Antimicrobial susceptibility was determined by the disc diffusion method. The data obtained were cleaned and entered into STATA 22.0, and a logistic regression model with odds ratios was used to assess the association of risk factors with bacterial contamination. A p-value < 0.05 was considered statistically significant. Results: In the present study, 88 out of 250 samples (35.2%) were found to be contaminated with Staphylococcus aureus. The positivity rate was 37.6% (n=47) in butchery houses and 32.8% (n=41) in abattoir houses. Among the assessed risk factors, not using gloves (AOR=0.222; 95% CI: 0.104-0.473), strict separation between clean and dirty areas (AOR=1.37; 95% CI: 0.66-2.86) and poor hand-washing habits (AOR=1.08; 95% CI: 0.35-3.35) were found to be associated with Staphylococcus aureus contamination. All thirty-seven Staphylococcus aureus isolates from butchery houses were sensitive (100%) to doxycycline, trimethoprim, gentamicin, sulphamethoxazole, amikacin, CN, co-trimoxazole and nitrofurantoin, whereas they showed resistance to cefotaxime (100%), ampicillin (87.5%), penicillin (75%), B (75%) and nalidixic acid (50%). On the other hand, all Staphylococcus aureus isolates from abattoir houses (n=10, 100%) were sensitive to chloramphenicol, gentamicin and nitrofurantoin, whereas they showed 100% resistance to penicillin, B, AMX, ceftriaxone, ampicillin and cefotaxime. The overall multi-drug resistance rate for Staphylococcus aureus was 90% in butchery houses and 100% in abattoir houses. Conclusion: Staphylococcus aureus was recovered from 35.2% of the raw meat samples collected from the butchery and abattoir houses. More has to be done to develop hand-washing behaviour and to ensure the availability of safe water in the butchery houses in order to reduce the burden of bacterial contamination. The results of the present study highlight the need to implement protective measures against food contamination and to consider alternative drug options. The development of antimicrobial resistance is nearly always a result of repeated therapeutic and/or indiscriminate use of antimicrobials. Regular antimicrobial sensitivity testing helps to select effective antibiotics and to reduce the problem of resistance developing towards commonly used antibiotics.
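A minimal sketch of the kind of logistic regression used to obtain adjusted odds ratios is given below in Python with statsmodels, using synthetic predictor and outcome data (the study itself analysed its questionnaire data in STATA); the variable names are hypothetical stand-ins for the risk factors above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 250

# Synthetic binary predictors standing in for the questionnaire variables
# (glove use, separation of clean/dirty areas, hand-washing habit).
df = pd.DataFrame({
    'glove_use': rng.integers(0, 2, n),
    'separation': rng.integers(0, 2, n),
    'poor_handwashing': rng.integers(0, 2, n),
})

# Synthetic contamination outcome generated from an assumed logit model,
# for illustration only.
logit = -0.6 - 1.5 * df['glove_use'] + 0.3 * df['poor_handwashing']
df['contaminated'] = rng.binomial(1, 1 / (1 + np.exp(-logit.to_numpy())))

# Fit the logistic regression and report adjusted odds ratios with 95% CIs.
X = sm.add_constant(df[['glove_use', 'separation', 'poor_handwashing']])
fit = sm.Logit(df['contaminated'], X).fit(disp=0)
aor = np.exp(fit.params).rename('AOR')
ci = np.exp(fit.conf_int()).rename(columns={0: '2.5%', 1: '97.5%'})
print(pd.concat([aor, ci], axis=1))
```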

Keywords: abattoir house, AMR, butchery house, S. aureus

Procedia PDF Downloads 74
731 Capital Market Reaction to Governance and Disclosure Violations: Evidence from the Saudi Arabian Capital Market

Authors: Nasser Alsadoun

Abstract:

Today's companies in the Saudi Arabian capital market must comply with strict criteria and adhere to rigid corporate governance rules and continuous disclosure requirements. Unlike other regulators in the region, the decision makers of the Capital Market Authority (hereafter CMA) of Saudi Arabia believe that announcing economic sanctions and penalties for non-compliant firms will foster more effective regulatory compliance and hence improve the quality of financial reporting. An implied argument put forward by opponents, however, is that such penalties are unnecessary and onerous for non-compliant firms. In recent years, the CMA has publicly announced several economic fines levied on listed companies for failing to comply with corporate governance and continuous disclosure regulation clauses, with the fine levied ranging between 50,000 SR and 100,000 SR for each failure. Economic theory suggests that rational investors make decisions based on a cost-benefit principle. The regulatory intervention made by the CMA through the announcement of economic sanctions has been costly to society (the economy) in the hope that it improves the transparency of financial statements. It is argued, therefore, that the threat of regulators and economic sanctions will give firms’ managers incentives to report more relevant and reliable accounting information, and the benefit of such announcements is likely to be reflected in the quality of the financial reports. Yet the economic consequences of the announced fines for non-compliant firms in the Saudi Arabian market have not been examined. This study therefore attempts to empirically examine whether market participants are pricing the supposed benefits of rigid governance and disclosure rules in the Saudi market. The study employs an event study methodology to assess the impact of CMA economic sanction announcements on the market price of non-compliant firms. The study also estimates and examines the bid-ask spread behaviour of violating firms around the CMA announcements. The findings indicate that the CMA fine announcements for failing to comply with governance and disclosure rules do not appear to play any significant role in securities pricing. In addition, tests of bid-ask behaviour do not indicate any significant increase in information asymmetry surrounding these announcements. While the CMA has developed many goals to increase the awareness of listed companies of best governance and disclosure practices, it seems it has to develop more goals to improve market efficiency and increase investor and public awareness.
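A minimal sketch of a market-model event study, assuming synthetic return series rather than Saudi market data, is shown below: model parameters are estimated over a pre-event window and cumulative abnormal returns are computed over the event window.

```python
import numpy as np

def market_model_car(stock_ret, market_ret, est_end, evt_start, evt_end):
    """Cumulative abnormal return around an announcement (market model).

    stock_ret, market_ret : aligned daily return series (1-D arrays)
    est_end               : end index (exclusive) of the estimation window
    evt_start, evt_end    : event-window indices (inclusive)
    """
    # Estimate alpha and beta on the pre-event estimation window.
    beta, alpha = np.polyfit(market_ret[:est_end], stock_ret[:est_end], 1)

    # Abnormal returns in the event window and their cumulative sum.
    expected = alpha + beta * market_ret[evt_start:evt_end + 1]
    abnormal = stock_ret[evt_start:evt_end + 1] - expected
    return abnormal.sum(), abnormal

# Synthetic daily returns for illustration (not Tadawul data).
rng = np.random.default_rng(7)
mkt = rng.normal(0.0003, 0.01, 160)
stk = 0.0001 + 1.1 * mkt + rng.normal(0, 0.012, 160)
car, ar = market_model_car(stk, mkt, est_end=150, evt_start=152, evt_end=158)
print(f"CAR over the event window: {car:.4f}")
```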

Keywords: governance and disclosure violations, financial reporting quality, regulatory intervention, market efficiency

Procedia PDF Downloads 293