Search results for: theoretical capacity model
1566 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
Time domain electromagnetic (TDEM), or transient electromagnetic (TEM), sounding is one of the common geophysical techniques for mapping subsurface geo-electrical structures, for hydro-geological research, and for engineering and environmental geophysics applications. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop. In general, data can be acquired with a large loop source in one of several configurations: with the receiver at the center point of the loop (central loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, compared to the in-loop and offset-loop systems, the central loop system (for ground surveys) and the coincident loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, and for mapping groundwater contaminated by hazardous waste and the thickness of the permafrost layer. Because no proper analytical expression exists for the TEM response of a large loop system over a layered earth model, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain.
Accordingly, the forward calculation scheme in NLSTCI was modified to compute frequency-domain responses with EMLCLLER before converting them to the time domain using Fourier cosine and/or sine transforms.
Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine
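The frequency-to-time conversion step can be sketched numerically. The snippet below is a minimal illustration (not the EMLCLLER/NLSTCI code): for a toy system with frequency response H(ω) = 1/(a + iω), whose transient is known analytically to be e⁻ᵃᵗ, a Fourier cosine transform of Re[H] recovers the time-domain response.

```python
import numpy as np

# Toy check of the frequency-to-time step: H(w) = 1/(a + i*w) has the known
# transient h(t) = exp(-a*t); the cosine transform of Re[H] must recover it.
a = 2.0                                   # decay constant (illustrative)
omega = np.linspace(0.0, 400.0, 200_001)  # frequency grid [rad/s]
H_re = a / (a**2 + omega**2)              # real part of H(w)

def trapezoid(y, x):
    """Plain trapezoidal rule, kept explicit for portability."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def to_time_domain(t):
    # h(t) = (2/pi) * integral_0^inf Re[H(w)] * cos(w*t) dw
    return (2.0 / np.pi) * trapezoid(H_re * np.cos(omega * t), omega)

t = 0.5
h_numeric = to_time_domain(t)
h_exact = float(np.exp(-a * t))
```

The same cosine-transform machinery, applied to a numerically computed layered-earth frequency response, is what yields the large-loop transient.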
Procedia PDF Downloads 93
1565 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities
Authors: Kung-Jen Tu, Danny Vernatha
Abstract:
To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing the 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within individual departments' facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency is evaluated. Through the building energy management system, the energy manager is able to (a) obtain a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (data mining is employed to analyze the relation between the space occupancy pattern and the current equipment settings to indicate an anomaly, such as when appliances turn on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical profiles.
The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can assist individual departments within universities in their energy management tasks.
Keywords: database, electricity sub-meters, energy anomaly detection, sensor
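The occupancy-based anomaly rule described in (c) can be sketched as a simple check. The log records and the standby threshold below are hypothetical, not values from the BIM-EMSS deployment.

```python
# Each record: (hour, occupied, measured_kW). All numbers are illustrative.
log = [
    (9,  True,  1.8),   # occupied, HVAC + lighting on: normal
    (13, True,  2.1),   # occupied: normal
    (19, False, 1.7),   # unoccupied but consumption high: anomaly
    (23, False, 0.05),  # unoccupied, standby draw only: normal
]

STANDBY_KW = 0.2  # assumed standby threshold for this room

def flag_anomalies(records, standby_kw=STANDBY_KW):
    """Return hours where appliances appear to be on without occupancy."""
    return [hour for hour, occupied, kw in records
            if not occupied and kw > standby_kw]

anomalous_hours = flag_anomalies(log)
```

In the actual system such rules would be mined from logged occupancy/consumption pairs rather than hard-coded.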
Procedia PDF Downloads 308
1564 Milling Simulations with a 3-DOF Flexible Planar Robot
Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden
Abstract:
Manufacturing technologies are becoming increasingly diversified. The growing use of robots for applications such as assembling, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur during machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator in order to find optimized cutting parameters, for example in terms of depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector, subjected to milling forces, is controlled through an inverse kinematics scheme, while the position of each joint is controlled separately. Each joint is actuated through a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results show the evolution of the cutting forces, with and without structural deformability, and the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented.
Accounting for link flexibility revealed an increase in cutting force magnitude. This proof of concept aims to enrich the database of robotic machining results for potential improvements in production.
Keywords: control, milling, multibody, robotic, simulation
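The joint-level control loop described above can be sketched in a minimal form. The parameters below (motor time constant, proportional gain, step command) are illustrative assumptions, not the DyStaMill/EasyDyn models: one joint is modeled as a first-order servomotor lag under proportional position control, integrated with explicit Euler to observe the tracking error.

```python
# Minimal single-joint tracking sketch (all parameters assumed).
TAU = 0.05    # assumed motor time constant [s]
KP = 20.0     # assumed proportional gain
DT = 1e-4     # integration step [s]
T_END = 0.5   # simulated time [s]

theta, rate = 0.0, 0.0
target = 1.0  # 1 rad step command
for _ in range(int(T_END / DT)):
    err = target - theta
    rate += DT * (KP * err - rate) / TAU  # first-order actuator dynamics
    theta += DT * rate                    # joint position integration

final_error = abs(target - theta)
```

In the paper's simulator each joint controller is tuned from the computed servomotor transfer function instead of an assumed gain, and the flexible-link dynamics are added on top.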
Procedia PDF Downloads 249
1563 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty
Authors: Ben Khayut, Lina Fabri, Maya Avikhana
Abstract:
The models of modern Artificial Narrow Intelligence (ANI) cannot: a) function independently and continuously without human intelligence, which is needed for retraining and reprogramming the ANI models, or b) think, understand, be conscious, cognize, or infer under uncertainty and under changes in situations and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a conception, model, and method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System (CPCSFSCBMSUU). It uses a neural network as its computational memory, operates under uncertainty, and activates its functions by perception, identification of real objects, fuzzy situational control, and forming images of these objects, modeling the psychological, linguistic, cognitive, and neural values of their properties and features; the meanings of these values are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images accumulated in the memory. The CPCSFSCBMSUU operates through its subsystems for: fuzzy situational control of all processes; computational perception; identification of reactions and actions; psycholinguistic cognitive fuzzy logical inference; decision making; reasoning; systems thinking; planning; awareness; consciousness; cognition; intuition; wisdom; analysis and processing of psycholinguistic, subject, visual, signal, sound, and other objects; accumulation and use of data, information, and knowledge in the memory; and communication and interaction with other computing systems, robots, and humans in order to solve joint tasks. To investigate the functional processes of the proposed system, the principles of situational control, fuzzy logic, psycholinguistics, informatics, and the modern possibilities of data science were applied.
The proposed self-controlled brain and mind system is intended for use as a plug-in in multilingual subject applications.
Keywords: computational brain, mind, psycholinguistic, system, under uncertainty
Procedia PDF Downloads 180
1562 The Political Economy of Green Trade in the Context of US-China Trade War: A Case Study of US Biofuels and Soybeans
Authors: Tonghua Li
Abstract:
Under the neoliberal corporate food regime, biofuels are a double-edged sword that exacerbates tensions between national food security and trade in green agricultural products. Biofuels have the potential to help achieve green sustainable development goals, but they threaten food security by exacerbating competition for land and changing global food trade patterns. The U.S.-China trade war complicates this debate. Under the influence of the different political and corporate coordination mechanisms in China and the US, trade disputes can have different impacts on sustainable agricultural practices. This paper develops an actor-centred 'network governance framework' focusing on trade in soybean and corn-based biofuels to explain how trade wars can change the actions of governmental and non-governmental actors in the context of oligopolistic competition and market concentration in agricultural trade. There is evidence that US-China trade decoupling exacerbates the conflict between national security, free trade in agriculture, and the realities and needs of green and sustainable energy development. The US government's trade policies reflect concerns about China's relative gains, leading to a loss of trade profits and making it impossible for the parties involved to balance the three objectives, leaving the biofuel and soybean industries in a dilemma. With national security and strategic interests prioritized, the government has displaced large agribusiness from its dominant position in the neoliberal food system, and the goal of environmental sustainability has been marginalized by high politics. In contrast, China faces tensions in the trade war between its food security self-sufficiency policy and liberal sustainable trade, but the state-capitalist model ensures policy coordination and coherence in trade diversion and supply chain adjustment.
Despite ongoing raw material shortages and technological challenges, China remains committed to playing a role in global environmental governance and promoting green trade objectives.
Keywords: food security, green trade, biofuels, soybeans, US-China trade war
Procedia PDF Downloads 10
1561 Transforming Data Science Curriculum Through Design Thinking
Authors: Samar Swaid
Abstract:
Today, corporations are moving toward the adoption of design-thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in design thinking, IDEO (Innovation, Design, Engineering Organization), defines design thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects, or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has been shown to be the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to re-thinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure creating the 'wow' effect for consumers. The Association for Computing Machinery task force on data science programs states that 'Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability.' However, this definition hides the user behind the machine who works on data preparation, algorithm selection, and model interpretation.
Thus, the data science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of design thinking and teaching modules for data science programs.
Keywords: data science, design thinking, AI, curriculum, transformation
Procedia PDF Downloads 82
1560 Determinants of Hospital Obstetric Unit Closures in the United States 2002-2013: Loss of Hospital Obstetric Care 2002-2013
Authors: Peiyin Hung, Katy Kozhimannil, Michelle Casey, Ira Moscovice
Abstract:
Background/Objective: The loss of obstetric services has been a pressing concern in urban and rural areas nationwide. This study aims to determine factors that contribute to the loss of obstetric care through closures of a hospital or obstetric unit. Methods: Data from the 2002-2013 American Hospital Association annual surveys were used to identify hospitals providing obstetric services. We linked these data to Medicare Healthcare Cost Report Information for hospital financial indicators, the US Census Bureau's American Community Survey for zip-code-level characteristics, and Area Health Resource Files for county-level clinician supply measures. A discrete-time multinomial logit model was used to determine factors contributing to obstetric unit or hospital closures. Results: Of 3,551 hospitals providing obstetric services during 2002-2013, 82% kept units open, 12% stopped providing obstetric services, and 6% closed down completely. State-level variations existed. Factors that significantly increased hospitals' probability of obstetric unit closure included an annual birth volume lower than 250 (adjusted marginal effect [95% confidence interval] = 34.1% [28%, 40%]), closer proximity to another hospital with obstetric services (per 10 miles: -1.5% [-2.4%, -0.5%]), being in a county with lower family physician supply (-7.8% [-15.0%, -0.6%]), being in a zip code with a higher percentage of non-white females (per 10%: 10.2% [2.1%, 18.3%]), and with lower income (per $1,000 income: -0.14% [-0.28%, -0.01%]). Conclusions: Over the past 12 years, the loss of obstetric services has disproportionately affected areas served by low-volume urban and rural hospitals, non-white and low-income communities, and counties with fewer family physicians, signaling a need to address maternity care access in these communities.
Keywords: access to care, obstetric care, service line discontinuation, hospital, obstetric unit closures
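A discrete-time multinomial logit of the kind used here assigns each hospital-year a probability over the three outcomes (stay open, close the obstetric unit, close entirely) via a softmax over linear predictors. The coefficients below are illustrative placeholders, not the study's estimates.

```python
import math

# Illustrative multinomial logit: outcome 0 = stay open (reference),
# 1 = close obstetric unit, 2 = close hospital. All coefficients assumed.
def mnl_probs(x_low_volume, x_dist10,
              base=(-2.0, -3.0),        # intercepts for outcomes 1 and 2
              b_volume=(1.5, 0.8),      # effect of <250 annual births
              b_dist=(-0.2, -0.1)):     # effect per 10 miles to next OB hospital
    etas = [0.0]  # reference category
    for b0, bv, bd in zip(base, b_volume, b_dist):
        etas.append(b0 + bv * x_low_volume + bd * x_dist10)
    z = [math.exp(e) for e in etas]     # softmax over the three outcomes
    s = sum(z)
    return [v / s for v in z]

p_open, p_unit_close, p_hosp_close = mnl_probs(x_low_volume=1, x_dist10=0.5)
```

Marginal effects like those reported above are then obtained by differencing these probabilities as one covariate changes.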
Procedia PDF Downloads 222
1559 Characterization of the Groundwater Aquifers at El Sadat City by Joint Inversion of VES and TEM Data
Authors: Usama Massoud, Abeer A. Kenawy, El-Said A. Ragab, Abbas M. Abbas, Heba M. El-Kosery
Abstract:
Vertical Electrical Sounding (VES) and Transient Electromagnetic (TEM) surveys have been applied to characterize the groundwater aquifers at the El Sadat industrial area. El Sadat city is one of the most important industrial cities in Egypt. It was constructed more than three decades ago, about 80 km northwest of Cairo along the Cairo-Alexandria desert road. Groundwater is the main source of the water supplies required for domestic, municipal, and industrial activities in this area due to the lack of surface water sources, so it is important to maintain this vital resource in order to sustain the development plans of this city. In this study, VES and TEM data were measured at the same 24 stations along three profiles trending NE-SW with the elongation of the study area. The measuring points were arranged in a grid-like pattern with both an inter-station spacing and a line-to-line distance of about 2 km. After performing the necessary processing steps, the VES and TEM data sets were inverted individually to multi-layer models, followed by a joint inversion of both data sets. The joint inversion process succeeded in overcoming the model-equivalence problem encountered in the inversion of the individual data sets. The joint models were then used to construct a number of cross sections and contour maps showing the lateral and vertical distribution of the geo-electrical parameters in the subsurface medium. Interpretation of the obtained results and correlation with the available geological and hydrogeological information revealed two aquifer systems in the area. The shallow Pleistocene aquifer consists of sand and gravel saturated with fresh water and exhibits a large thickness exceeding 200 m. The deep Pliocene aquifer is composed of clay and sand and shows low resistivity values.
The water-bearing layer of the Pleistocene aquifer and the upper surface of the Pliocene aquifer are continuous, and no structural features cut this continuity across the investigated area.
Keywords: El Sadat city, joint inversion, VES, TEM
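The model-equivalence point can be illustrated with a toy linear joint inversion: a VES-like operator that only constrains a sum of model parameters is non-unique on its own, but stacking an independent TEM-like constraint into one least-squares system resolves the model. The operators below are illustrative, not real VES/TEM kernels.

```python
import numpy as np

m_true = np.array([10.0, 30.0])   # toy two-layer model parameters

# Toy linear forward operators (illustrative, not real sounding kernels)
G_ves = np.array([[1.0, 1.0],
                  [2.0, 2.0]])    # VES alone only sees m1 + m2 (equivalence)
G_tem = np.array([[1.0, -1.0]])   # TEM adds an independent constraint

d_ves = G_ves @ m_true
d_tem = G_tem @ m_true

# Individual inversion: rank-deficient, lstsq returns the minimum-norm
# member of the equivalent-model family, not the true model.
m_ves_only, *_ = np.linalg.lstsq(G_ves, d_ves, rcond=None)

# Joint inversion: stack both data sets into one least-squares system.
G = np.vstack([G_ves, G_tem])
d = np.concatenate([d_ves, d_tem])
m_joint, *_ = np.linalg.lstsq(G, d, rcond=None)
```

Real joint schemes solve a nonlinear, regularized version of the same stacked problem, but the equivalence-breaking mechanism is the same.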
Procedia PDF Downloads 370
1558 Understanding the Information in Principal Component Analysis of Raman Spectroscopic Data during Healing of Subcritical Calvarial Defects
Authors: Rafay Ahmed, Condon Lau
Abstract:
Bone healing is a complex and sequential process involving changes at the molecular level. Raman spectroscopy is a promising technique to study bone mineral and matrix environments simultaneously. In this study, subcritical calvarial defects are used to study bone composition during healing without disturbing the fracture. The model allowed monitoring of the natural healing of bone while avoiding mechanical harm to the callus. Calvarial defects were created using a 1 mm burr drill in the parietal bones of Sprague-Dawley rats (n=8) and served as in vivo defects. After 7 days, the skulls were harvested after euthanasia. One additional defect per sample was created on the opposite parietal bone using the same calvarial defect procedure to serve as a control defect. Raman spectroscopy (785 nm) was used to investigate bone parameters on three different skull surfaces: in vivo defects, control defects, and normal surface. Principal component analysis (PCA) was utilized for the analysis and interpretation of the Raman spectra and helped in the classification of groups. PCA was able to distinguish in vivo defects from the normal surface and control defects. PC1 shows the major variation at 958 cm⁻¹, which corresponds to the ʋ1 phosphate mineral band. PC2 shows the major variation at 1448 cm⁻¹, which is the characteristic band of CH2 deformation and corresponds to collagen. The Raman parameters, namely the mineral-to-matrix ratio and crystallinity, were found to be significantly decreased in the in vivo defects compared to the normal surface and controls. Scanning electron microscope and optical microscope images show the formation of newly generated matrix by means of bony bridges of collagen. An optical profiler shows that surface roughness increased by 30% from controls to in vivo defects after 7 days. These results agree with the Raman assessment parameters and confirm new collagen formation during healing.
Keywords: Raman spectroscopy, principal component analysis, calvarial defects, tissue characterization
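The PCA step can be sketched on synthetic spectra: two groups of toy Raman spectra differing only in the 958 cm⁻¹ phosphate band are mean-centered and decomposed by SVD, so the first loading vector peaks at the varying band and the PC1 scores separate the groups, mirroring how PC1 captured the mineral variation here. All band positions, amplitudes, and noise levels below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = np.linspace(800, 1800, 200)   # cm^-1 grid (synthetic)

def spectrum(mineral, collagen):
    """Toy Raman spectrum: phosphate band near 958, CH2 band near 1448 cm^-1."""
    s = mineral * np.exp(-0.5 * ((wavenumbers - 958) / 15) ** 2)
    s += collagen * np.exp(-0.5 * ((wavenumbers - 1448) / 20) ** 2)
    return s + 0.01 * rng.standard_normal(wavenumbers.size)

# Two groups: control-like (high mineral) vs defect-like (low mineral)
X = np.array([spectrum(1.0, 0.5) for _ in range(10)] +
             [spectrum(0.4, 0.5) for _ in range(10)])

Xc = X - X.mean(axis=0)                  # mean-center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                           # sample scores on each PC
pc1_peak = wavenumbers[np.abs(Vt[0]).argmax()]  # band driving PC1
```

Because only the mineral band varies between groups, PC1 loads on the phosphate band, which is the same reasoning used to interpret PC1 in the study.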
Procedia PDF Downloads 223
1557 Investigation of Aerodynamic and Design Features of Twisting Tall Buildings
Authors: Sinan Bilgen, Bekir Ozer Ay, Nilay Sezer Uzol
Abstract:
After decades of conventional shapes, irregular forms with complex geometries are becoming more popular for form generation of tall buildings all over the world. This trend has recently brought out diverse building forms such as twisting tall buildings. This study investigates both the aerodynamic and design features of twisting tall buildings through comparative analyses. Since twisting a tall building gives rise to additional complexities related to the form and structural system, lateral load effects become of greater importance for these buildings. The aim of this study is to analyze the inherent characteristics of these iconic forms by comparing the wind loads on twisting tall buildings with those on their prismatic twins. Through a case study, aerodynamic analyses of an existing twisting tall building and its prismatic counterpart were performed and the results compared. The prismatic twin of the original building was generated by removing the progressive rotation of its floors while keeping the same plan area and story height. The performance-based measures under investigation have been evaluated in conjunction with the architectural design. Aerodynamic effects have been analyzed by both wind tunnel tests and computational methods. High-frequency base balance tests and pressure measurements on 3D models were performed to evaluate wind load effects on a global and local scale. Comparisons of flat and real surface models were conducted to further evaluate the effects of the twisting form without the contribution of the facade texture. The comparisons highlighted that the twisting form under investigation shows better aerodynamic behavior in the along-wind direction and particularly in the across-wind direction. Compared to the prismatic counterpart, the twisting model is superior in reducing the vortex-shedding dynamic response by disorganizing the wind vortices.
Consequently, despite the difficulties arising from the inherent complexity of twisted forms, they can still be feasible and viable, with their attractive images, in the realm of tall buildings.
Keywords: aerodynamic tests, motivation for twisting, tall buildings, twisted forms, wind excitation
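The across-wind vortex-shedding excitation discussed above is commonly characterized by the Strouhal relation f = St·U/B. A minimal sketch follows, with an assumed Strouhal number, building width, and natural frequency (none taken from the study) to illustrate the resonance check behind the across-wind response.

```python
# Illustrative vortex-shedding frequency estimate (all values assumed).
ST = 0.12   # Strouhal number typical of a bluff square section (assumed)
B = 40.0    # characteristic building width [m] (assumed)

def shedding_frequency(wind_speed_ms, st=ST, width=B):
    """f = St * U / B: frequency at which vortices shed from the sides."""
    return st * wind_speed_ms / width

f = shedding_frequency(30.0)  # Hz at an assumed 30 m/s design wind
natural_f = 0.15              # assumed first sway-mode frequency [Hz]
margin = abs(f - natural_f) / natural_f  # separation from resonance
```

A twisting form disrupts coherent shedding along the height, which is why no single f dominates and the across-wind response drops relative to the prism.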
Procedia PDF Downloads 236
1556 Dust Particle Removal from Air in a Self-Priming Submerged Venturi Scrubber
Authors: Manisha Bal, Remya Chinnamma Jose, B.C. Meikap
Abstract:
Dust particles suspended in air are a major source of air pollution. A self-priming submerged venturi scrubber, proven very effective in handling nuclear power plant accidents, is an efficient device to remove dust particles from the air and thus aids in pollution control. Venturi scrubbers are compact, have a simple mode of operation and no moving parts, are easy to install and maintain compared to other pollution control devices, and can handle high temperatures and corrosive and flammable gases and dust particles. In the present paper, fly ash, a major air pollutant emitted mostly from thermal power plants, is considered as the dust particle. Exposure to it through skin contact, inhalation, and ingestion can lead to health risks and in severe cases can even lead to lung cancer. The main focus of this study is the removal of fly ash particles from polluted air using a self-priming venturi scrubber in submerged conditions with water as the scrubbing liquid. The venturi scrubber, comprising three sections (converging section, throat, and diverging section), is submerged inside a water tank. The liquid enters the throat due to the pressure difference composed of the hydrostatic pressure of the liquid and the static pressure of the gas. The high-velocity dust-laden gas atomizes the liquid into droplets at the throat, and this interaction leads to the absorption of fly ash into the water and thus its removal from the air. A detailed investigation of the scrubbing of fly ash has been carried out in this study. Experiments were conducted at different throat gas velocities, water levels, and fly ash inlet concentrations to study the fly ash removal efficiency. From the experimental results, the highest fly ash removal efficiency of 99.78% was achieved at a throat gas velocity of 58 m/s and a water level of 0.77 m, with a fly ash inlet concentration of 0.3x10⁻³ kg/Nm³ in the submerged condition.
The effects of throat gas velocity, water level, and fly ash inlet concentration on the removal efficiency have also been evaluated. Furthermore, the experimental removal efficiencies are validated against the developed empirical model.
Keywords: dust particles, fly ash, pollution control, self-priming venturi scrubber
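The removal efficiency reported above is the standard inlet/outlet concentration ratio. The sketch below back-calculates the outlet concentration from the reported 99.78% efficiency as a consistency check, rather than introducing new data.

```python
def removal_efficiency(c_in, c_out):
    """Fractional dust removal across the scrubber: (C_in - C_out) / C_in."""
    return (c_in - c_out) / c_in

# Inlet concentration from the study; outlet back-calculated from the
# reported 99.78 % efficiency (a consistency check, not a measurement).
c_in = 0.3e-3                 # kg/Nm^3
eta = 0.9978
c_out = c_in * (1.0 - eta)    # implied outlet concentration
eff = removal_efficiency(c_in, c_out)
```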
Procedia PDF Downloads 165
1555 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy
Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu
Abstract:
Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and improved predictions are achievable by using spectra collected from flour samples rather than whole grains. However, the feasibility of determining the critical biochemicals relevant to classification for food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS, and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components, in order to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 sorghum hybrids were selected from two locations in China. Using the NIR spectra and the wet-chemistry biochemical measurements, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. Nevertheless, using the spectra of whole grains still enabled comparable predictions, which are recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data.
Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR
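The PLSR step can be sketched with a minimal one-component PLS1 (NIPALS-style) fit in plain NumPy on synthetic spectra, where a single absorbing band scales with a wet-chemistry reference value. The band position, noise level, and sample sizes are all synthetic assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 50
tannin = rng.uniform(0.5, 2.0, n)  # synthetic wet-chemistry reference values
signature = np.exp(-0.5 * ((np.arange(p) - 25) / 4) ** 2)  # absorbing band
X = np.outer(tannin, signature) + 0.01 * rng.standard_normal((n, p))

# One-component PLS1, fitted on mean-centered data
Xc = X - X.mean(axis=0)
yc = tannin - tannin.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)           # weight vector (direction of max covariance)
t = Xc @ w                       # latent scores
b = (yc @ t) / (t @ t)           # inner regression coefficient
y_hat = tannin.mean() + t * b    # predicted reference values
r2 = 1 - np.sum((tannin - y_hat) ** 2) / np.sum((tannin - tannin.mean()) ** 2)
```

Practical calibrations use several latent components chosen by cross-validation; one component suffices here because the synthetic spectra have a single varying band.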
Procedia PDF Downloads 69
1554 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow
Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam
Abstract:
Studies on highway safety are becoming the need of the hour, as over 400 lives are lost every day in India due to road crashes. In order to evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and their relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on the severity of road crashes. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the horizontal alignment of the highway (straight or curved section); time of day; driveway density; presence of a median; median openings; gradient; operating speed; and annual average daily traffic. These variables were selected after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and also negative binomial regression. The output from the statistical models showed that the horizontal alignment components of the highway, driveway density, time of day, operating speed, and annual average daily traffic have a significant relation with the severity of crashes, i.e., fatal as well as injury crashes. Further, annual average daily traffic has a significant effect on severity compared to the other variables. The contribution of the highway's horizontal components to crash severity is also significant. Logit models can predict crashes better than negative binomial regression models.
The results of the study will help transport planners to consider these aspects at the planning stage itself for highways operated under heterogeneous traffic flow conditions.
Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety
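A logit model of the kind used here maps a linear combination of highway and traffic variables to a crash-severity probability. The coefficients below are illustrative placeholders, not the fitted values from this study.

```python
import math

# Illustrative (not fitted) logit for P(severe crash):
# eta = b0 + b1*curve + b2*AADT(in 10,000 veh/day) + b3*driveways_per_km
def p_severe(curve, aadt_10k, driveways_km,
             b0=-2.5, b1=0.6, b2=0.4, b3=0.05):
    eta = b0 + b1 * curve + b2 * aadt_10k + b3 * driveways_km
    return 1.0 / (1.0 + math.exp(-eta))  # logistic link

# Same traffic and access conditions, tangent vs curved alignment:
p_tangent = p_severe(curve=0, aadt_10k=2.0, driveways_km=5)
p_curve = p_severe(curve=1, aadt_10k=2.0, driveways_km=5)
```

With a positive curve coefficient, the model reproduces the qualitative finding that horizontal alignment raises severity risk; the actual magnitudes come from the fitted model.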
Procedia PDF Downloads 305
1553 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity
Authors: Shivdayal Patel, Suhail Ahmad
Abstract:
Safety assurance and failure prediction for the composite material components of an offshore structure under low velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and with the load due to an impact. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. The probability of occurrence of such a scenario is governed by the large uncertainties arising in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon due to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates further into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT). The limit state function g(x) is established from the lamina stresses, with g(x) > 0 denoting the safe state. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. A chain of failure events due to different modes of failure is considered to estimate the consequences of a failure scenario.
Frequencies of occurrence of specific impact hazards yield the expected risk due to economic loss.
Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling
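The limit-state formulation above (safe while g(x) > 0) lends itself to a compact sketch. The snippet below replaces the paper's Gaussian-process response surface with a plain Monte Carlo estimate; the strength and stress distributions and their parameters are illustrative assumptions, not values from the study:

```python
import random

def probability_of_failure(n_samples=100_000, seed=42):
    """Monte Carlo estimate of P(g(X) <= 0) for the limit state
    g = strength - stress, with both inputs normally distributed.
    Distribution parameters are assumed for illustration only."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(450.0, 30.0)  # lamina strength, MPa (assumed)
        stress = rng.gauss(300.0, 45.0)    # impact-induced stress, MPa (assumed)
        if strength - stress <= 0:         # g(x) <= 0 marks failure
            failures += 1
    return failures / n_samples

pf = probability_of_failure()
```

A surrogate such as a Gaussian-process response surface becomes attractive when each evaluation of g(x) requires a full FE impact simulation rather than a cheap random draw as here.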
Procedia PDF Downloads 279
1552 Design and Analysis for a 4-Stage Crash Energy Management System for Railway Vehicles
Authors: Ziwen Fang, Jianran Wang, Hongtao Liu, Weiguo Kong, Kefei Wang, Qi Luo, Haifeng Hong
Abstract:
A 4-stage crash energy management (CEM) system for subway rail vehicles used by the Massachusetts Bay Transportation Authority (MBTA) in the USA is developed in this paper. The 4 stages of this new CEM system are: 1) an energy-absorbing coupler (draft gear and shear bolts), 2) primary energy absorbers (aluminum honeycomb structured boxes), 3) secondary energy absorbers (crush tubes), and 4) the collision post and corner post. A sliding anti-climber and a fixed anti-climber are designed at the front of the vehicle, cooperating with the 4-stage CEM to maximize the energy absorbed and minimize harm to passengers and crew. To investigate the effectiveness of this CEM system, both finite element (FE) methods and a crashworthiness test have been employed. The whole vehicle consists of 3 married pairs, i.e., six cars. In the FE approach, full-scale railway car models are developed and different collision cases are investigated, such as a single moving car impacting a rigid wall, two moving cars into a rigid wall, two moving cars into two stationary cars, and six moving cars into six stationary cars. The FE analysis results show that the railway vehicle incorporating this CEM system has superior crashworthiness performance. In the crashworthiness test, a simplified vehicle front end including the sliding anti-climber, the fixed anti-climber, the primary energy absorbers, the secondary energy absorber, the collision post, and the corner post is built and impacted against a rigid wall. The same test model is also analyzed in the FE framework, and results such as crushing force, stress and strain of critical components, and acceleration and velocity curves are compared and studied. The FE results show very good agreement with the test results.
Keywords: railway vehicle collision, crash energy management design, finite element method, crashworthiness test
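The energy budget of a staged CEM system like the one above is typically assessed by integrating each stage's crush force over its stroke. A minimal sketch with a hypothetical force-displacement curve (not MBTA test data):

```python
def absorbed_energy(force_kN, stroke_m):
    """Trapezoidal integral of a crush force-displacement curve -> energy in kJ."""
    e = 0.0
    for i in range(1, len(stroke_m)):
        e += 0.5 * (force_kN[i] + force_kN[i - 1]) * (stroke_m[i] - stroke_m[i - 1])
    return e

# Illustrative curve: 0.3 m of crush at a roughly 800 kN plateau (assumed values).
stroke = [0.0, 0.05, 0.1, 0.2, 0.3]
force = [0.0, 800.0, 820.0, 790.0, 810.0]
energy = absorbed_energy(force, stroke)  # kJ
```

Summing such integrals across the coupler, primary, and secondary stages gives the total energy the CEM can dissipate before the collision and corner posts are engaged.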
Procedia PDF Downloads 404
1551 A New Co(II) Metal Complex Template with 4-dimethylaminopyridine Organic Cation: Structural, Hirshfeld Surface, Phase Transition, Electrical Study and Dielectric Behavior
Authors: Mohamed dammak
Abstract:
Great attention has been paid to the design and synthesis of novel organic-inorganic compounds in recent decades because of their structural variety and the large diversity of atomic arrangements. In this work, the structure of the novel dimethylaminopyridinium tetrachlorocobaltate (C₇H₁₁N₂)₂CoCl₄, prepared by the slow evaporation method at room temperature, is discussed. The X-ray diffraction results indicate that the hybrid material has a triclinic structure with a P space group and features a 0D structure containing isolated distorted [CoCl₄]²⁻ tetrahedra interposed between [C₇H₁₁N₂]⁺ cations, forming planes perpendicular to the c axis at z = 0 and z = ½. Cohesion between the cationic planes and the isolated [CoCl₄]²⁻ tetrahedra is ensured by N-H⋯Cl and C-H⋯Cl hydrogen-bonding contacts, which depend on the synthesis conditions and the reactants used. Hirshfeld surface analysis helps to assess the strength of the hydrogen bonds and to quantify the intermolecular contacts. A phase transition was discovered by thermal analysis at 390 K, and comprehensive dielectric research is reported, showing good agreement with the thermal data. Impedance spectroscopy measurements were used to study the electrical and dielectric characteristics over a wide range of frequencies and temperatures, 40 Hz-10 MHz and 313-483 K, respectively. The Nyquist plot (Z″ versus Z′) from the complex impedance spectrum revealed semicircular arcs described by a Cole-Cole model. An equivalent electrical circuit consisting of a link of grain and grain-boundary elements is employed. The real and imaginary parts of the dielectric permittivity, as well as tan(δ), of (C₇H₁₁N₂)₂CoCl₄ at different frequencies reveal a distribution of relaxation times. The presence of grain and grain-boundary contributions is confirmed by the modulus investigations.
Electric and dielectric analyses highlight the good protonic conduction of this material.
Keywords: organic-inorganic, phase transitions, complex impedance, protonic conduction, dielectric analysis
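The Cole-Cole description of the depressed semicircular arcs mentioned above can be evaluated directly; the parameter values below are illustrative, not the fitted grain/grain-boundary values from the study:

```python
import math

def cole_cole_Z(freq_hz, R_inf, R0, tau, alpha):
    """Cole-Cole impedance, commonly used to describe depressed semicircles
    in Nyquist (Z'' vs Z') plots: Z = R_inf + (R0 - R_inf)/(1 + (j*w*tau)^(1-alpha))."""
    w = 2 * math.pi * freq_hz
    return R_inf + (R0 - R_inf) / (1 + (1j * w * tau) ** (1 - alpha))

# Illustrative grain response at 1 kHz (parameter values assumed, not fitted data).
Z = cole_cole_Z(1e3, R_inf=50.0, R0=5_000.0, tau=1e-4, alpha=0.1)
```

As a sanity check, the model tends to R0 at low frequency and to R_inf at high frequency, with a capacitive (negative) imaginary part in between; alpha = 0 recovers an ideal Debye semicircle.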
Procedia PDF Downloads 85
1550 Contrasted Mean and Median Models in Egyptian Stock Markets
Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid
Abstract:
Emerging market return distributions have shown significant departures from normality: they are characterized by fatter tails relative to the normal distribution and exhibit levels of skewness and kurtosis that constitute a significant departure from normality. Therefore, the classical Markowitz mean-variance model is not applicable to emerging markets, since it assumes normally distributed returns (with zero skewness and excess kurtosis) and a quadratic utility function. Moreover, Markowitz mean-variance analysis can be used in cases of moderate non-normality, where it still provides a good approximation of the expected utility, but it may be ineffective under large departures from normality. Higher-moment models and median models have been suggested in the literature for asset allocation in this case. Higher-moment models were introduced to account for the insufficiency of describing a portfolio by only its first two moments, while the median model was introduced as a robust statistic that is less affected by outliers than the mean. Tail risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced instead of variance to capture the effect of risk. In this research, higher-moment models including Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK) are formulated as single-objective non-linear programming (NLP) problems, and median models including Median-Value-at-Risk (MedVaR) and Median-Mean-Absolute-Deviation (MedMAD) are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher-moment models and median models are compared to some benchmark portfolios and tested on real financial data from the Egyptian main index EGX30. The results show that all the median models outperform the higher-moment models, in that they provide higher final wealth for the investor over the entire period of study.
In addition, the results confirm the inapplicability of the classical Markowitz mean-variance model to the Egyptian stock market, as it resulted in very low realized profits.
Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming
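The objective of an MVS-type model trades off the portfolio's first three moments. The paper formulates this as an NLP solved with an optimizer; the sketch below instead uses a coarse grid search over long-only weights for three assets, with assumed preference weights on variance and skewness and synthetic returns (not EGX30 data):

```python
import statistics

def portfolio_moments(weights, returns):
    """Mean, variance and skewness of portfolio returns for given weights."""
    port = [sum(w * r for w, r in zip(weights, row)) for row in returns]
    mu = statistics.mean(port)
    var = statistics.pvariance(port)
    sd = var ** 0.5
    skew = (sum((x - mu) ** 3 for x in port) / len(port)) / sd ** 3 if sd > 0 else 0.0
    return mu, var, skew

def best_mvs_weights(returns, step=0.1):
    """Coarse grid search maximizing mean - variance + 0.5 * skewness over the
    simplex of long-only weights (preference coefficients assumed for illustration)."""
    n = int(round(1.0 / step))
    best, best_w = float("-inf"), None
    for i in range(n + 1):
        for j in range(n + 1 - i):
            w = (i * step, j * step, 1.0 - (i + j) * step)
            mu, var, skew = portfolio_moments(w, returns)
            score = mu - var + 0.5 * skew
            if score > best:
                best, best_w = score, w
    return best_w

# Toy monthly returns for three assets (synthetic illustration).
returns = [
    [0.010, 0.020, -0.010],
    [0.030, -0.010, 0.000],
    [-0.020, 0.010, 0.020],
    [0.020, 0.000, 0.010],
    [0.005, 0.015, -0.005],
]
w_star = best_mvs_weights(returns)
```

A real MVS/MVSK formulation would hand the same objective to an NLP solver, and the median models would replace the mean with the portfolio median inside a MILP.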
Procedia PDF Downloads 315
1549 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. 
Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between ±2% and ±33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
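The balance underlying methodologies of this kind — conduction through the wall equated to convective plus radiative exchange at the exterior surface — can be sketched as follows; the convection coefficient and emissivity below are assumed typical values, not the study's field measurements:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(T_surface, T_reflected, T_in, T_out, emissivity=0.9, h_c=3.8):
    """One common thermographic U-value estimate: radiative + convective heat
    flux at the exterior surface divided by the inside-outside temperature
    difference. All temperatures in kelvin; h_c and emissivity are assumed."""
    radiative = emissivity * SIGMA * (T_surface ** 4 - T_reflected ** 4)
    convective = h_c * (T_surface - T_out)
    return (radiative + convective) / (T_in - T_out)

# Winter example: exterior surface 2 K warmer than -10 C outdoor air, interior at 21 C.
U = u_value(T_surface=265.15, T_reflected=263.15, T_in=294.15, T_out=263.15)
```

The sensitivity of this estimate to h_c and emissivity is one reason the field-derived values in the study deviated from the nominal U-value by up to about a third.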
Procedia PDF Downloads 313
1548 Physical Activity Self-Efficacy among Pregnant Women with High Risk for Gestational Diabetes Mellitus: A Cross-Sectional Study
Authors: Xiao Yang, Ji Zhang, Yingli Song, Hui Huang, Jing Zhang, Yan Wang, Rongrong Han, Zhixuan Xiang, Lu Chen, Lingling Gao
Abstract:
Aim and Objectives: To examine physical activity self-efficacy, identify its predictors, and further explore the mechanism of action among the predictors in mainland Chinese pregnant women at high risk for gestational diabetes mellitus (GDM). Background: Physical activity could protect pregnant women from developing GDM. Physical activity self-efficacy is the key predictor of physical activity. Design: A cross-sectional study was conducted from October 2021 to May 2022 in Zhengzhou, China. Methods: 252 eligible pregnant women completed the Pregnancy Physical Activity Self-efficacy Scale, the Social Support for Physical Activity Scale, the Knowledge on Physical Activity Questionnaire, the 7-item Generalized Anxiety Disorder scale, the Edinburgh Postnatal Depression Scale, and a socio-demographic data sheet. Multiple linear regression was applied to explore the predictors of physical activity self-efficacy. Structural equation modeling was used to explore the mechanism of action among the predictors. Results: Chinese pregnant women at high risk for GDM reported a moderate level of physical activity self-efficacy. The best-fit regression analysis revealed four variables that explained 17.5% of the variance in physical activity self-efficacy. Social support for physical activity was the strongest predictor, followed by knowledge of physical activity, intention to do physical activity, and anxiety symptoms. The model analysis indicated that knowledge of physical activity could relieve anxiety and depressive symptoms and thereby increase physical activity self-efficacy. Conclusion: The present study revealed a moderate level of physical activity self-efficacy. Interventions targeting pregnant women at high risk for GDM need to address the predictors of physical activity self-efficacy.
Relevance to clinical practice: To facilitate pregnant women at high risk for GDM to engage in physical activity, healthcare professionals may assess physical activity self-efficacy and intervene as early as possible, starting at the first antenatal visit. Physical activity intervention programs focused on self-efficacy may be conducted in further research.
Keywords: physical activity, gestational diabetes, self-efficacy, predictors
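The multiple-linear-regression step can be sketched on synthetic data. The sample size matches the study's n = 252, but the predictor and outcome values below are simulated and only loosely mimic the reported direction of effects:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 252  # sample size reported in the study; all data below is synthetic
social_support = rng.normal(3.0, 0.8, n)
knowledge = rng.normal(2.5, 0.6, n)
anxiety = rng.normal(1.8, 0.7, n)
# Synthetic outcome: positive effects of support and knowledge, negative of anxiety.
self_efficacy = (1.0 + 0.35 * social_support + 0.20 * knowledge
                 - 0.15 * anxiety + rng.normal(0, 0.9, n))

# Ordinary least squares via the design matrix [1, x1, x2, x3].
X = np.column_stack([np.ones(n), social_support, knowledge, anxiety])
beta, *_ = np.linalg.lstsq(X, self_efficacy, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((self_efficacy - pred) ** 2) / np.sum((self_efficacy - self_efficacy.mean()) ** 2)
```

With noisy psychometric outcomes, modest R² values like the study's 17.5% are typical even when several predictors are individually significant.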
Procedia PDF Downloads 104
1547 3D Modeling of Flow and Sediment Transport in Tanks with the Influence of Cavity
Authors: A. Terfous, Y. Liu, A. Ghenaim, P. A. Garambois
Abstract:
With increasing urbanization worldwide, it is crucial to sustainably manage sediment flows in urban networks and especially in stormwater detention basins. One key aspect is to propose optimized designs for detention tanks in order to best reduce flood peak flows while settling particles. It is, therefore, necessary to understand the complex flow patterns and sediment deposition conditions in stormwater detention basins. The aim of this paper is to study the flow structure and particle deposition pattern for a given tank geometry with a view to controlling and maximizing sediment deposition. Both numerical simulation and experimental work were carried out to investigate the flow and sediment distribution in a storm tank with a cavity. The settling distribution of particles in a rectangular tank is mainly determined by the flow patterns and the bed shear stress. The flow patterns in a rectangular tank vary with geometry, inlet flow rate, and water depth. As the flow patterns change, the bed shear stress changes accordingly, which in turn influences particle settling. Particle accumulation on the bed alters the conditions at the bottom; although this effect is often ignored in such investigations, its influence on subsequent sedimentation deserves much more attention. The approach presented here is based on the resolution of the Reynolds-averaged Navier-Stokes equations to account for turbulent effects, together with a passive particle transport model. An analysis of particle deposition conditions is presented in this paper in terms of flow velocities and turbulence patterns. Sediment deposition zones are then identified through modeling with a particle tracking method. It is shown that two recirculation zones appear to significantly influence sediment deposition.
Due to the possible overestimation of particle trap efficiency with standard wall functions and stick conditions, further investigation of basal boundary conditions based on turbulent kinetic energy and shear stress seems required. These observations are confirmed by experimental investigations conducted in the laboratory.
Keywords: storm sewers, sediment deposition, numerical simulation, experimental investigation
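A back-of-envelope check often paired with such particle-tracking simulations is the Stokes settling velocity of the transported particles; the particle size and density below are illustrative, not the study's sediment:

```python
def stokes_settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in still water,
    w_s = (rho_p - rho_f) * g * d^2 / (18 * mu), valid for particle
    Reynolds numbers well below 1."""
    return (rho_p - rho_f) * g * d ** 2 / (18 * mu)

# 100-micron quartz-density particle (illustrative; Re_p here is near the
# upper limit of strict Stokes validity, so treat the value as approximate).
w_s = stokes_settling_velocity(d=100e-6, rho_p=2650.0)
```

Comparing w_s against the vertical turbulent velocity fluctuations in the recirculation zones gives a quick indication of whether a particle class will settle or stay in suspension.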
Procedia PDF Downloads 328
1546 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil
Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins
Abstract:
Current exposure models used in contaminated land risk assessment are highly conservative. Use of these models may lead to overestimation of actual exposures, possibly resulting in negative financial implications due to unnecessary remediation. Thus, we are carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil: arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb) resulting from allotment land use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults > 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). The amount of produce consumed by the participants and participants' biological samples (urine and blood) were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated 'doses' from produce consumption records), the predictive models suggested that detection of these elements in urine and blood was possible within a given period of time following exposure. This information was used in planning the biomonitoring and is currently being used in the interpretation of test results from biological samples. Evaluation of the models is being carried out using biomonitoring data, by comparing model-predicted concentrations and measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated in contaminated land exposure models.
Thus, the findings from this study will promote a more sustainable approach to contaminated land management.
Keywords: biomonitoring, exposure, PBPK modelling, toxic elements
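The compartmental logic of a PBPK model (the study's models are coded in MATLAB) can be sketched with a toy two-compartment, first-order system integrated by forward Euler; the rate constants and dosing schedule below are illustrative assumptions, not fitted element kinetics:

```python
def simulate_pbpk(doses, k_abs=0.05, k_elim=0.01, dt=1.0, n_steps=24 * 365):
    """Toy gut -> blood first-order model on an hourly grid.
    `doses` maps time step -> intake amount (e.g. produce meals).
    Rate constants (per hour) are assumed for illustration only."""
    gut, blood = 0.0, 0.0
    blood_series = []
    for step in range(n_steps):
        gut += doses.get(step, 0.0)
        absorbed = k_abs * gut * dt      # first-order uptake from gut
        eliminated = k_elim * blood * dt  # first-order urinary elimination
        gut -= absorbed
        blood += absorbed - eliminated
        blood_series.append(blood)
    return blood_series

# One 10-unit intake every 24 hours for a year: blood burden accumulates
# toward a steady state of roughly (daily dose / 24) / k_elim.
series = simulate_pbpk({h: 10.0 for h in range(0, 24 * 365, 24)})
```

Real PBPK models add organ compartments (liver, kidney, bone) with blood-flow-limited exchange, but the accumulation-toward-steady-state behaviour that makes biomonitoring detection feasible is already visible in this sketch.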
Procedia PDF Downloads 321
1545 Microfluidic Device for Real-Time Electrical Impedance Measurements of Biological Cells
Authors: Anil Koklu, Amin Mansoorifar, Ali Beskok
Abstract:
Dielectric spectroscopy (DS) is a noninvasive, label-free technique for long-term, real-time measurement of the impedance spectra of biological cells. DS enables characterization of cellular dielectric properties such as membrane capacitance and cytoplasmic conductivity. We have developed a lab-on-a-chip device that uses an electro-activated microwell array for loading, DS measurements, and unloading of biological cells. We utilized dielectrophoresis (DEP) to capture target cells inside the wells and release them after the DS measurement. DEP is a label-free technique that exploits differences among the dielectric properties of particles. Specifically, DEP is the motion of polarizable particles suspended in an ionic solution and subjected to a spatially non-uniform external electric field. To the best of our knowledge, this is the first microfluidic chip that combines DEP and DS to analyze biological cells using electro-activated wells. Device performance is tested using two different prostate cancer cell lines (RV122, PC-3). Impedance measurements were conducted at 0.2 V in the 10 kHz to 40 MHz range with 6 s time resolution. An equivalent circuit model was developed to extract the cell membrane capacitance and cytoplasmic conductivity from the impedance spectra. We report the time course of the variations in dielectric properties of PC-3 and RV122 cells suspended in a low-conductivity buffer (LCB), which enhances dielectrophoretic and impedance responses, and their response to a sudden pH change from 7.3 to 5.8. It is shown that the microfluidic chip allows online measurement of the dielectric properties of prostate cancer cells and assessment of cellular-level variations under external stimuli such as different buffer conductivities and pH. Based on these data, we intend to deploy the current device for single-cell measurements by fabricating separately addressable N × N electrode platforms.
Such a device will allow time-dependent dielectric response measurements for individual cells, with the ability to selectively release them using negative DEP and pressure-driven flow.
Keywords: microfluidic, microfabrication, lab on a chip, AC electrokinetics, dielectric spectroscopy
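The sign of the DEP force that captures and releases cells follows the real part of the Clausius-Mossotti factor. A sketch for a homogeneous sphere, with assumed cell-like parameters (not fitted PC-3/RV122 values):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(freq_hz, eps_p, sigma_p, eps_m, sigma_m):
    """Real part of the Clausius-Mossotti factor for a homogeneous sphere:
    positive -> positive DEP (motion toward high-field regions, i.e. capture),
    negative -> negative DEP (repulsion, usable for release)."""
    w = 2 * math.pi * freq_hz
    ep = eps_p * EPS0 - 1j * sigma_p / w  # complex permittivity, particle
    em = eps_m * EPS0 - 1j * sigma_m / w  # complex permittivity, medium
    return ((ep - em) / (ep + 2 * em)).real

# Illustrative particle in a low-conductivity buffer: conductivity dominates
# at low frequency (positive DEP), permittivity at high frequency (negative DEP).
f_low = cm_factor(1e4, eps_p=60, sigma_p=0.5, eps_m=78, sigma_m=0.01)
f_high = cm_factor(1e9, eps_p=60, sigma_p=0.5, eps_m=78, sigma_m=0.01)
```

Real cells need a shelled (membrane + cytoplasm) model rather than a homogeneous sphere, but the crossover between positive and negative DEP that the device exploits is the same mechanism.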
Procedia PDF Downloads 152
1544 Relevance of the Judgements Given by the International Court of Justice with Regard to South China Sea Vis-A-Vis Marshall Islands
Authors: Hitakshi Mahendru, Advait Tambe, Simran Chandok, Niharika Sanadhya
Abstract:
After the Second World War came to an end, the founding fathers of the United Nations recognized the need for a supreme peacekeeping mechanism to act as a mediator between nations and moderate disputes that might blow up if left unchecked. It has been more than seven decades since the establishment of the International Court of Justice (ICJ). When it was created, there were certain aims and objectives that the ICJ was intended to achieve. However, in today's world, with changes in political dynamics and international relations between countries, the ICJ has not succeeded in achieving several of these objectives. The ICJ is the only body in the international arena that has the authority to adjudicate disputes between countries. However, in recent times, with countries like China disregarding the importance of the ICJ, there is little hope of the ICJ commanding respect from other nations, thereby sending the ICJ on a slow yet steady path towards redundancy. The authority of the judgements given by the International Court of Justice, one of the main pillars of the United Nations, is questionable in light of the reactions from various countries on public platforms. The ICJ's principal role within the United Nations framework is to peacefully settle international and bilateral disputes between states that come under its jurisdiction, in accordance with the principles laid down in international law. The public backlash from the Chinese Government against the recent South China Sea judgement illustrates the decreasing relevance of the ICJ in the contemporary world. The Philippines and China have wrangled over territory in the South China Sea for centuries, but after the recent judgement the tension has reached an all-time high, with China threatening to prosecute trespassers while continuing to militarise the disputed area.
This paper will deal with the South China Sea judgement and the manner in which it has been received by the Chinese Government, and will look into the consequences of this backlash. The authors will also examine the Marshall Islands matter and propose a model judgement, in accordance with the principles of international law, that would be best suited to the given situation. Further, the authors will propose amendments to the working of the Security Council to ensure that the Marshall Islands judgement is passed and accepted by the countries without any contempt.
Keywords: International Court of Justice, international law, Marshall Islands, South China Sea, United Nations Charter
Procedia PDF Downloads 298
1543 A Dissipative Particle Dynamics Study of a Capsule in Microfluidic Intracellular Delivery System
Authors: Nishanthi N. S., Srikanth Vedantam
Abstract:
Intracellular delivery of materials has always proved to be a challenge in research and therapeutic applications. Usually, vector-based methods, such as liposomes and polymeric materials, and physical methods, such as electroporation and sonoporation, have been used for introducing nucleic acids or proteins. Reliance on exogenous materials, toxicity, and off-target effects were the shortcomings of these methods. Microinjection was an alternative process that addressed the above drawbacks; however, its low throughput has hindered its wide adoption. Mechanical deformation of cells by squeezing them through a constriction channel can cause the temporary development of pores that facilitate non-targeted diffusion of materials. Advantages of this method include high efficiency of intracellular delivery, a wide choice of materials, improved viability, and high throughput. This cell-squeezing process can be studied more deeply by employing simple models and efficient computational procedures. In our current work, we present a finite-sized dissipative particle dynamics (FDPD) model to simulate the dynamics of a cell flowing through a constricted channel. The cell is modeled as a capsule with FDPD particles connected through a spring network to represent the membrane. The total energy of the capsule is associated with linear and radial springs, in addition to a fixed-area constraint. By performing detailed simulations, we studied the strain on the membrane of the capsule for channels with varying constriction heights. The strain on the capsule membrane was found to be similar even though the constriction heights varied. When the strain on the membrane was correlated to the development of pores, we found higher porosity in capsules flowing in the wider channel. This is due to localization of the strain to a smaller region in the narrow constriction channel.
However, the residence time of the capsule increased as the channel constriction narrowed, indicating that strain sustained over a longer time will reduce cell viability.
Keywords: capsule, cell squeezing, dissipative particle dynamics, intracellular delivery, microfluidics, numerical simulations
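The capsule energy described above — linear springs along the membrane plus a fixed-area constraint — can be sketched in two dimensions; the square test shape and unit constants are illustrative, not the paper's FDPD parameters:

```python
def network_energy(nodes, springs, k_lin, rest_len, k_area, target_area):
    """Total energy of a 2-D capsule membrane: harmonic springs along the
    contour plus a quadratic penalty keeping the enclosed area at its target."""
    e_spring = 0.0
    for i, j in springs:
        (x1, y1), (x2, y2) = nodes[i], nodes[j]
        length = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        e_spring += 0.5 * k_lin * (length - rest_len) ** 2
    # Shoelace formula for the polygon area enclosed by the membrane nodes.
    area = 0.0
    n = len(nodes)
    for i in range(n):
        x1, y1 = nodes[i]
        x2, y2 = nodes[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    area = abs(area) / 2
    return e_spring + 0.5 * k_area * (area - target_area) ** 2

# Square "capsule" with unit sides at its rest shape: both energy terms vanish.
nodes = [(0, 0), (1, 0), (1, 1), (0, 1)]
springs = [(0, 1), (1, 2), (2, 3), (3, 0)]
e = network_energy(nodes, springs, k_lin=1.0, rest_len=1.0, k_area=1.0, target_area=1.0)
```

In an FDPD simulation the gradient of this energy supplies the conservative membrane forces, alongside the dissipative and random pairwise forces acting on the particles.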
Procedia PDF Downloads 141
1542 Assessing the Impacts of Riparian Land Use on Gully Development and Sediment Load: A Case Study of Nzhelele River Valley, Limpopo Province, South Africa
Authors: B. Mavhuru, N. S. Nethengwe
Abstract:
Human activities that degrade land have triggered several environmental problems, especially in rural areas that are underdeveloped. The main aim of this study is to analyze the contribution of different land uses to gully development and sediment load in the Nzhelele River Valley in the Limpopo Province. Data were collected using different methods, such as observation, field data techniques, and experiments. Satellite digital images, topographic maps, aerial photographs, and a sediment load static model also assisted in determining how land use affects gully development and sediment load. For data analysis, the following methods were used: analysis of variance (ANOVA), descriptive statistics, the Pearson correlation coefficient, and statistical correlation methods. The results illustrate that intensive land use creates negative changes, especially in areas that are highly fragile and vulnerable. A distinct land use change was observed within the settlement area (9.6%) over a period of 5 years. A high correlation between soil organic matter and soil moisture (R = 0.96) was observed. Furthermore, a significant variation (p ≤ 0.6) between soil organic matter and soil moisture was also observed. A very significant variation (p ≤ 0.003) was observed in bulk density, and extremely significant variations (p ≤ 0.0001) were observed in organic matter and soil particle size. Sand mining and agricultural activities have contributed significantly to the amount of sediment load in the Nzhelele River. A highly significant amount of total suspended sediment (55.3%) and bed load (53.8%) was observed within the agricultural area. The connection between gully development and the various land use activities determines the amount of sediment load.
These results are consistent with previous research and suggest that land use activities are likely to exacerbate the development of gullies and the sediment load in the Nzhelele River Valley.
Keywords: drainage basin, geomorphological processes, gully development, land degradation, riparian land use and sediment load
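The Pearson correlation reported above (R = 0.96 between soil organic matter and moisture) is computed as follows; the paired values below are hypothetical stand-ins for the field measurements:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical plot-level pairs of soil organic matter (%) and moisture (%).
organic_matter = [1.2, 1.8, 2.1, 2.9, 3.4, 4.0]
moisture = [8.0, 10.5, 11.8, 15.2, 17.9, 20.6]
r = pearson_r(organic_matter, moisture)
```

A high R like this only quantifies linear association between the two soil properties; the ANOVA step is what tests whether the group-wise variations across land uses are significant.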
Procedia PDF Downloads 308
1541 Optimization of Alkali Assisted Microwave Pretreatments of Sorghum Straw for Efficient Bioethanol Production
Authors: Bahiru Tsegaye, Chandrajit Balomajumder, Partha Roy
Abstract:
The limited supply and the negative environmental consequences of fossil fuels are driving researchers to find sustainable sources of energy. Lignocellulosic biomass such as sorghum straw is considered a cheap, renewable, and abundantly available source of energy. However, the conversion of lignocellulosic biomass to bioenergy such as bioethanol is hindered by the recalcitrant nature of lignin in the biomass. Therefore, removal of lignin is a vital step in the conversion of lignocellulose to renewable energy. The aim of this study is to optimize microwave pretreatment conditions using design expert software to remove lignin and to release the maximum possible polysaccharides from sorghum straw for efficient hydrolysis and fermentation. Sodium hydroxide concentrations between 0.5-1.5% v/v, pretreatment times from 5-25 minutes, and pretreatment temperatures from 120-200°C were considered to depolymerize sorghum straw. The effect of pretreatment was studied by analyzing the compositional changes before and after pretreatment following the National Renewable Energy Laboratory procedure. Analysis of variance (ANOVA) was used to test the significance of the model used for optimization. About 32.8%-48.27% hemicellulose solubilization, 53%-82.62% cellulose release, and 49.25%-78.29% lignin solubilization were observed during microwave pretreatment. Pretreatment for 10 minutes with an alkali concentration of 1.5% and a temperature of 140°C released the maximum cellulose and lignin. At this optimal condition, a maximum of 82.62% cellulose release and 78.29% lignin removal was achieved. Sorghum straw pretreated at the optimal condition was subjected to enzymatic hydrolysis and fermentation. The efficiency of hydrolysis was measured by analyzing reducing sugars by the 3,5-dinitrosalicylic acid method. Reducing sugars of about 619 mg/g of sorghum straw were obtained after enzymatic hydrolysis. This study showed a significant amount of lignin removal and cellulose release at the optimal condition.
This enhances the yield of reducing sugars as well as the ethanol yield. The study demonstrates the potential of microwave pretreatment for enhancing bioethanol yield from sorghum straw.
Keywords: cellulose, hydrolysis, lignocellulose, optimization
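Solubilization percentages like those reported are computed on an initial-biomass basis from compositions measured before and after pretreatment; the numbers below are illustrative (chosen to echo a ~78.3% lignin removal), not the paper's measured composition:

```python
def solubilization(initial_pct, final_pct, solids_recovered=1.0):
    """Percent of a component removed by pretreatment, on an initial-biomass
    basis. `solids_recovered` is grams of solids out per gram of solids in,
    which corrects for overall mass loss during pretreatment."""
    return 100.0 * (initial_pct - final_pct * solids_recovered) / initial_pct

# Illustrative: straw with 20% lignin, pretreated solids at 6.2% lignin,
# 70% of the solid mass recovered after pretreatment (all values assumed).
lignin_removed = solubilization(initial_pct=20.0, final_pct=6.2, solids_recovered=0.70)
```

Without the solids-recovery correction, a drop in a component's percentage can be masked (or exaggerated) simply because other components dissolved too.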
Procedia PDF Downloads 272
1540 Corporate Sustainability Practices in Asian Countries: Pattern of Disclosure and Impact on Financial Performance
Authors: Santi Gopal Maji, R. A. J. Syngkon
Abstract:
The changing attitude of corporate enterprises from maximizing economic benefit to corporate sustainability after the publication of the Brundtland Report has attracted the interest of researchers in investigating the sustainability practices of firms and their impact on financial performance. To enrich the empirical literature in the Asian context, this study examines the disclosure pattern of corporate sustainability and the influence of sustainability reporting on the financial performance of firms from four Asian countries (Japan, South Korea, India, and Indonesia) that published sustainability reports continuously from 2009 to 2016. The study used a content analysis technique based on the Global Reporting Initiative (GRI 3 and 3.1) reporting framework to compute the disclosure score of corporate sustainability and its components. While a dichotomous coding system was employed to compute the overall quantitative disclosure score, a four-point scale was used to assess the quality of the disclosure. Box plots were used to analyze the disclosure pattern of corporate sustainability. Further, the Pearson chi-square test was used to examine whether there is any difference in the proportion of disclosure between the countries. Finally, a quantile regression model was employed to examine the influence of corporate sustainability reporting at different locations of the conditional distribution of firm performance. The findings of the study indicate that Japan occupies first position in terms of disclosure of sustainability information, followed by South Korea and India. In the case of Indonesia, the quality of the disclosure score is considerably lower than in the other three countries. Further, the gap between the quality and quantity of disclosure is comparatively smaller in Japan and South Korea than in India and Indonesia. The same is evident for the components of sustainability.
The results of the quantile regression indicate that the positive impact of corporate sustainability becomes stronger at upper quantiles in the case of Japan and South Korea. However, the study fails to establish any definite pattern in the impact of corporate sustainability disclosure on the financial performance of firms from Indonesia and India.
Keywords: corporate sustainability, quality and quantity of disclosure, content analysis, quantile regression, Asian countries
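The dichotomous quantity score and four-point quality score described above can be sketched as simple indices over a GRI-style checklist; the item ids and scores below are hypothetical:

```python
def disclosure_index(items):
    """Dichotomous (unweighted) disclosure score: fraction of checklist
    items disclosed, where `items` maps item id -> 1 (disclosed) or 0."""
    return sum(items.values()) / len(items)

def quality_score(items, max_points=3):
    """Four-point quality scale (0-3 per item), normalized to [0, 1]."""
    return sum(items.values()) / (max_points * len(items))

# Hypothetical firm: 8 of 10 checklist items disclosed, with mixed quality
# (full detail on 4 items, minimal detail on 4, nothing on 2).
quantity = disclosure_index({f"EN{i}": int(i <= 8) for i in range(1, 11)})
quality = quality_score({f"EN{i}": (3 if i <= 4 else 1 if i <= 8 else 0) for i in range(1, 11)})
```

The gap between `quantity` and `quality` for a firm is exactly the kind of spread the study compares across the four countries.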
Procedia PDF Downloads 195
1539 The Emerging Multi-Species Trap Fishery in the Red Sea Waters of Saudi Arabia
Authors: Nabeel M. Alikunhi, Zenon B. Batang, Aymen Charef, Abdulaziz M. Al-Suwailem
Abstract:
Saudi Arabia has a long history of using traps as traditional fishing gear for catching commercially important demersal, mainly coral reef-associated, fish species. Fish traps constitute the dominant small-scale fishery in Saudi waters of the Arabian Gulf (eastern seaboard of Saudi Arabia). Recently, however, traps have been increasingly used along the Saudi Red Sea coast (western seaboard), which has a coastline of 1800 km (71%) compared to only 720 km (29%) in the Saudi Gulf region. The production trend for traps indicates a recent increase in catches and in percent contribution to traditional fishery landings, confirming the rapid proliferation of trap fishing along the Saudi Red Sea coast. Reef-associated fish species, mainly groupers (Serranidae), emperors (Lethrinidae), parrotfishes (Scaridae), scads and trevallies (Carangidae), and snappers (Lutjanidae), dominate the trap catches, reflecting the reef-dominated shelf zone in the Red Sea. This ongoing investigation covers the following major objectives: (i) baseline studies to characterize the trap fishery through landing-site visits and interview surveys; (ii) stock assessment from fisheries and biological data obtained through monthly landing-site monitoring, using the FLBEIA fishery operational model; (iii) assessment of operational impacts, derelict traps, and by-catch through bottom-mounted video cameras and onboard monitoring; (iv) elucidation of fishing grounds and derelict-trap impacts through onboard monitoring and Remotely Operated underwater Vehicle and Autonomous Underwater Vehicle surveys; and (v) analysis of gear design and operations, covering colonization and deterioration experiments. The progress of this investigation into the impacts of the trap fishery on fish stocks and the marine environment in the Saudi Red Sea region is presented.
Keywords: red sea, Saudi Arabia, fish trap, stock assessment, environmental impacts
Procedia PDF Downloads 350
1538 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult when only limited information about the nature of the attack is available. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), since current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only flag suspicious events; they do not show how the events relate to each other or which event possibly causes another event to happen. It is therefore important to investigate new methods that can track attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts significant time even if it were possible. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, for more than 85% of causal pairs, the average time difference between the causal and effect events in both computed and observed data is within 5 minutes. This result can be used as a preventive measure against future attacks. 
Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
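The core idea of detecting causal event pairs from timestamped malicious events can be sketched in miniature. This is a hedged illustration of the general technique, not the paper's actual algorithm: the event streams, the 60-second binning, and the use of partial correlation as a crude conditional independence test are all assumptions made for the example.

```python
# Sketch: decide whether event type A plausibly causes event type B by
# checking that B tends to follow A by one time bin, and that the A->B
# association survives conditioning on a third event type C (a simple
# conditional-independence check via partial correlation on binned counts).
import numpy as np

def binned_counts(timestamps, t_max, bin_s=60):
    """Count events per fixed-width time bin."""
    bins = np.arange(0, t_max + bin_s, bin_s)
    counts, _ = np.histogram(timestamps, bins=bins)
    return counts.astype(float)

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z from both."""
    def resid(v):
        A = np.column_stack([np.ones_like(z), z])
        beta, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ beta
    return float(np.corrcoef(resid(x), resid(y))[0, 1])

rng = np.random.default_rng(1)
t_max = 36000.0
a = np.sort(rng.uniform(0, t_max, 300))        # "cause" events
b = np.sort(a + rng.uniform(40, 80, a.size))   # effects ~1 minute later
c = np.sort(rng.uniform(0, t_max, 300))        # unrelated background events

ca, cb, cc = (binned_counts(s, t_max) for s in (a, b, c))
# B lagged one bin behind A: strong association expected for the true pair,
# near-zero for the unrelated background stream.
lagged = partial_corr(ca[:-1], cb[1:], cc[:-1])
baseline = partial_corr(cc[:-1], cb[1:], ca[:-1])
print(lagged, baseline)
```

In a real deployment each candidate pair would be tested this way (with proper statistical thresholds), and surviving pairs would form the edges of the causality graph.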
Procedia PDF Downloads 157
1537 Influence of Cobalt Incorporation on the Structure and Properties of Sol-Gel Derived Mesoporous Bioglass Nanoparticles
Authors: Ahmed El-Fiqi, Hae-Won Kim
Abstract:
Incorporation of therapeutic elements such as Sr, Cu and Co into the bioglass structure and their release as ions is considered one of the promising approaches to enhance cellular responses, e.g., osteogenesis and angiogenesis. Here, cobalt, as an angiogenesis promoter, has been incorporated (at 0, 1 and 4 mol%) into sol-gel derived calcium silicate mesoporous bioglass nanoparticles. The composition and structure of cobalt-free (CFN) and cobalt-doped (CDN) mesoporous bioglass nanoparticles have been analyzed by X-ray fluorescence (XRF), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS) and Fourier-transform infrared spectroscopy (FT-IR). The physicochemical properties of CFN and CDN have been investigated using high-resolution transmission electron microscopy (HR-TEM), selected area electron diffraction (SAED), and energy-dispersive X-ray spectroscopy (EDX). Furthermore, the textural properties, including specific surface area, pore volume, and pore size, have been analyzed from N₂ sorption analyses. Surface charges of CFN and CDN were also determined from surface zeta potential measurements. The release of ions, including Co²⁺, Ca²⁺, and SiO₄⁴⁻, has been analyzed using inductively coupled plasma atomic emission spectrometry (ICP-AES). Loading and release of diclofenac, as an anti-inflammatory drug model, were explored in vitro using ultraviolet-visible (UV-Vis) spectroscopy. XRD results confirmed the amorphous state of CFN and CDN, whereas XRF further confirmed that their chemical compositions are very close to the designed compositions. HR-TEM analyses revealed nanoparticles with spherical morphologies, highly mesoporous textures, and sizes in the range of 90 - 100 nm. Moreover, N₂ sorption analyses revealed that the nanoparticles have pore sizes of 2.6 - 3.2 nm, pore volumes of 0.35 - 0.41 cc/g, and high surface areas in the range of 716 - 830 m²/g. 
High-resolution XPS analysis of the Co 2p core level provided structural information about the Co atomic environment and confirmed the electronic state of Co in the glass matrix. ICP-AES analysis showed the release of therapeutic doses of Co²⁺ ions from 4% CDN, up to 100 ppm within 14 days. Finally, diclofenac loading and release confirmed the drug/ion co-delivery capability of 4% CDN.
Keywords: mesoporous bioactive glass, nanoparticles, cobalt ions, release
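The UV-Vis quantification of drug release mentioned above follows a standard workflow that can be sketched briefly. All values below are hypothetical, not the study's data: a Beer-Lambert calibration line (A = m·C + b) fitted to assumed standards converts measured absorbances of release-medium aliquots into released drug amounts.

```python
# Illustrative sketch: converting UV-Vis absorbance readings of a release
# medium into cumulative released drug mass via a linear calibration curve.
import numpy as np

# Hypothetical calibration standards: concentration (ug/mL) vs absorbance.
conc_std = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
abs_std = np.array([0.06, 0.15, 0.30, 0.61, 1.21])
m, b = np.polyfit(conc_std, abs_std, 1)  # linear fit A = m*C + b

def released_ug(absorbance, volume_ml=10.0):
    """Concentration from the calibration line, scaled to medium volume."""
    return (absorbance - b) / m * volume_ml

# Hypothetical time-course absorbances of release-medium aliquots.
timepoints_h = [1, 6, 24, 72]
abs_meas = [0.12, 0.33, 0.52, 0.64]
release = [released_ug(a) for a in abs_meas]
print([round(r, 1) for r in release])
```

The same calibration-curve approach applies whether release is sampled destructively or with aliquot replacement, though the latter requires a dilution correction at each timepoint.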
Procedia PDF Downloads 108