1812 Caspase-11 and AIM2 Inflammasome are Involved in Smoking-Induced COPD and Lung Adenocarcinoma
Authors: Chiara Colarusso, Michela Terlizzi, Aldo Pinto, Rosalinda Sorrentino
Abstract:
Cigarette smoking is the main cause of and the most common risk factor for both COPD and lung cancer. In our previous studies, we showed that caspase-11 in mice and its human analogue, caspase-4, are involved in lung carcinogenesis and that the AIM2 inflammasome might play a pro-cancerous role in lung cancer. Therefore, the aim of this study was to investigate potential crosstalk between COPD and lung cancer, focusing on the AIM2- and caspase-11-dependent inflammasome signaling pathways. To mimic COPD, we took advantage of an experimental first-hand smoking mouse model and, to confirm what was observed in mice, we used samples from lung adenocarcinoma patients stratified according to smoking and COPD status. We demonstrated that smoke exposure led to emphysema-like features, bronchial tone impairment, and release of IL-1-like cytokines (IL-1α, IL-1β, IL-33, IL-18) in a caspase-1-independent manner in C57Bl/6N mice. In contrast, a dysfunctional caspase-11 in smoke-exposed 129Sv mice was associated with lower bronchial inflammation, collagen deposition, and IL-1-like inflammation. In addition, for the first time, we found that the AIM2 inflammasome is involved in lung inflammation in smoking and COPD, in that its expression was higher in smoke-exposed C57Bl/6N mice than in smoke-exposed 129Sv mice, which showed no alteration of AIM2 in either macrophages or dendritic cells. Moreover, we found that AIM2 expression in cancerous tissue, albeit higher than in non-cancerous tissue, did not differ statistically according to COPD and smoking status. Instead, the higher expression of AIM2 in the non-cancerous tissue of smokers with COPD than in smokers without COPD was correlated with a higher hazard ratio of poor survival than in patients with lower levels of AIM2.
In conclusion, our data highlight that caspase-11 in mice is associated with smoke-induced latent lung inflammation that could drive the establishment of lung cancer, and that the AIM2 inflammasome plays a role at the crosstalk between smoking/COPD and lung adenocarcinoma, in that its higher expression correlates with a lower survival rate in smokers with COPD and adenocarcinoma.
Keywords: COPD, inflammasome, lung cancer, lung inflammation, smoke
Procedia PDF Downloads 157
1811 Implications for Counseling and Service Delivery on the Psychological Trajectories of Women Undergoing in Vitro Fertilization (IVF) Treatment in Hong Kong
Authors: Tong Mei Yan
Abstract:
Introduction: The experience of infertility can be excruciating but has not received much attention in Hong Kong. The strong Confucian culture pressures couples to continue their family lineage, so women face more stress than men in this socio-cultural milieu. In Vitro Fertilization (IVF) treatment is one of the common ways to deal with the problem. Abundant literature exists on the psychological trajectories of people receiving IVF treatment in Europe, the USA, and other East Asian societies, but not in Hong Kong. Aim: This study aims to highlight the circumstances and needs of women before, during, and after IVF treatment by examining their lived experiences. It is hoped that the public, once informed of their tribulations and needs, would support the adequate provision of the required psychological support. Methods: A qualitative approach was adopted in this study. In-depth interviews were conducted with six women who had undergone at least one complete cycle of IVF treatment within the past five years. Data were analyzed through thematic analysis and narrative analysis. Results: Four broad themes were found: (i) emotional responses; (ii) experiences in medical consultation; (iii) impacts of the treatment; and (iv) coping strategies. Additionally, specific events in three cases were chosen for narrative analysis to further examine unresolved emotional distress and ethical issues. Conclusion: IVF treatment distressed the interviewees immensely, both physically and psychologically, with the negative emotions outweighing the physical strains, a result unexpected by all of the interviewees. The pressure for lineage continuation, the demanding treatment process, and the dearth of support from health professionals all contributed to emotional pain that could linger in both successful and unsuccessful cases.
Although a number of coping strategies were attempted, most failed to ease the women's psychological tension. The findings of this study therefore evidence the need for psychological support for this population. A service model to cater to their needs before, during, and after IVF treatment is therefore proposed.
Keywords: coping strategies, emotional experiences, impacts of IVF, infertility, IVF treatment, medical experiences
Procedia PDF Downloads 89
1810 Using Geographic Information System and Analytic Hierarchy Process for Detecting Forest Degradation in Benslimane Forest, Morocco
Authors: Loubna Khalile, Hicham Lahlaoi, Hassan Rhinane, A. Kaoukaya, S. Fal
Abstract:
Green spaces are essential elements: they improve the quality of life in the towns around them, offering places for relaxation, walking, and rest, and playgrounds for sport and youth. According to the United Nations, forests cover 31% of the world's land area; in Morocco, they covered 12.65% of the total land area in 2013, still a small proportion relative to the need for forests as a green lung of our planet. The Benslimane Forest is a large green area of 12,261.80 hectares belonging to the Chaouia-Ouardigha and Greater Casablanca regions, located between Casablanca, the economic and business capital of Morocco, and Rabat, the national political capital. The essential problem usually encountered in suburban forests is visitation and tourism pressure, that is, anthropogenic action, alongside other ecological and environmental factors. In recent decades, Morocco has experienced drought years that have stressed the forest, while growing human pressure and over-exploitation inflict heavy losses every day. Moroccan forest ecosystems are fragile, with intense ecological variation and state-owned land subject to usage rights granted to the population; the forests are experiencing significant deterioration due to neglect and immoderate use of forest resources, which can destroy animal habitats and vegetation and disturb the water cycle and climate. The purpose of this study is to model the degree of degradation of the forest and identify its causes for prevention, using remote sensing and geographic information systems with climate and ancillary data. The analytic hierarchy process was used to find the degree of influence and the weight of each parameter; in this case, anthropogenic activities were found to have a fairly significant impact, which in turn influenced the climate.
Keywords: analytic hierarchy process, degradation, forest, geographic information system
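The AHP weighting step mentioned above can be sketched numerically: the factor weights are the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks that the expert judgments are coherent. The 3x3 Saaty-scale matrix below is purely illustrative and is not the study's actual comparison of degradation factors:

```python
import numpy as np

# Hypothetical pairwise comparison of three degradation drivers
# (e.g. anthropogenic pressure vs. climate vs. over-exploitation).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

def ahp_weights(A):
    """Priority weights (principal eigenvector) and consistency ratio."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)              # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                       # normalize weights to sum to 1
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)     # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    return w, ci / ri

w, cr = ahp_weights(A)
# A cr below 0.1 means the pairwise judgments are acceptably consistent.
```

The weights can then multiply the rasterized factor layers in the GIS to produce the degradation map.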
Procedia PDF Downloads 330
1809 Insecurity and Insurgency on Economic Development of Nigeria
Authors: Uche Lucy Onyekwelu, Uche B. Ugwuanyi
Abstract:
Suffice it to say that socio-economic disruptions of any form are likely to affect the wellbeing of the citizenry. The upsurge of social disequilibrium caused by the incessant disruptive tendencies exhibited by youths and others in Nigeria is not helping matters. In Nigeria, social unrest has caused various setbacks to socio-economic development. This study empirically evaluated the impact of insecurity and insurgency on the economic development of Nigeria. The paper noted that the different forms of insecurity in Nigeria include insurgency and banditry, as witnessed in Northern Nigeria; militancy in the Niger Delta area; and self-determination groups pursuing various agendas, such as the sit-at-home syndrome in South Eastern Nigeria and other secessionist movements. All these have in one way or another hampered economic development in Nigeria. Data for this study were collected through primary and secondary sources, using a questionnaire and existing documentation. The cost of investment in different security outfits in Nigeria represents the independent variable, while the differentials in Gross Domestic Product (GDP) and the Human Development Index (HDI) are the measures of the dependent variable. Descriptive statistics and simple linear regression were employed in the data analysis. The result revealed that insurgency and insecurity negatively affect the economic development of the different parts of Nigeria. Following the findings, a model to analyse the effect of insecurity and insurgency was developed, named INSECUREDEVNIG. It implies that the economic development of Nigeria will continue to deteriorate if insurgency and insecurity continue. The study therefore recommends that the government do all it can to nurture its human capital, adequately fund the state security apparatus, and employ individuals of high integrity to manage the various security outfits in Nigeria.
The government should also, as a matter of urgency, train security personnel in intelligence and Information and Communications Technology to enable them to implement effectively the security policies needed to sustain Nigeria's Gross Domestic Product and Human Development Index.
Keywords: insecurity, insurgency, gross domestic product, human development index, Nigeria
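The simple linear regression used above can be sketched as follows; the annual spending and GDP-growth figures are invented for illustration and are not the study's data:

```python
import numpy as np

# Illustrative series: security spending as the independent variable,
# GDP growth as the dependent variable.
security_spend = np.array([1.2, 1.5, 2.1, 2.8, 3.4, 4.0])  # e.g. trillion naira
gdp_growth     = np.array([6.3, 5.4, 4.2, 2.7, 1.9, 0.8])  # percent

# Ordinary least squares fit: gdp_growth = a + b * security_spend
b, a = np.polyfit(security_spend, gdp_growth, 1)
pred = a + b * security_spend
ss_res = np.sum((gdp_growth - pred) ** 2)
ss_tot = np.sum((gdp_growth - gdp_growth.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
# A negative slope b mirrors the paper's finding that rising
# insecurity-related spending tracks deteriorating development.
```

The same fit against HDI differentials would use HDI values in place of `gdp_growth`.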
Procedia PDF Downloads 106
1808 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
Time-domain electromagnetic (TDEM), or transient electromagnetic (TEM), sounding is a common geophysical technique for mapping subsurface geo-electrical structures, for extensive hydro-geological research, and for engineering and environmental geophysics applications. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop. In general, data can be acquired with a large loop source in one of several configurations: with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, compared with the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contaminated by hazardous waste, and for estimating the thickness of the permafrost layer. Because no proper analytical expression exists for the TEM response of a large loop system over a layered earth model, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain.
The EMLCLLER algorithm thus modifies the forward calculation scheme in NLSTCI to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine
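The frequency-to-time conversion described above can be illustrated with a minimal numerical Fourier cosine transform. The transfer function below, H(w) = 1/(1+iw) with causal inverse exp(-t), is a textbook test pair chosen so the answer is known; it is not the EMLCLLER kernel:

```python
import numpy as np

# Recover a causal time-domain response from the real part of its
# frequency-domain transfer function via a Fourier cosine transform:
#   h(t) = (2/pi) * integral_0^inf Re[H(w)] * cos(w*t) dw

def cosine_transform(re_H, w, t):
    """Trapezoid-rule Fourier cosine transform evaluated at one time t."""
    f = re_H * np.cos(w * t)
    return (2.0 / np.pi) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))

w = np.linspace(0.0, 400.0, 200001)   # truncated frequency grid (rad/s)
re_H = 1.0 / (1.0 + w**2)             # Re[1/(1 + i*w)]
h = cosine_transform(re_H, w, t=1.0)  # should approximate exp(-1)
```

In practice, TEM codes replace this brute-force quadrature with specialized digital filters, but the underlying transform is the same.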
Procedia PDF Downloads 95
1807 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities
Authors: Kung-Jen Tu, Danny Vernatha
Abstract:
To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing a 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment; (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room); (3) BIM models of all rooms within the individual department's facilities; (4) a data warehouse (for storing occupancy status and logged electricity consumption data); (5) a building energy management system that provides energy managers with various energy management functions; and (6) an energy simulation tool (such as eQUEST) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency evaluated. Through the building energy management system, the energy manager is able to (a) view a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current equipment setting to indicate an anomaly, such as when appliances turn on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles.
The BIM-EMSS was further implemented in a research lab in the Department of Architecture at NTUST in Taiwan, and the implementation results are presented to illustrate how it can assist individual departments within universities in their energy management tasks.
Keywords: database, electricity sub-meters, energy anomaly detection, sensor
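The anomaly rule in (c), appliances drawing power in an unoccupied room, can be sketched as a simple check. The record shape and the 15 W standby threshold below are assumptions for illustration, not part of the BIM-EMSS specification:

```python
# Each reading is (timestamp, occupied flag, socket power draw in watts).

def find_anomalies(readings, idle_watts=15.0):
    """Return timestamps where power is drawn in an unoccupied room."""
    return [ts for ts, occupied, watts in readings
            if not occupied and watts > idle_watts]

log = [
    ("09:00", True,  320.0),  # occupied, appliances on: normal
    ("12:00", False, 290.0),  # empty but drawing power: anomaly
    ("19:00", False, 8.0),    # empty, standby draw only: normal
]
alerts = find_anomalies(log)  # -> ["12:00"]
```

A data-mining approach, as the abstract describes, would learn the per-room idle threshold and occupancy patterns from the warehouse instead of hard-coding them.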
Procedia PDF Downloads 309
1806 Milling Simulations with a 3-DOF Flexible Planar Robot
Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden
Abstract:
Manufacturing technologies have become continuously more diversified over the years. The increasing use of robots for various applications such as assembly, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also arise while machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator in order to find optimized cutting parameters in terms of, for example, depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme while the position of each joint is controlled separately. Each joint is actuated through a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces when the robot structure is deformable or not, and the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented.
Accounting for link flexibility revealed an increase in the magnitude of the cutting forces. This proof of concept will enrich the database of results in robotic machining for potential improvements in production.
Keywords: control, milling, multibody, robotic, simulation
Procedia PDF Downloads 250
1805 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty
Authors: Ben Khayut, Lina Fabri, Maya Avikhana
Abstract:
Models of modern Artificial Narrow Intelligence (ANI) cannot: a) function independently and continuously without human intelligence, which is needed to retrain and reprogram the ANI models; or b) think, understand, be conscious, cognize, or infer under uncertainty and under changes in situations and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a conception, model, and method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty (CPCSFSCBMSUU). The system uses a neural network as its computational memory and activates its functions through perception, identification of real objects, fuzzy situational control, and the forming of images of these objects, modeling the psychological, linguistic, cognitive, and neural values of their properties and features; the meanings of these values are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images accumulated in the memory. The functioning of the CPCSFSCBMSUU is carried out by its subsystems for fuzzy situational control of all processes; computational perception; identification of reactions and actions; psycholinguistic cognitive fuzzy logical inference; decision making; reasoning; systems thinking; planning; awareness; consciousness; cognition; intuition; wisdom; analysis and processing of psycholinguistic, subject, visual, signal, sound, and other objects; accumulation and use of data, information, and knowledge in the memory; and communication and interaction with other computing systems, robots, and humans to solve joint tasks. To investigate the functional processes of the proposed system, the principles of situational control, fuzzy logic, psycholinguistics, informatics, and modern data science were applied.
The proposed self-controlled brain and mind system is intended for use as a plug-in in multilingual subject applications.
Keywords: computational brain, mind, psycholinguistic, system, under uncertainty
Procedia PDF Downloads 182
1804 Transforming Data Science Curriculum Through Design Thinking
Authors: Samar Swaid
Abstract:
Today, corporations are adopting design-thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in design thinking, IDEO (Innovation, Design, Engineering Organization), defines design thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects, or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has proved a road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure a 'wow' effect on consumers. The Association for Computing Machinery task force on data science programs states that 'Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability.' However, this definition hides the user behind the machine who works on data preparation, algorithm selection, and model interpretation.
Thus, the data science program includes design thinking to ensure that user demands are met, to generate more usable machine learning tools, and to develop ways of framing computational thinking. Here, we describe the fundamentals of design thinking and teaching modules for data science programs.
Keywords: data science, design thinking, AI, curriculum, transformation
Procedia PDF Downloads 84
1803 Determinants of Hospital Obstetric Unit Closures in the United States 2002-2013: Loss of Hospital Obstetric Care 2002-2013
Authors: Peiyin Hung, Katy Kozhimannil, Michelle Casey, Ira Moscovice
Abstract:
Background/Objective: The loss of obstetric services has been a pressing concern in urban and rural areas nationwide. This study aims to determine the factors that contribute to the loss of obstetric care through closures of a hospital or obstetric unit. Methods: Data from the 2002-2013 American Hospital Association annual surveys were used to identify hospitals providing obstetric services. We linked these data to Medicare Healthcare Cost Report Information for hospital financial indicators, the US Census Bureau's American Community Survey for zip-code-level characteristics, and the Area Health Resource Files for county-level clinician supply measures. A discrete-time multinomial logit model was used to determine the factors contributing to obstetric unit or hospital closures. Results: Of 3,551 hospitals providing obstetric services during 2002-2013, 82% kept units open, 12% stopped providing obstetric services, and 6% closed down completely. State-level variations existed. Factors that significantly increased a hospital's probability of obstetric unit closure included an annual birth volume below 250 (adjusted marginal effect [95% confidence interval] = 34.1% [28%, 40%]), closer proximity to another hospital with obstetric services (per 10 miles: -1.5% [-2.4%, -0.5%]), being in a county with a lower family physician supply (-7.8% [-15.0%, -0.6%]), being in a zip code with a higher percentage of non-white females (per 10%: 10.2% [2.1%, 18.3%]), and a lower income (per $1,000 income: -0.14% [-0.28%, -0.01%]). Conclusions: Over the past 12 years, the loss of obstetric services has disproportionately affected areas served by low-volume urban and rural hospitals, non-white and low-income communities, and counties with fewer family physicians, signaling a need to address maternity care access in these communities.
Keywords: access to care, obstetric care, service line discontinuation, hospital, obstetric unit closures
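The discrete-time setup behind the multinomial logit can be sketched as a "hospital-year" expansion: each hospital contributes one row per year at risk, with a three-way outcome (stayed open, obstetric unit closed, hospital closed) recorded in its final observed year. The toy records below are invented, not the AHA data:

```python
# Each record: (id, first_year, last_year, final_outcome).

def expand_panel(hospitals):
    """Expand each hospital into one row per year at risk."""
    rows = []
    for hid, y0, y1, final in hospitals:
        for year in range(y0, y1 + 1):
            outcome = final if year == y1 else "open"
            rows.append({"id": hid, "year": year, "outcome": outcome})
    return rows

panel = expand_panel([
    ("A", 2002, 2004, "unit_closed"),
    ("B", 2002, 2003, "open"),  # censored: still open at end of study
])
# A multinomial logit of `outcome` on covariates (birth volume, distance
# to the nearest obstetric hospital, ...) is then fit on `panel`.
```
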
Procedia PDF Downloads 223
1802 Characterization of the Groundwater Aquifers at El Sadat City by Joint Inversion of VES and TEM Data
Authors: Usama Massoud, Abeer A. Kenawy, El-Said A. Ragab, Abbas M. Abbas, Heba M. El-Kosery
Abstract:
Vertical Electrical Sounding (VES) and Transient Electromagnetic (TEM) surveys have been applied to characterize the groundwater aquifers in the El Sadat industrial area. El Sadat city is one of the most important industrial cities in Egypt. It was constructed more than three decades ago, about 80 km northwest of Cairo along the Cairo-Alexandria desert road. Groundwater is the main source of the water supplies required for domestic, municipal, and industrial activities in this area due to the lack of surface water sources, so it is important to maintain this vital resource in order to sustain the development plans of the city. In this study, VES and TEM data were measured at the same 24 stations along three profiles trending NE-SW with the elongation of the study area. The measuring points were arranged in a grid-like pattern with both an inter-station spacing and a line-to-line distance of about 2 km. After the necessary processing steps, the VES and TEM data sets were inverted individually to multi-layer models, followed by a joint inversion of both data sets. The joint inversion succeeded in overcoming the model-equivalence problem encountered in the inversion of each individual data set. The joint models were then used to construct a number of cross sections and contour maps showing the lateral and vertical distribution of the geo-electrical parameters in the subsurface medium. Interpretation of the results and correlation with the available geological and hydrogeological information revealed two aquifer systems in the area. The shallow Pleistocene aquifer consists of sand and gravel saturated with fresh water and exhibits a large thickness exceeding 200 m. The deep Pliocene aquifer is composed of clay and sand and shows low resistivity values.
The water-bearing layer of the Pleistocene aquifer and the upper surface of the Pliocene aquifer are continuous, and no structural features cut this continuity through the investigated area.
Keywords: El Sadat city, joint inversion, VES, TEM
Procedia PDF Downloads 371
1801 Understanding the Information in Principal Component Analysis of Raman Spectroscopic Data during Healing of Subcritical Calvarial Defects
Authors: Rafay Ahmed, Condon Lau
Abstract:
Bone healing is a complex and sequential process involving changes at the molecular level. Raman spectroscopy is a promising technique for studying bone mineral and matrix environments simultaneously. In this study, subcritical calvarial defects are used to study bone composition during healing without disturbing the fracture. The model allows the natural healing of bone to be monitored while avoiding mechanical harm to the callus. Calvarial defects were created with a 1 mm burr drill in the parietal bones of Sprague-Dawley rats (n=8) and served as in vivo defects. After 7 days, the skulls were harvested following euthanasia. One additional defect per sample was created on the opposite parietal bone using the same procedure to serve as a control defect. Raman spectroscopy (785 nm) was used to investigate bone parameters on three different skull surfaces: in vivo defects, control defects, and normal surface. Principal component analysis (PCA) was utilized for the analysis and interpretation of the Raman spectra and helped in the classification of the groups. PCA was able to distinguish in vivo defects from the normal surface and control defects. PC1 shows the major variation at 958 cm⁻¹, which corresponds to the ν1 phosphate mineral band. PC2 shows the major variation at 1448 cm⁻¹, the characteristic band of CH₂ deformation, which corresponds to collagen. The Raman parameters, namely the mineral-to-matrix ratio and crystallinity, were found to be significantly decreased in the in vivo defects compared with the normal surface and controls. Scanning electron microscope and optical microscope images show the formation of newly generated matrix by means of bony bridges of collagen. An optical profiler shows that surface roughness increased by 30% from controls to in vivo defects after 7 days. These results agree with the Raman assessment parameters and confirm new collagen formation during healing.
Keywords: Raman spectroscopy, principal component analysis, calvarial defects, tissue characterization
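The PCA step can be sketched on synthetic spectra: rows are spectra, columns are wavenumber channels, and the PC1 loading peaks at the channel whose band drives the between-group variation, analogous to the 958 cm⁻¹ phosphate band above. The data below are simulated, not the study's measurements:

```python
import numpy as np

# Two groups of simulated spectra differing mainly in the intensity of
# one Gaussian band centered on channel 50 (standing in for 958 cm-1).
rng = np.random.default_rng(0)
channels = 100
band = np.exp(-0.5 * ((np.arange(channels) - 50) / 5.0) ** 2)
group_a = 1.0 * band + 0.01 * rng.standard_normal((10, channels))
group_b = 0.6 * band + 0.01 * rng.standard_normal((10, channels))
X = np.vstack([group_a, group_b])

Xc = X - X.mean(axis=0)                   # center the spectra
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
loading = Vt[0]                           # PC1 loading spectrum
scores = Xc @ loading                     # PC1 score per sample
# The loading peaks at the discriminating band, and the scores separate
# the two groups, mirroring the classification described above.
```
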
Procedia PDF Downloads 225
1800 An Evaluation of a Student Peer Mentoring Program
Authors: Nazeema Ahmed
Abstract:
This paper reports on the development of a student peer mentoring programme at a higher education institution. The programme depends on volunteer senior undergraduate students who are trained to mentor first-year students studying towards an engineering degree. The evaluation of the programme took the form of first-year students completing a self-report paper questionnaire at the onset of a lecture and mentors completing their questionnaire electronically. The evaluation yielded mixed findings. Peer mentoring clearly benefited some students in their adjustment to the institution. Specific mentors' personal attributes enabled the establishment of successful mentoring relationships, in which encouragement, advice, and academic assistance were provided. Gains were reciprocal, with mentors reporting that the programme contributed to their personal development. Confidence in the programme was expressed by mentors, who felt that it was an initiative worth continuing, and by first-year students, who agreed that it be recommended to future first-year students. This was despite many unfavourable experiences in which mentors' professionalism and commitment to the programme were suspect. It is evident that while mentors began with noble intentions, they appear either to lose interest or to become overwhelmed by their own workload as the academic year progresses. On the other hand, some mentors reported feeling challenged by the apathy of first-year students who failed to maximise the opportunity available to them. The different attitudes towards mentoring that manifested as a mentoring culture in some departments were particularly pertinent to its successful implementation. The findings point to the key role of academic staff in the mentoring programme, who model the mentoring relationship in their interaction with student mentors.
While their involvement in the programme may be perceived as a drain on resources in an already demanding academic teaching environment, it is imperative that structural changes be put in place for the programme to be both efficient and sustainable. A pervasive finding concerns the evolving institutional culture of student development in the faculty. Mentors and first-year students alike alluded to the potential of the mentoring programme, provided it is seriously endorsed at both the departmental and faculty levels. The findings provide a foundation from which to develop the programme further and to begin improving its capacity for maximizing student retention in South African higher education.
Keywords: engineering students, first-year students, peer mentoring
Procedia PDF Downloads 256
1799 Investigation of Aerodynamic and Design Features of Twisting Tall Buildings
Authors: Sinan Bilgen, Bekir Ozer Ay, Nilay Sezer Uzol
Abstract:
After decades of conventional shapes, irregular forms with complex geometries are becoming more popular for the form generation of tall buildings all over the world. This trend has recently brought out diverse building forms, such as twisting tall buildings. This study investigates both the aerodynamic and design features of twisting tall buildings through comparative analyses. Since twisting a tall building gives rise to additional complexities in the form and structural system, lateral load effects become of greater importance for these buildings. The aim of this study is to analyze the inherent characteristics of these iconic forms by comparing the wind loads on twisting tall buildings with those on their prismatic twins. Through a case study, aerodynamic analyses of an existing twisting tall building and its prismatic counterpart were performed and the results compared. The prismatic twin of the original building was generated by removing the progressive rotation of its floors while keeping the same plan area and story height. The performance-based measures under investigation have been evaluated in conjunction with the architectural design. Aerodynamic effects have been analyzed by both wind tunnel tests and computational methods. High-frequency base balance tests and pressure measurements on 3D models were performed to evaluate wind load effects on global and local scales. Comparisons of flat and real surface models were conducted to further evaluate the effects of the twisting form without the contribution of the façade texture. The comparisons highlighted that the twisting form under investigation shows better aerodynamic behavior in the along-wind direction, and particularly in the across-wind direction. Compared with its prismatic counterpart, the twisting model is superior in reducing the vortex-shedding dynamic response by disorganizing the wind vortices.
Consequently, despite the difficulties arising from the inherent complexity of twisted forms, they could still be feasible and viable with their attractive images in the realm of tall buildings.
Keywords: aerodynamic tests, motivation for twisting, tall buildings, twisted forms, wind excitation
1798 Dust Particle Removal from Air in a Self-Priming Submerged Venturi Scrubber
Authors: Manisha Bal, Remya Chinnamma Jose, B.C. Meikap
Abstract:
Dust particles suspended in air are a major source of air pollution. A self-priming submerged venturi scrubber, proven very effective in handling nuclear power plant accidents, is an efficient device to remove dust particles from the air and thus aids in pollution control. Venturi scrubbers are compact, have a simple mode of operation with no moving parts, are easy to install and maintain compared to other pollution control devices, and can handle high temperatures as well as corrosive and flammable gases and dust particles. In the present paper, fly ash particles, recognized as a major air pollutant emitted mostly from thermal power plants, are considered as the dust particles. Exposure through skin contact, inhalation, and ingestion can lead to health risks and in severe cases can even lead to lung cancer. The main focus of this study is the removal of fly ash particles from polluted air using a self-priming venturi scrubber in submerged conditions with water as the scrubbing liquid. The venturi scrubber, comprising three sections (converging section, throat, and diverging section), is submerged inside a water tank. The liquid enters the throat due to the pressure difference between the hydrostatic pressure of the liquid and the static pressure of the gas. The high-velocity dust-laden air atomizes the liquid droplets at the throat, and this interaction leads to absorption of the fly ash into the water and thus its removal from the air. A detailed investigation of the scrubbing of fly ash has been carried out in this work. Experiments were conducted at different throat gas velocities, water levels, and fly ash inlet concentrations to study the fly ash removal efficiency. From the experimental results, the highest fly ash removal efficiency of 99.78% is achieved at a throat gas velocity of 58 m/s and a water level of height 0.77 m, with a fly ash inlet concentration of 0.3 × 10⁻³ kg/Nm³ in the submerged condition. 
The effect of throat gas velocity, water level, and fly ash inlet concentration on the removal efficiency has also been evaluated. Furthermore, the experimental results of removal efficiency are validated with the developed empirical model.
Keywords: dust particles, fly ash, pollution control, self-priming venturi scrubber
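The headline removal efficiency can be reproduced directly from inlet and outlet dust concentrations; a minimal sketch (the outlet concentration below is back-calculated for illustration, not a value reported in the abstract):

```python
def removal_efficiency(c_in, c_out):
    """Dust removal efficiency (%) from inlet and outlet concentrations
    (any consistent units, e.g. kg/Nm^3)."""
    return 100.0 * (c_in - c_out) / c_in

# The reported 99.78% efficiency at an inlet loading of 0.3e-3 kg/Nm^3
# corresponds to an outlet concentration of roughly 6.6e-7 kg/Nm^3.
print(removal_efficiency(0.3e-3, 6.6e-7))  # ≈ 99.78
```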
1797 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy
Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu
Abstract:
Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and improved predictions are achievable by using spectra collected from flour samples rather than whole grains. However, the feasibility of determining the critical biochemicals related to the classifications for food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIR and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components, in order to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 sorghum hybrids were selected from two locations in China. Using the NIR spectra and wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. Nevertheless, using the spectra of whole grains still enabled comparable predictions, which is recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data. 
Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR
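As a sketch of how a PLSR calibration like the one described maps spectra to a biochemical trait, the following minimal PLS1 (NIPALS) implementation is illustrative only; it is not the authors' pipeline, and a real calibration would choose the number of components by cross-validation:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) sketch: returns a regression vector b such that
    y_hat = X @ b. X (spectra) and y (trait) are assumed mean-centered."""
    X = np.asarray(X, dtype=float).copy()
    y = np.asarray(y, dtype=float).copy()
    n, p = X.shape
    W = np.zeros((p, n_components))   # weight vectors
    P = np.zeros((p, n_components))   # X loadings
    q = np.zeros(n_components)        # y loadings
    a_used = n_components
    for a in range(n_components):
        w = X.T @ y
        nw = np.linalg.norm(w)
        if nw < 1e-12:                # y residual exhausted
            a_used = a
            break
        w /= nw
        t = X @ w                     # score vector
        tt = t @ t
        p_a = X.T @ t / tt
        q_a = (y @ t) / tt
        X -= np.outer(t, p_a)         # deflate X
        y -= q_a * t                  # deflate y
        W[:, a], P[:, a], q[a] = w, p_a, q_a
    W, P, q = W[:, :a_used], P[:, :a_used], q[:a_used]
    return W @ np.linalg.solve(P.T @ W, q)
```

On noiseless synthetic data with as many components as predictors, this reduces to an exact least-squares fit, which is a convenient sanity check.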
1796 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow
Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam
Abstract:
Studies on highway safety are becoming the need of the hour, as over 400 lives are lost every day in India due to road crashes. In order to evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and their relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on the severity of road crashes. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the horizontal alignment of the highway (straight or curved section), time of day, driveway density, presence of a median, median openings, gradient, operating speed, and annual average daily traffic. These variables were selected after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and negative binomial regression. The output from the statistical models showed that the horizontal alignment components, driveway density, time of day, operating speed, and annual average daily traffic have a significant relation with the severity of crashes, i.e., fatal as well as injury crashes. Furthermore, annual average daily traffic has a more significant effect on severity than the other variables. The contribution of the highway's horizontal alignment components to crash severity is also significant. The logit models predicted crashes better than the negative binomial regression models. 
The results of the study will help transport planners to consider these aspects at the planning stage itself in the case of highways operated under heterogeneous traffic flow conditions.
Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety
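For readers unfamiliar with the logit model used here, a bare-bones Newton-Raphson maximum-likelihood fit is sketched below; it is not the authors' model specification, and a real crash-severity study would use a statistics package and report standard errors and odds ratios:

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Newton-Raphson ML fit of a binary logit model
    P(y = 1 | x) = 1 / (1 + exp(-x . beta)). X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))    # fitted probabilities
        grad = X.T @ (y - mu)                     # score vector
        H = X.T @ (X * (mu * (1 - mu))[:, None])  # Fisher information
        beta += np.linalg.solve(H, grad)          # Newton step
    return beta
```

Fitting on synthetic data with a known coefficient vector and checking that the estimates recover it (within sampling error) is a standard way to validate such an implementation.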
1795 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity
Authors: Shivdayal Patel, Suhail Ahmad
Abstract:
Safety assurance and failure prediction of the composite components of an offshore structure under low-velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and the load due to an impact. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. In fact, the probability of occurrence of such a scenario is due to the large uncertainties arising in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon due to the wide range of uncertainties arising in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT) to overcome these problems. The limit state function g(x) is established accordingly, such that the stresses in the lamina satisfy g(x) > 0 in the safe state. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. A chain of failure events due to different modes of failure is considered to estimate the consequences of a failure scenario. 
The frequencies of occurrence of specific impact hazards yield the expected risk due to economic loss.
Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling
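The notion of a probability of failure from a limit state g(x) > 0 can be illustrated with a crude Monte Carlo estimate; the resistance/load distributions below are invented for illustration (the study itself uses a Gaussian process response surface, not plain Monte Carlo):

```python
import numpy as np

def prob_failure(n_samples=200_000, seed=0):
    """Crude Monte Carlo estimate of P(g <= 0) for an illustrative limit state
    g = R - S (resistance minus load effect). The normal distributions below
    are assumed placeholders, not values from the study."""
    rng = np.random.default_rng(seed)
    R = rng.normal(250.0, 20.0, n_samples)  # resistance, e.g. laminate strength
    S = rng.normal(150.0, 30.0, n_samples)  # load effect from the impact
    g = R - S                               # limit state: safe when g > 0
    return np.mean(g <= 0.0)

print(prob_failure())  # ≈ 0.0028 (analytical value: Phi(-100/sqrt(20^2+30^2)))
```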
1794 Design and Analysis for a 4-Stage Crash Energy Management System for Railway Vehicles
Authors: Ziwen Fang, Jianran Wang, Hongtao Liu, Weiguo Kong, Kefei Wang, Qi Luo, Haifeng Hong
Abstract:
A 4-stage crash energy management (CEM) system for subway rail vehicles used by the Massachusetts Bay Transportation Authority (MBTA) in the USA is developed in this paper. The four stages of this new CEM system are: 1) an energy-absorbing coupler (draft gear and shear bolts), 2) primary energy absorbers (aluminum honeycomb structured boxes), 3) secondary energy absorbers (crush tubes), and 4) the collision post and corner post. A sliding anti-climber and a fixed anti-climber are designed at the front of the vehicle, cooperating with the 4-stage CEM to maximize the energy absorbed and minimize the damage to passengers and crew. In order to investigate the effectiveness of this CEM system, both finite element (FE) methods and a crashworthiness test have been employed. The whole vehicle consists of three married pairs, i.e., six cars. In the FE approach, full-scale railway car models are developed and different collision cases are investigated, such as a single moving car impacting a rigid wall, two moving cars impacting a rigid wall, two moving cars impacting two stationary cars, and six moving cars impacting six stationary cars. The FE analysis results show that a railway vehicle incorporating this CEM system has superior crashworthiness performance. In the crashworthiness test, a simplified vehicle front end, including the sliding anti-climber, the fixed anti-climber, the primary energy absorbers, the secondary energy absorber, the collision post, and the corner post, is built and impacted against a rigid wall. The same test model is also analyzed with FE, and results such as crushing force, stress and strain of critical components, and acceleration and velocity curves are compared and studied. The FE results show very good agreement with the test results.
Keywords: railway vehicle collision, crash energy management design, finite element method, crashworthiness test
1793 A New Co(II) Metal Complex Template with 4-dimethylaminopyridine Organic Cation: Structural, Hirshfeld Surface, Phase Transition, Electrical Study and Dielectric Behavior
Authors: Mohamed dammak
Abstract:
Great attention has been paid to the design and synthesis of novel organic-inorganic compounds in recent decades because of their structural variety and the large diversity of atomic arrangements. In this work, the structure of the novel dimethylaminopyridine tetrachlorocobaltate (C₇H₁₁N₂)₂CoCl₄, prepared by the slow evaporation method at room temperature, is discussed. The X-ray diffraction results indicate that the hybrid material has a triclinic structure with a P space group and features a 0D structure containing isolated, distorted [CoCl₄]²⁻ tetrahedra interposed between [C₇H₁₁N₂]⁺ cations forming planes perpendicular to the c axis at z = 0 and z = ½. The interactions between the cationic planes and the isolated [CoCl₄]²⁻ tetrahedra occur through N-H⋯Cl and C-H⋯Cl hydrogen-bonding contacts. Hirshfeld surface analysis helps to discuss the strength of the hydrogen bonds and to quantify the intermolecular contacts. A phase transition was discovered by thermal analysis at 390 K, and comprehensive dielectric research is reported, showing good agreement with the thermal data. Impedance spectroscopy measurements were used to study the electrical and dielectric characteristics over wide ranges of frequency and temperature, 40 Hz-10 MHz and 313-483 K, respectively. The Nyquist plot (Z″ versus Z′) of the complex impedance spectrum revealed semicircular arcs described by a Cole-Cole model. An equivalent electrical circuit consisting of a combination of grain and grain-boundary elements is employed. The real and imaginary parts of the dielectric permittivity, as well as tan(δ), of (C₇H₁₁N₂)₂CoCl₄ at different frequencies reveal a distribution of relaxation times. The presence of grains and grain boundaries is confirmed by the modulus investigations. 
Electrical and dielectric analyses highlight the good protonic conduction of this material.
Keywords: organic-inorganic, phase transitions, complex impedance, protonic conduction, dielectric analysis
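The Cole-Cole description of the depressed semicircular arcs in the Nyquist plot can be sketched as follows; the parameter values in the usage comment are placeholders, not fitted values from the study:

```python
def cole_cole_Z(omega, R_inf, R0, tau, alpha):
    """Cole-Cole impedance Z(w) = R_inf + (R0 - R_inf)/(1 + (j w tau)^(1-alpha)).
    alpha = 0 recovers an ideal Debye semicircle; alpha > 0 depresses the arc
    below the Z' axis, reflecting a distribution of relaxation times."""
    return R_inf + (R0 - R_inf) / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

# Placeholder parameters: R_inf = 10 ohm, R0 = 1000 ohm, tau = 1 ms, alpha = 0.1.
# At low frequency Z -> R0, at high frequency Z -> R_inf, and the imaginary
# part is negative (capacitive) in between.
```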
1792 Contrasted Mean and Median Models in Egyptian Stock Markets
Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid
Abstract:
Emerging-market return distributions show significant departure from normality: they are characterized by fatter tails relative to the normal distribution and exhibit levels of skewness and kurtosis inconsistent with normality. Therefore, the classical Markowitz mean-variance model is not applicable to emerging markets, since it assumes normally distributed returns (with zero skewness and excess kurtosis) and a quadratic utility function. Markowitz mean-variance analysis can still be used in cases of moderate non-normality, where it provides a good approximation of the expected utility, but it may be ineffective under large departures from normality. Higher-moment models and median models have been suggested in the literature for asset allocation in this case. Higher-moment models account for the insufficiency of describing a portfolio by only its first two moments, while the median has been introduced as a robust statistic that is less affected by outliers than the mean. Tail risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced instead of variance to capture the effect of risk. In this research, higher-moment models, including the Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK) models, are formulated as single-objective non-linear programming (NLP) problems, and median models, including the Median-Value-at-Risk (MedVaR) and Median-Mean-Absolute-Deviation (MedMAD) models, are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher-moment models and median models are compared to benchmark portfolios and tested on real financial data from the Egyptian main index, EGX30. The results show that all the median models outperform the higher-moment models, as they provide higher final wealth for the investor over the entire period of study. 
In addition, the results confirm the inapplicability of the classical Markowitz mean-variance model to the Egyptian stock market, as it resulted in very low realized profits.
Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming
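To make the mean-variance-skewness trade-off concrete, here is a toy MVS utility for a two-asset portfolio, with a grid search standing in for the NLP solver; the preference weights and return series are invented for illustration and are not from the study:

```python
import numpy as np

def mvs_utility(w, returns, lam_v=1.0, lam_s=1.0):
    """Mean - variance + skewness utility of a two-asset portfolio with weight
    w on asset 0 and (1 - w) on asset 1 (full-investment constraint).
    lam_v and lam_s are assumed preference weights."""
    port = returns @ np.array([w, 1.0 - w])
    mu, sd = port.mean(), port.std()
    skew = np.mean((port - mu) ** 3) / sd ** 3 if sd > 0 else 0.0
    return mu - lam_v * port.var() + lam_s * skew

def best_weight(returns, grid=np.linspace(0.0, 1.0, 101)):
    """Grid search over the weight of asset 0 (a stand-in for the NLP solver)."""
    return max(grid, key=lambda w: mvs_utility(w, returns))
```

With one riskless asset and one zero-mean risky asset, the utility is maximized by holding only the riskless asset, which gives a quick sanity check of the objective.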
1791 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. 
Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by 2% to 33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair degree of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
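The heat balance described (conduction through the wall equated to convective plus radiative losses from the measured surface) can be sketched numerically; the convection coefficient and temperatures below are assumed values for illustration, not the study's field measurements:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(T_in, T_out, T_surf, T_refl, emissivity, h_conv):
    """Estimate a wall U-value (W/(m^2 K)) by equating conduction through the
    wall to convective + radiative losses from the exterior surface.
    Temperatures in kelvin; h_conv is an assumed convection coefficient."""
    q_conv = h_conv * (T_surf - T_out)                        # convective loss
    q_rad = emissivity * SIGMA * (T_surf**4 - T_refl**4)      # radiative loss
    return (q_conv + q_rad) / (T_in - T_out)

# Assumed example: 294 K indoors, 268 K outdoors, 270 K surface,
# 266 K reflected apparent temperature, emissivity 0.9, h_conv 15 W/(m^2 K).
print(u_value(294.0, 268.0, 270.0, 266.0, 0.9, 15.0))  # ≈ 1.8 W/(m^2 K)
```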
1790 Physical Activity Self-Efficacy among Pregnant Women with High Risk for Gestational Diabetes Mellitus: A Cross-Sectional Study
Authors: Xiao Yang, Ji Zhang, Yingli Song, Hui Huang, Jing Zhang, Yan Wang, Rongrong Han, Zhixuan Xiang, Lu Chen, Lingling Gao
Abstract:
Aim and Objectives: To examine physical activity self-efficacy, identify its predictors, and further explore the mechanism of action among the predictors in mainland Chinese pregnant women at high risk for gestational diabetes mellitus (GDM). Background: Physical activity can protect pregnant women from developing GDM, and physical activity self-efficacy is the key predictor of physical activity. Design: A cross-sectional study was conducted from October 2021 to May 2022 in Zhengzhou, China. Methods: A total of 252 eligible pregnant women completed the Pregnancy Physical Activity Self-Efficacy Scale, the Social Support for Physical Activity Scale, the Knowledge on Physical Activity Questionnaire, the 7-item Generalized Anxiety Disorder scale, the Edinburgh Postnatal Depression Scale, and a socio-demographic data sheet. Multiple linear regression was applied to explore the predictors of physical activity self-efficacy, and structural equation modeling was used to explore the mechanism of action among the predictors. Results: Chinese pregnant women at high risk for GDM reported a moderate level of physical activity self-efficacy. The best-fit regression analysis revealed that four variables explained 17.5% of the variance in physical activity self-efficacy. Social support for physical activity was the strongest predictor, followed by knowledge of physical activity, intention to do physical activity, and anxiety symptoms. The model analysis indicated that knowledge of physical activity could relieve anxiety and depressive symptoms and thereby increase physical activity self-efficacy. Conclusion: The present study revealed a moderate level of physical activity self-efficacy. Interventions targeting pregnant women at high risk for GDM need to address the predictors of physical activity self-efficacy. 
Relevance to clinical practice: To facilitate physical activity in pregnant women at high risk for GDM, healthcare professionals may assess physical activity self-efficacy at the first antenatal visit and intervene as early as possible. Physical activity intervention programs focused on self-efficacy may be conducted in further research.
Keywords: physical activity, gestational diabetes, self-efficacy, predictors
1789 3D Modeling of Flow and Sediment Transport in Tanks with the Influence of Cavity
Authors: A. Terfous, Y. Liu, A. Ghenaim, P. A. Garambois
Abstract:
With increasing urbanization worldwide, it is crucial to sustainably manage sediment flows in urban networks and especially in stormwater detention basins. One key aspect is to propose optimized designs for detention tanks in order to best reduce flood peak flows and, at the same time, settle particles. It is, therefore, necessary to understand the complex flow patterns and sediment deposition conditions in stormwater detention basins. The aim of this paper is to study the flow structure and particle deposition pattern for a given tank geometry with a view to controlling and maximizing sediment deposition. Both numerical simulations and experimental work were carried out to investigate the flow and sediment distribution in a storm tank with a cavity. The settling distribution of particles in a rectangular tank is mainly determined by the flow patterns and the bed shear stress. The flow patterns in a rectangular tank differ with geometry, entrance flow rate, and water depth. As the flow patterns change, the bed shear stress changes accordingly, which in turn influences particle settling. The accumulation of particles on the bed changes the conditions at the bottom; although this effect is usually ignored in investigations, it deserves much more attention, since the accumulated particles can significantly influence further sedimentation. The approach presented here is based on the resolution of the Reynolds-averaged Navier-Stokes equations to account for turbulent effects, together with a passive particle transport model. An analysis of particle deposition conditions is presented in this paper in terms of flow velocities and turbulence patterns. Sediment deposition zones are then identified through modeling with a particle tracking method. It is shown that two recirculation zones seem to significantly influence sediment deposition. 
Due to the possible overestimation of particle trap efficiency with standard wall functions and stick conditions, further investigations seem required for basal boundary conditions based on turbulent kinetic energy and shear stress. These observations are confirmed by experimental investigations conducted in the laboratory.
Keywords: storm sewers, sediment deposition, numerical simulation, experimental investigation
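A first-order check on which particles settle in such a tank is the Stokes terminal velocity; the sketch below is valid only for particle Reynolds numbers well below 1 and is not part of the RANS/particle-tracking model the authors use:

```python
def stokes_settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Stokes terminal settling velocity (m/s) of a small sphere of diameter d
    (m) and density rho_p (kg/m^3) in a fluid of density rho_f and dynamic
    viscosity mu (defaults: water at ~20 C). Valid for Re_p << 1."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

# Example: a 100-micron quartz-like particle (assumed density 2650 kg/m^3)
print(stokes_settling_velocity(100e-6, 2650.0))  # ≈ 9.0e-3 m/s
```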
1788 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil
Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins
Abstract:
Current exposure models used in contaminated land risk assessment are highly conservative. Use of these models may lead to over-estimation of actual exposures, possibly resulting in negative financial implications due to unnecessary remediation. Thus, we are carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil, arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb), resulting from allotment land use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults > 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). The amount of produce consumed by the participants was recorded, and participants' biological samples (urine and blood) were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated 'doses' from produce consumption records), the predictive models suggested that detection of these elements in urine and blood was possible within a given period of time following exposure. This information was used in planning the biomonitoring and is currently being used in the interpretation of test results from biological samples. Evaluation of the models is being carried out with the biomonitoring data, by comparing model-predicted concentrations with measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated into contaminated land exposure models. 
Thus, the findings from this study will promote a more sustainable approach to contaminated land management.
Keywords: biomonitoring, exposure, PBPK modelling, toxic elements
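The accumulation behaviour a PBPK model captures can be illustrated with a much simpler one-compartment sketch: a daily intake with first-order elimination, iterated day by day. The intake and elimination rate below are invented, and a real PBPK model resolves multiple organ compartments rather than one:

```python
import numpy as np

def body_burden(daily_intake, k_elim, days):
    """One-compartment toxicokinetic sketch: each day the body receives
    daily_intake (arbitrary mass units) and eliminates with first-order rate
    k_elim (1/day). Returns the burden at the end of each day."""
    burden = 0.0
    out = []
    for _ in range(days):
        burden = (burden + daily_intake) * np.exp(-k_elim)
        out.append(burden)
    return np.array(out)

# With constant intake, the burden rises toward a steady state
# intake * exp(-k) / (1 - exp(-k)), which is why chronic low-level
# exposure can still produce a detectable internal body burden.
```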
1787 Microfluidic Device for Real-Time Electrical Impedance Measurements of Biological Cells
Authors: Anil Koklu, Amin Mansoorifar, Ali Beskok
Abstract:
Dielectric spectroscopy (DS) is a noninvasive, label-free technique for long-term, real-time measurement of the impedance spectra of biological cells. DS enables characterization of cellular dielectric properties such as membrane capacitance and cytoplasmic conductivity. We have developed a lab-on-a-chip device that uses an electro-activated microwell array for loading, DS measurement, and unloading of biological cells. We utilized dielectrophoresis (DEP) to capture target cells inside the wells and release them after the DS measurement. DEP is a label-free technique that exploits differences among the dielectric properties of particles; specifically, it is the motion of polarizable particles suspended in an ionic solution and subjected to a spatially non-uniform external electric field. To the best of our knowledge, this is the first microfluidic chip that combines DEP and DS to analyze biological cells using electro-activated wells. Device performance is tested using two different prostate cancer cell lines (RV122, PC-3). Impedance measurements were conducted at 0.2 V in the 10 kHz to 40 MHz range with 6 s time resolution. An equivalent circuit model was developed to extract the cell membrane capacitance and cell cytoplasmic conductivity from the impedance spectra. We report the time course of the variations in the dielectric properties of PC-3 and RV122 cells suspended in a low-conductivity buffer (LCB), which enhances the dielectrophoretic and impedance responses, and their response to a sudden pH change from 7.3 to 5.8. It is shown that the microfluidic chip allows online measurement of the dielectric properties of prostate cancer cells and assessment of cellular-level variations under external stimuli such as different buffer conductivities and pH. Based on these data, we intend to extend the current device to single-cell measurements by fabricating separately addressable N × N electrode platforms. 
Such a device will allow time-dependent dielectric response measurements for individual cells, with the ability to selectively release them using negative DEP and pressure-driven flow.
Keywords: microfluidic, microfabrication, lab on a chip, AC electrokinetics, dielectric spectroscopy
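The simplest building block of such an equivalent circuit model is a resistor and capacitor in parallel, whose impedance traces a semicircle in the Nyquist plot; a sketch with placeholder component values (not fitted values from the device):

```python
def parallel_rc_impedance(omega, R, C):
    """Impedance of a parallel RC element at angular frequency omega (rad/s):
    Z = R / (1 + j*omega*R*C). At omega = 0, Z = R; at the characteristic
    frequency omega = 1/(R*C), |Z| = R/sqrt(2); at high frequency, Z -> 0."""
    return R / (1.0 + 1j * omega * R * C)

# Placeholder values: R = 1 kOhm, C = 1 nF, characteristic frequency 1e6 rad/s.
```

Fitting one such element per interface (medium, membrane, cytoplasm) is the usual route from a measured spectrum back to membrane capacitance and cytoplasmic conductivity.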
1786 Relevance of the Judgements Given by the International Court of Justice with Regard to South China Sea Vis-A-Vis Marshall Islands
Authors: Hitakshi Mahendru, Advait Tambe, Simran Chandok, Niharika Sanadhya
Abstract:
After the Second World War came to an end, the founding fathers of the United Nations recognized the need for a supreme peacekeeping mechanism to act as a mediator between nations and moderate disputes that might blow up if left unchecked. It has been more than seven decades since the establishment of the International Court of Justice (ICJ). When it was created, there were certain aims and objectives that the ICJ was intended to achieve. However, in today's world, with changes in political dynamics and international relations between countries, the ICJ has not succeeded in achieving several of these objectives. The ICJ is the only body in the international arena that has the authority to adjudicate disputes between countries. However, in recent times, with countries like China disregarding the importance of the ICJ, there is little hope for the ICJ to command respect from other nations, sending the ICJ on a slow yet steady path towards redundancy. The authority of the judgements given by the International Court of Justice, one of the main pillars of the United Nations, is questionable given the reactions from various countries on public platforms. The ICJ's principal role within the United Nations framework is to settle international and bilateral disputes between the states under its jurisdiction peacefully and in accordance with the principles laid down in international law. By shedding light on the public backlash from the Chinese government against the recent South China Sea judgement, we see the decreasing relevance of the ICJ in the contemporary world. The Philippines and China have wrangled over territory in the South China Sea for centuries, but after the recent judgement, tension has reached an all-time high, with China threatening to prosecute anybody in the disputed area as trespassers while continuing to militarise it. 
This paper will deal with the South China Sea judgement and the manner in which it has been received by the Chinese Government, and will examine the consequences of such a backlash. The authors will also look into the Marshall Islands matter and propose a model judgement, in accordance with the principles of international law, best suited to the given situation. The authors will further propose amendments to the working of the Security Council to ensure that the Marshall Islands judgement is passed and accepted by the countries without any contempt.
Keywords: International Court of Justice, international law, Marshall Islands, South China Sea, United Nations Charter
1785 A Dissipative Particle Dynamics Study of a Capsule in Microfluidic Intracellular Delivery System
Authors: Nishanthi N. S., Srikanth Vedantam
Abstract:
Intracellular delivery of materials has always proved to be a challenge in research and therapeutic applications. Usually, vector-based methods, such as liposomes and polymeric materials, and physical methods, such as electroporation and sonoporation, have been used for introducing nucleic acids or proteins. Reliance on exogenous materials, toxicity, and off-target effects were the shortcomings of these methods. Microinjection was an alternative process that addressed these drawbacks; however, its low throughput has hindered its wide adoption. Mechanical deformation of cells by squeezing them through a constriction channel can cause the temporary development of pores that facilitate non-targeted diffusion of materials. Advantages of this method include high intracellular delivery efficiency, a wide choice of materials, improved viability, and high throughput. This cell-squeezing process can be studied in greater depth by employing simple models and efficient computational procedures. In our current work, we present a finite-sized dissipative particle dynamics (FDPD) model to simulate the dynamics of a cell flowing through a constricted channel. The cell is modeled as a capsule whose membrane is represented by FDPD particles connected through a spring network. The total energy of the capsule is associated with linear and radial springs, in addition to a fixed-area constraint. By performing detailed simulations, we studied the strain on the membrane of the capsule for channels with varying constriction heights. The strain on the capsule membrane was found to be similar even though the constriction heights varied. When the strain on the membrane was correlated with the development of pores, we found higher porosity in capsules flowing in the wider channel. This is due to the localization of strain in a smaller region in the narrow constriction channel. 
However, the residence time of the capsule increased as the channel constriction narrowed, indicating that strain sustained over a longer time will reduce cell viability.
Keywords: capsule, cell squeezing, dissipative particle dynamics, intracellular delivery, microfluidics, numerical simulations
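The capsule energy described in the abstract (linear springs on a membrane network plus a fixed-area constraint) can be sketched as follows. This is a minimal illustrative model, not the authors' implementation: the function name, the penalty form for the area constraint, and all parameter values are assumptions, and the radial springs of the full FDPD model are omitted for brevity.

```python
import math

def capsule_energy(nodes, springs, k_lin, k_area, target_area):
    """Energy of a 2D capsule membrane modeled as a spring network with a
    quadratic penalty enforcing a fixed enclosed area (illustrative sketch)."""
    # Linear-spring contribution: sum over (i, j, rest_length) triples
    e_spring = 0.0
    for i, j, l0 in springs:
        (x1, y1), (x2, y2) = nodes[i], nodes[j]
        l = math.hypot(x2 - x1, y2 - y1)
        e_spring += 0.5 * k_lin * (l - l0) ** 2
    # Enclosed area via the shoelace formula (nodes ordered around the membrane)
    area = 0.0
    n = len(nodes)
    for i in range(n):
        x1, y1 = nodes[i]
        x2, y2 = nodes[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    area = abs(area) / 2.0
    # Quadratic penalty keeps the capsule near its reference area
    e_area = 0.5 * k_area * (area - target_area) ** 2
    return e_spring + e_area

# A unit-square "capsule" at its rest configuration has zero total energy;
# stretching one node raises both the spring and area terms.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
e_rest = capsule_energy(square, edges, k_lin=1.0, k_area=1.0, target_area=1.0)
stretched = [(0.0, 0.0), (1.2, 0.0), (1.2, 1.0), (0.0, 1.0)]
e_def = capsule_energy(stretched, edges, k_lin=1.0, k_area=1.0, target_area=1.0)
```

In a full FDPD simulation this elastic energy would supply the conservative membrane forces, alongside the dissipative and random pairwise forces acting on the particles.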
1784 Assessing the Impacts of Riparian Land Use on Gully Development and Sediment Load: A Case Study of Nzhelele River Valley, Limpopo Province, South Africa
Authors: B. Mavhuru, N. S. Nethengwe
Abstract:
Human activities causing land degradation have triggered several environmental problems, especially in underdeveloped rural areas. The main aim of this study is to analyze the contribution of different land uses to gully development and sediment load in the Nzhelele River Valley in the Limpopo Province. Data were collected using different methods, such as observation, field data techniques, and experiments. Satellite digital images, topographic maps, aerial photographs, and a static sediment load model also assisted in determining how land use affects gully development and sediment load. For data analysis, the researchers used the following methods: analysis of variance (ANOVA), descriptive statistics, the Pearson correlation coefficient, and statistical correlation methods. The results of the research illustrate that intensive land use creates negative changes, especially in areas that are highly fragile and vulnerable. A distinct land use change was observed within the settlement area (9.6%) over a period of 5 years. A high correlation between soil organic matter and soil moisture (R = 0.96) was observed. Furthermore, a significant variation (p ≤ 0.6) between soil organic matter and soil moisture was also observed. A very significant variation (p ≤ 0.003) was observed in bulk density, and extremely significant variations (p ≤ 0.0001) were observed in organic matter and soil particle size. Sand mining and agricultural activities have contributed significantly to the amount of sediment load in the Nzhelele River. A significantly high amount of total suspended sediment (55.3%) and bed load (53.8%) was observed within the agricultural area. The connection that associates the development of gullies with various land use activities determines the amount of sediment load.
These results are consistent with previous research and suggest that land use activities are likely to exacerbate the development of gullies and sediment load in the Nzhelele River Valley.
Keywords: drainage basin, geomorphological processes, gully development, land degradation, riparian land use, sediment load
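The correlation analysis reported above (R = 0.96 between soil organic matter and soil moisture) amounts to a Pearson correlation coefficient, which can be sketched as below. The paired sample values are made up for illustration; only the method, not the data, comes from the abstract.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two variance terms of the denominator
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical paired field samples: soil organic matter (%) vs moisture (%)
som = [2.1, 2.8, 3.5, 4.0, 4.6]
moisture = [11.0, 13.5, 16.2, 18.0, 20.4]
r = pearson_r(som, moisture)
```

A value of r near +1, as in the study, indicates that plots with more organic matter also held more moisture, though correlation alone does not establish which drives which.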
1783 Optimization of Alkali Assisted Microwave Pretreatments of Sorghum Straw for Efficient Bioethanol Production
Authors: Bahiru Tsegaye, Chandrajit Balomajumder, Partha Roy
Abstract:
The limited supply and negative environmental consequences of fossil fuels are driving researchers to find sustainable sources of energy. Lignocellulosic biomass like sorghum straw is considered among the cheap, renewable, and abundantly available sources of energy. However, the conversion of lignocellulosic biomass to bioenergy such as bioethanol is hindered by the recalcitrant nature of lignin in the biomass. Therefore, removal of lignin is a vital step in the conversion of lignocellulose to renewable energy. The aim of this study is to optimize microwave pretreatment conditions using Design-Expert software to remove lignin and to release the maximum possible polysaccharides from sorghum straw for efficient hydrolysis and fermentation. Sodium hydroxide concentrations between 0.5-1.5% v/v, pretreatment times from 5-25 minutes, and pretreatment temperatures from 120-200 °C were considered to depolymerize sorghum straw. The effect of pretreatment was studied by analyzing the compositional changes before and after pretreatment following National Renewable Energy Laboratory procedures. Analysis of variance (ANOVA) was used to test the significance of the model used for optimization. About 32.8%-48.27% hemicellulose solubilization, 53%-82.62% cellulose release, and 49.25%-78.29% lignin solubilization were observed during microwave pretreatment. Pretreatment for 10 minutes with an alkali concentration of 1.5% and a temperature of 140 °C released the maximum cellulose and lignin. At this optimal condition, a maximum of 82.62% cellulose release and 78.29% lignin removal was achieved. Sorghum straw pretreated at the optimal condition was subjected to enzymatic hydrolysis and fermentation. The efficiency of hydrolysis was measured by analyzing reducing sugars with the 3,5-dinitrosalicylic acid method. Reducing sugars of about 619 mg/g of sorghum straw were obtained after enzymatic hydrolysis. This study showed a significant amount of lignin removal and cellulose release at the optimal condition.
This enhances the yield of reducing sugars as well as the ethanol yield. The study demonstrates the potential of microwave pretreatment for enhancing bioethanol yield from sorghum straw.
Keywords: cellulose, hydrolysis, lignocellulose, optimization
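The lignin-removal and solubilization percentages reported above are typically computed from component fractions measured before and after pretreatment, corrected for the fraction of solids recovered. The sketch below shows one common form of this calculation; the formula layout, function name, and all numbers are illustrative assumptions, not taken from the paper.

```python
def solubilization_pct(frac_before, frac_after, solids_recovery):
    """Percent of a component (e.g. lignin) solubilized by pretreatment,
    on a raw-biomass basis. frac_before and frac_after are g of component
    per 100 g of raw and pretreated solids respectively; solids_recovery
    is the mass fraction of solids left after pretreatment."""
    # Component remaining per 100 g of original biomass
    remaining = frac_after * solids_recovery
    removed = frac_before - remaining
    return 100.0 * removed / frac_before

# Hypothetical: straw with 20 g lignin/100 g; pretreated solids are 10%
# lignin and 60% of the solids are recovered, so 70% of the lignin left.
pct = solubilization_pct(frac_before=20.0, frac_after=10.0, solids_recovery=0.60)
```

The same arithmetic applies to hemicellulose solubilization, while cellulose "release" works in the opposite direction: a rising cellulose fraction in the recovered solids reflects removal of the other components.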