Search results for: water pipeline model
1631 Networks, Regulations and Public Action: The Emerging Experiences of Sao Paulo
Authors: Lya Porto, Giulia Giacchè, Mario Aquino Alves
Abstract:
The paper describes the linkage between government and civil society through a study of agro-ecological agriculture policy and urban action in the city of São Paulo, underlining the main achievements obtained. The negotiation processes between social movements and the government (inputs) and their results in political regulation and public action for Urban Agriculture (UA) in São Paulo (outputs) were investigated. The method adopted is qualitative, combining semi-structured interviews, participant observation, and documentary analysis. The authors conducted 30 semi-structured interviews with organic farmers, activists, and governmental and non-governmental managers. Participant observation was conducted in public gardens, urban farms, public audiences, democratic councils, and social movement meetings. Finally, public plans and laws were also analyzed. São Paulo, with around 12 million inhabitants spread over 1,522 km², is the economic capital of Brazil, marked by spatial and socioeconomic segregation, currently aggravated by an environmental crisis characterized by water scarcity, pollution, and climate change. In recent years, Urban Agriculture social movements have gained strength and struggle for a different city with more green areas, organic food production, and occupation of public space. As the dynamics of UA arise from the action of multiple actors and institutions that struggle to build multiple meanings of UA, the analysis draws on the literature on solidarity economy, governance, public action, and networks. These theories frame an analysis that emphasizes the inter-subjectivity built between subjects, as well as the hybrid dynamics of multiple actors and spaces in the construction of policies for UA. Concerning UA, we identified four main typologies based on land ownership, main function (economic or activist), form of organization of the space, and type of production (organic or not).
The City Hall registers 500 productive agricultural units with around 1,500 producers, but researchers estimate a larger number of units. Concerning the social movements, we identified three categories that differ in goals and types of organization, but all of them work through networks of activists and/or organizations. The first category does not consider itself a movement but a network: its members occupy public spaces to grow organic food and to propose another type of social relations in the city, an action similar to what became known as the green guerrillas. The second is a movement structured to raise awareness about agro-ecological activities. The third is a network of social movements, farmers, organizations, and politicians focused on pressure and negotiation with the executive and legislative branches of government to approve regulations and policies on organic and agro-ecological Urban Agriculture. We conclude by highlighting how the interaction between institutions and civil society produced important achievements for the recognition and implementation of UA within the city. Some results of this process are awareness of local production, legal and institutional recognition of the rural zone around the city in the planning instruments, investment in organic public school procurement, the establishment of participatory management of public squares, and the inclusion of UA in the Municipal Strategic Plan and Master Plan.
Keywords: public action, policies, agroecology, urban and peri-urban agriculture, Sao Paulo
Procedia PDF Downloads 293
1630 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty
Authors: Ben Khayut, Lina Fabri, Maya Avikhana
Abstract:
Models of modern Artificial Narrow Intelligence (ANI) cannot: a) function independently and continuously without human intelligence, which is needed to retrain and reprogram the ANI models, or b) think, understand, be conscious, cognize, or infer under uncertainty and under changes in situations and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a conception, model, and method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty (CPCSFSCBMSUU). The system uses a neural network as its computational memory, operates under uncertainty, and activates its functions through perception and identification of real objects, fuzzy situational control, and the forming of images of these objects, modeling the psychological, linguistic, cognitive, and neural values of their properties and features. The meanings of these values are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images accumulated in the Memory. The functioning of the CPCSFSCBMSUU is carried out by its subsystems for: fuzzy situational control of all processes; computational perception; identification of reactions and actions; psycholinguistic cognitive fuzzy logical inference; decision making; reasoning; systems thinking; planning; awareness; consciousness; cognition; intuition; wisdom; analysis and processing of psycholinguistic, subject, visual, signal, sound, and other objects; accumulation and use of data, information, and knowledge in the Memory; and communication and interaction with other computing systems, robots, and humans in order to solve joint tasks. To investigate the functional processes of the proposed system, the principles of Situational Control, Fuzzy Logic, Psycholinguistics, Informatics, and the modern possibilities of Data Science were applied.
The proposed self-controlled Brain and Mind System is intended for use as a plug-in in multilingual subject applications.
Keywords: computational brain, mind, psycholinguistic, system, under uncertainty
Procedia PDF Downloads 176
1629 The Political Economy of Green Trade in the Context of US-China Trade War: A Case Study of US Biofuels and Soybeans
Authors: Tonghua Li
Abstract:
Under the neoliberal corporate food regime, biofuels are a double-edged sword that exacerbates tensions between national food security and trade in green agricultural products. Biofuels have the potential to help achieve green sustainable development goals, but they threaten food security by exacerbating competition for land and changing global food trade patterns. The U.S.-China trade war complicates this debate. Under the influence of the different political and corporate coordination mechanisms in China and the US, trade disputes can have different impacts on sustainable agricultural practices. This paper develops an actor-centred ‘network governance framework’ focusing on trade in soybean and corn-based biofuels to explain how trade wars can change the actions of governmental and non-governmental actors in the context of oligopolistic competition and market concentration in agricultural trade. There is evidence that US-China trade decoupling exacerbates the conflict between national security, free trade in agriculture, and the realities and needs of green and sustainable energy development. The US government's trade policies reflect concerns about China's relative gains, leading to a loss of trade profits and making it impossible for the parties involved to balance the three objectives, pushing the biofuels and soybean industries into a dilemma. In a setting that prioritizes national security and strategic interests, the government has displaced large agribusiness from its dominant position in the neoliberal food system, and the goal of environmental sustainability has been marginalized by high politics. In contrast, China faces tensions in the trade war between its food security self-sufficiency policy and liberal sustainable trade, but the state-capitalist model ensures policy coordination and coherence in trade diversion and supply chain adjustment.
Despite ongoing raw material shortages and technological challenges, China remains committed to playing a role in global environmental governance and promoting green trade objectives.
Keywords: food security, green trade, biofuels, soybeans, US-China trade war
Procedia PDF Downloads 5
1628 Current Zonal Isolation Regulation and Standards: A Compare and Contrast Review in Plug and Abandonment
Authors: Z. A. Al Marhoon, H. S. Al Ramis, C. Teodoriu
Abstract:
Well integrity is one of the major elements considered in drilling geothermal, oil, and gas wells: it means minimizing the risk of unplanned fluid flow in the wellbore throughout the well's lifetime. Well integrity is maximized by applying technical concepts along with practical experience and strategic planning, practices that are usually governed by standardization and regulation entities. Practices during well construction can affect the integrity of the seal at the time of abandonment. On the other hand, achieving a perfect barrier system is impracticable because of the cost involved, which forces a balance between regulatory requirements and practical application; guidelines are only effective when they are attainable in practice. Governmental regulations and international standards differ in what they consider high-quality isolation from unwanted flow, and each regulating or standardization body differs in its requirements depending on the abandonment objective. Some regulations account more for environmental impact, water table contamination, and possible leaks; others lean toward economic benefit while still achieving acceptable isolation criteria. The research methodology used here is a literature review combined with a compare-and-contrast analysis. The literature review covers zonal isolation guidelines from NORSOK (the Norwegian governing entity), BSEE (the US offshore governing entity), and API (American Petroleum Institute), combined with ISO (International Organization for Standardization). The compare-and-contrast analysis assesses the objective of each abandonment regulation and standard. The current state of well barrier regulation is a balancing act: on one side of the balance sit environmental impact and complete zonal isolation.
The other side of the scale is practical application and the associated cost. Some standards provide a fair amount of detail on technical requirements and are often flexible about the associated cost. These guidelines cover environmental impact with laws that prevent major or disastrous environmental effects of improper well sealing, but they are usually concerned with the near-term performance of the seal rather than the long term; consequently, applying them is more feasible, from a cost point of view, for the entities required to plug wells. Other regulations, by contrast, lean toward more environmental restrictions with higher associated cost requirements: environmental impact is covered in its entirety, including the medium and small impacts of barrier-installation operations, and clear, precise attention is paid to long-term leakage prevention. The compare-and-contrast analysis of the literature showed that various objectives can tip the scale from one side of the balance (cost) to the other (sealing quality), especially with reference to zonal isolation. Furthermore, investing in initial well construction is a crucial part of ensuring safe final well abandonment: the safety and cost savings at the end of the well life cycle depend on a well-constructed isolation system at the beginning of the life cycle. Long-term studies on zonal isolation using various hydraulic or mechanical materials are needed to further assess permanently abandoned wells and achieve the desired balance. Well drilling and isolation techniques will be more effective when they are operationally feasible and their associated cost is reasonable enough to aid the local economy.
Keywords: plug and abandon, P&A regulation, P&A standards, international guidelines, gap analysis
Procedia PDF Downloads 132
1627 Transforming Data Science Curriculum Through Design Thinking
Authors: Samar Swaid
Abstract:
Today, corporations are moving toward the adoption of Design Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design Thinking, IDEO (Innovation, Design, Engineering Organization), defines Design Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design Thinking may result in new ideas, narratives, objects, or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees Design Thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of Design Thinking has been the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply Design Thinking to machine learning and artificial intelligence to ensure the 'wow' effect on consumers. The Association for Computing Machinery task force on Data Science programs states that 'Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability.' However, this definition hides the user behind the machine who works on data preparation, algorithm selection, and model interpretation.
Thus, the Data Science program includes Design Thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design Thinking and teaching modules for data science programs.
Keywords: data science, design thinking, AI, curriculum, transformation
Procedia PDF Downloads 79
1626 Determinants of Hospital Obstetric Unit Closures in the United States 2002-2013: Loss of Hospital Obstetric Care 2002-2013
Authors: Peiyin Hung, Katy Kozhimannil, Michelle Casey, Ira Moscovice
Abstract:
Background/Objective: The loss of obstetric services has been a pressing concern in urban and rural areas nationwide. This study aims to determine the factors that contribute to the loss of obstetric care through closures of a hospital or obstetric unit. Methods: Data from the 2002-2013 American Hospital Association annual surveys were used to identify hospitals providing obstetric services. We linked these data to the Medicare Healthcare Cost Report Information for hospital financial indicators, the US Census Bureau's American Community Survey for zip-code-level characteristics, and the Area Health Resource files for county-level clinician supply measures. A discrete-time multinomial logit model was used to determine factors contributing to obstetric unit or hospital closures. Results: Of 3,551 hospitals providing obstetric services during 2002-2013, 82% kept units open, 12% stopped providing obstetric services, and 6% closed down completely, with state-level variations. Factors that significantly increased a hospital's probability of obstetric unit closure included an annual birth volume below 250 (adjusted marginal effect [95% confidence interval] = 34.1% [28%, 40%]), closer proximity to another hospital with obstetric services (per 10 miles: -1.5% [-2.4%, -0.5%]), being in a county with lower family physician supply (-7.8% [-15.0%, -0.6%]), being in a zip code with a higher percentage of non-white females (per 10%: 10.2% [2.1%, 18.3%]), and lower income (per $1,000 of income: -0.14% [-0.28%, -0.01%]). Conclusions: Over the past 12 years, the loss of obstetric services has disproportionately affected areas served by low-volume urban and rural hospitals, non-white and low-income communities, and counties with fewer family physicians, signaling a need to address maternity care access in these communities.
Keywords: access to care, obstetric care, service line discontinuation, hospital, obstetric unit closures
Procedia PDF Downloads 220
1625 Understanding the Information in Principal Component Analysis of Raman Spectroscopic Data during Healing of Subcritical Calvarial Defects
Authors: Rafay Ahmed, Condon Lau
Abstract:
Bone healing is a complex and sequential process involving changes at the molecular level. Raman spectroscopy is a promising technique for studying bone mineral and matrix environments simultaneously. In this study, subcritical calvarial defects are used to study bone composition during healing without disturbing the fracture; the model allows the natural healing of bone to be monitored without mechanical harm to the callus. Calvarial defects were created with a 1 mm burr drill in the parietal bones of Sprague-Dawley rats (n = 8) to serve as in vivo defects. After 7 days, the animals were euthanized and their skulls harvested. One additional defect per sample was then created on the opposite parietal bone using the same procedure, to serve as a control defect. Raman spectroscopy (785 nm) was used to investigate bone parameters on three different skull surfaces: in vivo defects, control defects, and normal surface. Principal component analysis (PCA) was used for the analysis and interpretation of the Raman spectra and helped in the classification of groups. PCA was able to distinguish in vivo defects from the normal surface and control defects. PC1 shows the major variation at 958 cm⁻¹, which corresponds to the ν1 phosphate mineral band; PC2 shows the major variation at 1448 cm⁻¹, the characteristic band of CH₂ deformation, which corresponds to collagens. The Raman parameters mineral-to-matrix ratio and crystallinity were found to be significantly decreased in the in vivo defects compared to the normal surface and controls. Scanning electron microscope and optical microscope images show newly generated matrix in the form of bony collagen bridges. An optical profiler shows that surface roughness increased by 30% from controls to in vivo defects after 7 days. These results agree with the Raman assessment parameters and confirm new collagen formation during healing.
Keywords: Raman spectroscopy, principal component analysis, calvarial defects, tissue characterization
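The PCA step above can be illustrated on simulated Raman-like spectra. The band positions follow the abstract (958 cm⁻¹ phosphate, 1448 cm⁻¹ CH₂/collagen), but the spectra, group sizes, and band intensities are all synthetic assumptions, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavenumbers = np.arange(800, 1800)  # cm^-1 axis

def spectrum(mineral, collagen):
    # Gaussian bands at ~958 (phosphate) and ~1448 cm^-1 (CH2 / collagen)
    s = (mineral * np.exp(-((wavenumbers - 958) / 15) ** 2)
         + collagen * np.exp(-((wavenumbers - 1448) / 20) ** 2))
    return s + rng.normal(0, 0.02, wavenumbers.size)

# Simulated groups: healing defects assumed to have less mineral, more new collagen
normal = np.stack([spectrum(1.0, 0.4) for _ in range(10)])
defects = np.stack([spectrum(0.6, 0.7) for _ in range(10)])
X = np.vstack([normal, defects])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
# The PC1 loading should peak near the 958 cm^-1 phosphate band,
# mirroring the abstract's interpretation of PC1
pc1_peak = wavenumbers[np.argmax(np.abs(pca.components_[0]))]
print(pc1_peak)
```

The PC1 scores separate the two groups, which is how PCA "helps in the classification of groups" here: the dominant variance direction is the mineral/matrix difference between healing and intact bone.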
Procedia PDF Downloads 221
1624 An Evaluation of a Student Peer Mentoring Program
Authors: Nazeema Ahmed
Abstract:
This paper reports on the development of a student peer mentoring programme at a higher education institution. The programme depends on volunteer senior undergraduate students who are trained to mentor first-year students studying towards an engineering degree. For the evaluation, first-year students completed a self-report paper questionnaire at the start of a lecture, while mentors completed their questionnaire electronically. The evaluation yielded mixed findings. Peer mentoring clearly benefited some students in their adjustment to the institution. Specific mentors' personal attributes enabled the establishment of successful mentoring relationships in which encouragement, advice, and academic assistance were provided. Gains were reciprocal, with mentors reporting that the programme contributed to their personal development. Confidence in the programme was expressed in mentors feeling that it was an initiative worth continuing and in first-year students agreeing that it be recommended to future first-year students. This was despite many unfavourable experiences in which mentors' professionalism and commitment to the programme were suspect: while mentors began with noble intentions, they appear either to lose interest or to become overwhelmed with their own workload as the academic year progresses. On the other hand, some mentors reported feeling challenged by the apathy of first-year students who failed to make the most of the opportunity available to them. The different attitudes towards mentoring that manifested as a mentoring culture in some departments were particularly pertinent to successful implementation. The findings point to the key role of academic staff in the mentoring programme, who model the mentoring relationship in their interaction with student mentors.
While their involvement in the programme may be perceived as a drain on resources in an already demanding academic teaching environment, it is imperative that structural changes be put in place for the programme to be both efficient and sustainable. A pervasive finding concerns the evolving institutional culture of student development in the faculty: mentors and first-year students alike alluded to the potential of the mentoring programme provided it is seriously endorsed at both the departmental and faculty levels. The findings provide a foundation from which to develop the programme further and to begin improving its capacity for maximizing student retention in South African higher education.
Keywords: engineering students, first-year students, peer mentoring
Procedia PDF Downloads 252
1623 Investigation of Aerodynamic and Design Features of Twisting Tall Buildings
Authors: Sinan Bilgen, Bekir Ozer Ay, Nilay Sezer Uzol
Abstract:
After decades of conventional shapes, irregular forms with complex geometries are becoming more popular for the form generation of tall buildings all over the world. This trend has recently brought forth diverse building forms such as twisting tall buildings. This study investigates both the aerodynamic and design features of twisting tall buildings through comparative analyses. Since twisting a tall building gives rise to additional complexities in form and structural system, lateral load effects become more important for these buildings. The aim of this study is to analyze the inherent characteristics of these iconic forms by comparing the wind loads on twisting tall buildings with those on their prismatic twins. Through a case study, aerodynamic analyses of an existing twisting tall building and its prismatic counterpart were performed and the results compared. The prismatic twin of the original building was generated by removing the progressive rotation of its floors while keeping the same plan area and story height. The performance-based measures under investigation were evaluated in conjunction with the architectural design. Aerodynamic effects were analyzed by both wind tunnel tests and computational methods: high-frequency base balance tests and pressure measurements on 3D models were performed to evaluate wind load effects at the global and local scales. Flat-surface and real-surface models were also compared to evaluate the effects of the twisting form without the contribution of the façade texture. The comparisons highlighted that the twisting form under investigation shows better aerodynamic behavior in the along-wind, and particularly the across-wind, direction. Compared to its prismatic counterpart, the twisting model is superior at reducing the vortex-shedding dynamic response by disorganizing the wind vortices.
Consequently, despite the difficulties arising from the inherent complexity of twisted forms, they can still be feasible and viable, with their attractive images, in the realm of tall buildings.
Keywords: aerodynamic tests, motivation for twisting, tall buildings, twisted forms, wind excitation
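The across-wind benefit of a twist can be sketched with the Strouhal relation f = St·U/D: a prismatic tower with constant width sheds vortices at one coherent frequency, while a twisting tower's varying effective width spreads shedding over a band, disorganizing the excitation. Every number below (Strouhal number, wind speed, widths) is an illustrative assumption, not a value from the study's wind-tunnel campaign.

```python
import numpy as np

St = 0.12   # assumed Strouhal number for a roughly square tall building
U = 30.0    # assumed mean wind speed at reference height, m/s

# Prismatic tower: constant effective width -> one coherent shedding frequency
D_prism = 40.0
f_prism = St * U / D_prism

# Twisting tower: effective across-wind width varies with height as the
# floor plates rotate (hypothetical variation), so the local shedding
# frequency varies along the height instead of locking in at one value
heights = np.linspace(0.0, 1.0, 50)            # normalized height
D_twist = 40.0 + 8.0 * np.sin(np.pi * heights) # hypothetical width profile, m
f_twist = St * U / D_twist

print(f_prism, f_twist.min(), f_twist.max())
```

Because the local shedding frequencies no longer coincide, the correlated across-wind force on the whole height drops, which is one intuitive reading of the "disorganizing the wind vortices" result above.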
Procedia PDF Downloads 232
1622 Phytochemical Investigation, Leaf Structure and Antimicrobial Screening of Pistacia lentiscus against Multi-Drug Resistant Bacteria
Authors: S. Mamoucha, N. Tsafantakis, T. Ioannidis, S. Chatzipanagiotou, C. Nikolaou, L. Skaltsounis, N. Fokialakis, N. Christodoulakis
Abstract:
Introduction: Pistacia lentiscus L. (well known as the mastic tree) is an evergreen sclerophyllous shrub that thrives extensively in the eastern Mediterranean area, yet only the trees cultivated in the southern region of the Greek island of Chios produce mastic resin. Different parts of P. lentiscus L. var. chia have been used in folk medicine for various purposes, such as a tonic, aphrodisiac, and antiseptic, as an antihypertensive, and for the management of dental, gastrointestinal, liver, urinary, and respiratory tract disorders. Several studies have focused on the antibacterial activity of its resin (gum) and its essential oil. However, no study has combined the anatomy of the plant organs, the phytochemical profile, and antibacterial screening of the plant. In our attempt to discover novel bioactive metabolites from the mastic tree, we screened its antibacterial activity not only against ATCC strains but also against clinical, resistant strains. Materials and methods: Leaves were investigated using transmission (TEM) and scanning electron microscopy (SEM). Histochemical tests were performed on fresh and fixed tissue. Extracts prepared from dried, powdered leaves using three different solvents (DCM, MeOH, and H2O), as well as the wastewater obtained from a hydrodistillation process for essential oil production, were screened for their phytochemical content and antibacterial activity. Metabolite profiling of the polar and non-polar extracts was recorded by GC-MS and LC-HRMS techniques and analyzed using in-house and commercial libraries. The antibacterial screening was performed against Staphylococcus aureus ATCC 25923, Escherichia coli ATCC 25922, and Pseudomonas aeruginosa ATCC 27853, and against the clinical resistant strains methicillin-resistant S. aureus (MRSA), carbapenem-resistant metallo-β-lactamase (carbapenemase) P. aeruginosa (VIM), Klebsiella pneumoniae carbapenemases (KPCs), and resistant Acinetobacter baumannii strains. Antibacterial activity was tested by the Kirby-Bauer and agar well diffusion methods.
The zone of inhibition (ZI) of each extract was measured and compared with those of common antibiotics. Results: The leaf is compact, with inosclereids and numerous idioblasts containing a globular, spiny crystal. The major nerves of the leaf contain a resin duct. Mesophyll cells showed accumulation of osmiophilic metabolites. Histochemical treatments located secondary metabolites at the subcellular level. The phytochemical investigation revealed the presence of a large number of secondary metabolites belonging to different chemical groups, such as terpenoids, phenolic compounds (mainly myricetin, kaempferol, and quercetin glycosides), and phenolic and fatty acids. Among the extracts, the hydrodistillation wastewater achieved the best results against most of the bacteria tested: MRSA, VIM, and A. baumannii were inhibited. Conclusion: Plant extracts have recently attracted great interest with respect to their antimicrobial activity, their use emerging from a growing tendency to replace synthetic antimicrobial agents with natural ones. Leaves of P. lentiscus L. var. chia showed high antimicrobial activity, even against drug-resistant bacteria. Future work concerns a better understanding of the mode of antibacterial action, the isolation of the most bioactive constituents, and clarifying whether the activity is related to a single compound or to the synergistic effect of several.
Keywords: antibacterial screening, leaf anatomy, phytochemical profile, Pistacia lentiscus var. chia
Procedia PDF Downloads 273
1621 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy
Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu
Abstract:
Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and predictions can improve when spectra are collected from flour samples rather than whole grains. However, the feasibility of determining the critical biochemicals relevant to classification for food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS, and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours), on the prediction of chemical components, in order to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 sorghum hybrids were selected from two locations in China. Using the NIR spectra and wet-chemistry biochemical measurements, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data from grain flours generally improved the prediction compared with NIR data from whole grains; nevertheless, spectra of whole grains enabled comparable predictions and are recommended when a non-destructive and rapid analysis is required. Compared with hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data.
Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate large numbers of samples by predicting the required biochemical components in sorghum grains without destroying them.
Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR
Procedia PDF Downloads 66
1620 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow
Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam
Abstract:
Studies of highway safety are becoming the need of the hour, as over 400 lives are lost every day in India due to road crashes. In order to evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and their relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on crash severity. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the highway's horizontal alignment (straight or curved section), time of day, driveway density, presence of a median, median openings, gradient, operating speed, and annual average daily traffic; these variables were selected after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and negative binomial regression. The output of the statistical models showed that the horizontal alignment components, driveway density, time of day, operating speed, and annual average daily traffic all have a significant relation to crash severity, for both fatal and injury crashes. Annual average daily traffic has a stronger effect on severity than the other variables, and the contribution of the horizontal alignment components to crash severity is also significant. The logit models predicted crashes better than the negative binomial regression models.
The results of the study will help transport planners to consider these aspects at the planning stage itself for highways operated under heterogeneous traffic flow conditions.
Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety
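The logit model used in the study treats crash severity as a binary outcome driven by roadway and traffic covariates. A minimal sketch in Python of such a model, fitted by gradient ascent on the log-likelihood; the predictors (curve indicator, night-time indicator, scaled AADT) and the data are hypothetical, not the study's dataset:

```python
import math

def fit_logit(X, y, lr=0.1, epochs=2000):
    """Fit a binary logit model (1 = fatal/injury crash, 0 = otherwise)
    by plain stochastic gradient ascent on the log-likelihood."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted severity probability
            err = yi - p
            b += lr * err
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict_prob(w, b, x):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: [curved section (0/1), night-time (0/1), scaled AADT]
X = [[0, 0, 0.2], [1, 1, 0.9], [0, 1, 0.5], [1, 0, 0.8],
     [0, 0, 0.1], [1, 1, 1.0], [0, 0, 0.3], [1, 1, 0.7]]
y = [0, 1, 0, 1, 0, 1, 0, 1]
w, b = fit_logit(X, y)
p_high = predict_prob(w, b, [1, 1, 0.9])  # curve, night, heavy traffic
p_low = predict_prob(w, b, [0, 0, 0.2])   # straight, daytime, light traffic
```

The fitted coefficients play the role of the significance estimates reported in the abstract, with each weight quantifying one covariate's contribution to severity odds.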
Procedia PDF Downloads 301
1619 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity
Authors: Shivdayal Patel, Suhail Ahmad
Abstract:
Safety assurance and failure prediction of the composite material components of an offshore structure under low velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and impact load. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. In fact, the probability of occurrence of such a scenario is governed by the large uncertainties arising in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon due to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT). The limit state function g(x) is established such that the lamina is safe when the stresses satisfy g(x) > 0. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. The chain of failure events due to different modes of failure is considered to estimate the consequences of a failure scenario.
Frequencies of occurrence of specific impact hazards yield the expected risk due to economic loss.
Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling
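The probability of failure defined through the limit state g(x) ≤ 0 can be illustrated with a crude Monte Carlo sketch. The Gaussian strength and stress parameters below are invented for illustration only; the study itself evaluates g(x) through a Gaussian process response surface built on FE results:

```python
import random

def failure_probability(n_samples=100_000, seed=1):
    """Crude Monte Carlo estimate of P(g(x) <= 0) for a scalar limit state
    g = strength - stress, with Gaussian uncertainty in both (illustrative
    values, not the paper's laminate model)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(500.0, 50.0)  # MPa, assumed ply strength
        stress = rng.gauss(350.0, 40.0)    # MPa, assumed impact-induced stress
        if strength - stress <= 0.0:       # limit state violated
            failures += 1
    return failures / n_samples

pf = failure_probability()
```

With these toy distributions the safety margin is itself Gaussian, so the estimate can be checked analytically; a surrogate such as a Gaussian process is needed precisely when g(x) is only available through expensive FE runs.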
Procedia PDF Downloads 277
1618 Design and Analysis for a 4-Stage Crash Energy Management System for Railway Vehicles
Authors: Ziwen Fang, Jianran Wang, Hongtao Liu, Weiguo Kong, Kefei Wang, Qi Luo, Haifeng Hong
Abstract:
A 4-stage crash energy management (CEM) system for subway rail vehicles used by the Massachusetts Bay Transportation Authority (MBTA) in the USA is developed in this paper. The 4 stages of this new CEM system include 1) an energy-absorbing coupler (draft gear and shear bolts), 2) primary energy absorbers (aluminum honeycomb structured boxes), 3) secondary energy absorbers (crush tubes), and 4) the collision post and corner post. A sliding anti-climber and a fixed anti-climber are designed at the front of the vehicle, cooperating with the 4-stage CEM to maximize the energy absorbed and minimize the harm to passengers and crew. In order to investigate the effectiveness of this CEM system, both finite element (FE) methods and a crashworthiness test have been employed. The whole vehicle consists of 3 married pairs, i.e., six cars. In the FE approach, full-scale railway car models are developed and different collision cases are investigated, such as a single moving car impacting a rigid wall, two moving cars impacting a rigid wall, two moving cars impacting two stationary cars, and six moving cars impacting six stationary cars. The FE analysis results show that a railway vehicle incorporating this CEM system has superior crashworthiness performance. In the crashworthiness test, a simplified vehicle front end including the sliding anti-climber, the fixed anti-climber, the primary energy absorbers, the secondary energy absorber, the collision post, and the corner post is built and impacted against a rigid wall. The same test model is also analyzed in FE, and results such as the crushing force, the stress and strain of critical components, and the acceleration and velocity curves are compared and studied. The FE results agree very well with the test results.
Keywords: railway vehicle collision, crash energy management design, finite element method, crashworthiness test
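The sizing logic behind a staged CEM system can be sketched as a simple energy balance: each stage absorbs roughly its mean crush force times its available stroke, and the total must cover the collision kinetic energy. All masses, speeds, forces, and strokes below are illustrative assumptions, not MBTA design values:

```python
def kinetic_energy_mj(mass_kg, speed_m_s):
    """Kinetic energy of the impacting mass, in megajoules."""
    return 0.5 * mass_kg * speed_m_s ** 2 / 1e6

def absorbed_energy_mj(stages):
    """Each stage absorbs roughly (mean crush force) x (available stroke).
    Illustrative numbers only -- not design values from the paper."""
    return sum(force_kn * stroke_m / 1e3 for force_kn, stroke_m in stages)

# Hypothetical 4-stage layout: (mean crush force [kN], stroke [m])
stages = [
    (800.0, 0.10),   # 1) energy-absorbing coupler
    (1200.0, 0.35),  # 2) primary absorbers (honeycomb box)
    (1500.0, 0.30),  # 3) secondary absorbers (crush tubes)
    (2500.0, 0.20),  # 4) collision / corner posts
]
ek = kinetic_energy_mj(40_000, 8.0)   # one 40 t car at about 29 km/h
ea = absorbed_energy_mj(stages)
margin = ea - ek                      # positive => stages can absorb the impact
```

A rising force level from stage to stage, as in this sketch, is what lets minor collisions consume only the cheap, replaceable front stages.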
Procedia PDF Downloads 401
1617 Production of Bioethanol from Oil Palm Trunk by a Cocktail of Carbohydrase Enzymes Produced by Thermophilic Bacteria Isolated from a Hot Spring in West Sumatera, Indonesia
Authors: Yetti Marlida, Syukri Arif, Nadirman Haska
Abstract:
Recently, alcohol fuels have been produced on industrial scales by fermentation of sugars derived from wheat, corn, sugar beets, sugar cane, etc. The enzymatic hydrolysis of cellulosic materials to produce fermentable sugars has enormous potential for meeting global bioenergy demand through the biorefinery concept, since agri-food processes generate millions of tonnes of waste each year (Xeros and Christakopoulos 2009), such as sugar cane bagasse, wheat straw, rice straw, corn cob, and oil palm trunk. In fact, oil palm trunk is one of the most abundant lignocellulosic waste by-products worldwide, coming especially from Malaysia, Indonesia, and Nigeria, and provides an alternative substrate for producing useful chemicals such as bioethanol. The economical life of the oil palm is usually from 3 to 25 years, after which it is cut down for replantation. The trunk is usually 15-18 meters in length and 46-60 centimeters in diameter. After cutting, the trunk is an agricultural waste that is problematic to dispose of, but because it contains about 42% cellulose, 34.4% hemicellulose, 17.1% lignin, and 7.3% other compounds, this agricultural waste can yield value-added products (Pumiput, 2006). This research produced bioethanol from oil palm trunk (OPT) via saccharification by cocktail carbohydrase enzymes. Enzymatic saccharification of acid-treated oil palm trunk was carried out in a reaction mixture containing 40 g treated oil palm trunk in 200 ml 0.1 M citrate buffer pH 4.8 with 500 unit/kg amylase for treatment A; treatment B: treatment A + 500 unit/kg cellulase; treatment C: treatment B + 500 unit/kg xylanase; treatment D: treatment C + 500 unit/kg ligninase; and treatment E: untreated OPT + 500 unit/kg amylase + 500 unit/kg cellulase + 500 unit/kg xylanase + 500 unit/kg ligninase. The reaction mixture was incubated on a water bath rotary shaker adjusted to 60 °C and 75 rpm. Samples were withdrawn at intervals of 12, 24, 36, 48, 60, and 72 hr.
For bioethanol production in a 5 L biofermentor, the hydrolysis product was inoculated with a loop of Saccharomyces cerevisiae and then incubated at 34 °C under static conditions. Samples were withdrawn after 12, 24, 36, 48, and 72 hr for bioethanol and residual glucose analysis. The results of the enzymatic hydrolysis (Figure 1) showed that treatment B (OPT hydrolyzed with amylase and cellulase) gave the optimum conditions for glucose production, as both enzymes together degraded the OPT completely. Similar results were reported by Primarini et al. (2012), who found the optimum conditions for the hydrolysis of OPT at a concentration of 25% (w/v) with 0.3% (w/v) amylase, 0.6% (w/v) glucoamylase, and 4% (w/v) cellulase. Figure 2 shows that the optimum bioethanol was produced at 48 hr after incubation; as incubation time increased further, the bioethanol decreased. According to Roukas (1996), a decrease in the concentration of ethanol occurs with excess glucose as substrate and with product inhibition effects. A substrate concentration that is too high reduces the amount of dissolved oxygen; although needed only in very small amounts, oxygen is still required in fermentation by Saccharomyces cerevisiae to sustain life at high cell concentrations (Nowak 2000, Tao et al. 2005). From these results it can be concluded that the optimum enzymatic hydrolysis occurred when the OPT was treated with amylase and cellulase, and the optimum bioethanol was produced at 48 hr of incubation using Saccharomyces cerevisiae, with 18.08% bioethanol produced from glucose conversion. This work was funded by the Directorate General of Higher Education (DGHE), Ministry of Education and Culture, contract no. 245/SP2H/DIT.LimtabMas/II/2013.
Keywords: oil palm trunk, enzymatic hydrolysis, saccharification
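The reported glucose-to-ethanol conversion can be checked against the fermentation stoichiometry, where the theoretical maximum is 0.511 g ethanol per g glucose. A small sketch; the measured ethanol figure used here is a hypothetical value for illustration, not data from the study:

```python
def ethanol_yield_pct(glucose_g_per_g, ethanol_g_per_g):
    """Percent of the theoretical ethanol yield (0.511 g ethanol per g
    glucose, from the fermentation stoichiometry
    C6H12O6 -> 2 C2H5OH + 2 CO2)."""
    return 100.0 * ethanol_g_per_g / (0.511 * glucose_g_per_g)

# Reducing sugars on the order of 0.619 g per g of pretreated substrate,
# with a hypothetical measured ethanol of 0.18 g per g substrate:
pct_of_theoretical = ethanol_yield_pct(0.619, 0.18)
```

Expressing results as a percentage of the stoichiometric maximum makes fermentation runs with different sugar loadings directly comparable.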
Procedia PDF Downloads 513
1616 A New Co(II) Metal Complex Template with 4-dimethylaminopyridine Organic Cation: Structural, Hirshfeld Surface, Phase Transition, Electrical Study and Dielectric Behavior
Authors: Mohamed dammak
Abstract:
Great attention has been paid to the design and synthesis of novel organic-inorganic compounds in recent decades because of their structural variety and the large diversity of atomic arrangements. In this work, the structure of the novel dimethylaminopyridine tetrachlorocobaltate (C₇H₁₁N₂)₂CoCl₄, prepared by the slow evaporation method at room temperature, is discussed. The X-ray diffraction results indicate that the hybrid material has a triclinic structure with a P space group and features a 0D structure containing isolated distorted [CoCl₄]²⁻ tetrahedra interposed between [C₇H₁₁N₂]⁺ cations, forming planes perpendicular to the c axis at z = 0 and z = ½. Cohesion between the cationic planes and the isolated [CoCl₄]²⁻ tetrahedra is provided by N-H⋯Cl and C-H⋯Cl hydrogen-bonding contacts. Hirshfeld surface analysis helps assess the strength of the hydrogen bonds and quantify the intermolecular contacts. A phase transition was discovered by thermal analysis at 390 K, and comprehensive dielectric research is reported, showing good agreement with the thermal data. Impedance spectroscopy measurements were used to study the electrical and dielectric characteristics over a wide range of frequencies and temperatures, 40 Hz-10 MHz and 313-483 K, respectively. The Nyquist plot (Z" versus Z') from the complex impedance spectrum revealed semicircular arcs described by a Cole-Cole model. An equivalent electrical circuit consisting of linked grain and grain-boundary elements is employed. The real and imaginary parts of the dielectric permittivity, as well as tg(δ), of (C₇H₁₁N₂)₂CoCl₄ at different frequencies reveal a distribution of relaxation times. The presence of grains and grain boundaries is confirmed by the modulus investigations.
Electric and dielectric analyses highlight the good protonic conduction of this material.
Keywords: organic-inorganic, phase transitions, complex impedance, protonic conduction, dielectric analysis
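The Cole-Cole element used to describe the depressed semicircular arcs in the Nyquist plot has a simple closed form. A sketch with illustrative parameters, not values fitted to this material's data:

```python
import math

def cole_cole_z(freq_hz, r_inf, r0, tau, alpha):
    """Complex impedance of a Cole-Cole element:
    Z(w) = R_inf + (R_0 - R_inf) / (1 + (j*w*tau)**alpha).
    alpha = 1 recovers an ideal Debye semicircle; alpha < 1 depresses
    the arc, reflecting a distribution of relaxation times."""
    w = 2.0 * math.pi * freq_hz
    return r_inf + (r0 - r_inf) / (1.0 + (1j * w * tau) ** alpha)

# Illustrative parameters (assumed, not fitted to the paper's spectra),
# evaluated at the ends of the reported 40 Hz - 10 MHz window
z_low = cole_cole_z(40.0, 100.0, 10_000.0, 1e-4, 0.85)
z_high = cole_cole_z(10e6, 100.0, 10_000.0, 1e-4, 0.85)
```

Fitting one such element per semicircular arc (grain, grain boundary) reproduces the series grain / grain-boundary equivalent circuit the abstract describes.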
Procedia PDF Downloads 84
1615 Contrasted Mean and Median Models in Egyptian Stock Markets
Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid
Abstract:
Emerging market return distributions have shown significant departure from normality: they are characterized by fatter tails relative to the normal distribution and exhibit levels of skewness and kurtosis well beyond it. Therefore, the classical Markowitz mean-variance analysis is not applicable to emerging markets, since it assumes normally distributed returns (with zero skewness and excess kurtosis) and a quadratic utility function. The Markowitz mean-variance analysis can be used in cases of moderate non-normality, where it still provides a good approximation of the expected utility, but it may be ineffective under large departures from normality. Higher-moment models and median models have been suggested in the literature for asset allocation in this case. Higher-moment models were introduced to account for the insufficiency of describing a portfolio by only its first two moments, while the median model was introduced as a robust statistic that is less affected by outliers than the mean. Tail risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced instead of variance to capture the effect of risk. In this research, higher-moment models, including Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK), are formulated as single-objective non-linear programming (NLP) problems, and median models, including Median-Value-at-Risk (MedVaR) and Median-Mean-Absolute-Deviation (MedMAD), are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher-moment models and median models are compared to some benchmark portfolios and tested on real financial data from the Egyptian main index EGX30. The results show that all the median models outperform the higher-moment models, in that they provide higher final wealth for the investor over the entire period of study.
In addition, the results confirm the inapplicability of the classical Markowitz mean-variance analysis to the Egyptian stock market, as it resulted in very low realized profits.
Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming
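The higher-moment objectives (MVS, MVSK) are built from the sample moments of the return series. A sketch computing those moments for a toy fat-tailed series; the data are invented for illustration, not EGX30 returns:

```python
import math

def sample_moments(returns):
    """Mean, variance, skewness and excess kurtosis of a return series --
    the building blocks of MVS/MVSK-style objectives (population
    formulas, for illustration only)."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    sd = math.sqrt(var)
    skew = sum(((r - mean) / sd) ** 3 for r in returns) / n
    kurt = sum(((r - mean) / sd) ** 4 for r in returns) / n - 3.0
    return mean, var, skew, kurt

# A fat-tailed toy series: mostly small moves plus one crash-like outlier
rets = [0.01, -0.005, 0.012, -0.008, 0.006, -0.09, 0.004, 0.007]
mean, var, skew, kurt = sample_moments(rets)
```

The negative skewness and positive excess kurtosis produced by the single outlier are exactly the departures from normality that motivate going beyond mean-variance.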
Procedia PDF Downloads 313
1614 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. 
Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between ±2% and ±33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
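Methodologies of the kind described equate conductive heat loss through the wall to the sum of convective and radiative losses at its surface. A sketch of one such balance, using illustrative winter readings and an assumed interior convection coefficient (not the specific formulas or measurements of the study):

```python
STEFAN_BOLTZMANN = 5.67e-8  # W m^-2 K^-4

def wall_u_value(t_in, t_out, t_wall, t_refl, emissivity, h_c=3.0):
    """Thermographic U-value estimate: conductive loss through the wall
    equals the convective plus radiative loss from its interior surface,
    U = (q_conv + q_rad) / (T_in - T_out).
    h_c is an assumed interior convection coefficient in W m^-2 K^-1."""
    q_conv = h_c * (t_in - t_wall)
    q_rad = emissivity * STEFAN_BOLTZMANN * (t_refl ** 4 - t_wall ** 4)
    return (q_conv + q_rad) / (t_in - t_out)

# Illustrative winter readings in kelvin: indoor air 293, outdoor air 263,
# interior wall surface 290, reflected apparent temperature 292
u = wall_u_value(293.0, 263.0, 290.0, 292.0, 0.9)
```

The sensitivity of the result to the assumed convection coefficient and emissivity is one reason the field estimates scatter around the nominal value.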
Procedia PDF Downloads 312
1613 Physical Activity Self-Efficacy among Pregnant Women with High Risk for Gestational Diabetes Mellitus: A Cross-Sectional Study
Authors: Xiao Yang, Ji Zhang, Yingli Song, Hui Huang, Jing Zhang, Yan Wang, Rongrong Han, Zhixuan Xiang, Lu Chen, Lingling Gao
Abstract:
Aim and Objectives: To examine physical activity self-efficacy, identify its predictors, and further explore the mechanism of action among the predictors in mainland Chinese pregnant women with high risk for gestational diabetes mellitus (GDM). Background: Physical activity could protect pregnant women from developing GDM. Physical activity self-efficacy was the key predictor of physical activity. Design: A cross-sectional study was conducted from October 2021 to May 2022 in Zhengzhou, China. Methods: 252 eligible pregnant women completed the Pregnancy Physical Activity Self-efficacy Scale, the Social Support for Physical Activity Scale, the Knowledge on Physical Activity Questionnaire, the 7-item Generalized Anxiety Disorder scale, the Edinburgh Postnatal Depression Scale, and a socio-demographic data sheet. Multiple linear regression was applied to explore the predictors of physical activity self-efficacy. Structural equation modeling was used to explore the mechanism of action among the predictors. Results: Chinese pregnant women with a high risk for GDM reported a moderate level of physical activity self-efficacy. The best-fit regression analysis revealed four variables explaining 17.5% of the variance in physical activity self-efficacy. Social support for physical activity was the strongest predictor, followed by knowledge of physical activity, intention to do physical activity, and anxiety symptoms. The model analysis indicated that knowledge of physical activity could relieve anxiety and depressive symptoms and thereby increase physical activity self-efficacy. Conclusion: The present study revealed a moderate level of physical activity self-efficacy. Interventions targeting pregnant women with high risk for GDM need to address the predictors of physical activity self-efficacy.
Relevance to clinical practice: To help pregnant women with high risk for GDM engage in physical activity, healthcare professionals may assess physical activity self-efficacy at the first antenatal visit and intervene as early as possible. Physical activity intervention programs focused on self-efficacy may be conducted in further research.
Keywords: physical activity, gestational diabetes, self-efficacy, predictors
Procedia PDF Downloads 99
1612 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil
Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins
Abstract:
Current exposure models used in contaminated land risk assessment are highly conservative. Use of these models may lead to over-estimation of actual exposures, possibly resulting in negative financial implications due to un-necessary remediation. Thus, we are carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil: arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb) resulting from allotment land-use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults > 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). Amount of produce consumed by the participants and participants’ biological samples (urine and blood) were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated ‘doses’ from produce consumption records), predictive models suggested that detection of these elements in urine and blood was possible within a given period of time following exposure. This information was used in planning biomonitoring, and is currently being used in the interpretation of test results from biological samples. Evaluation of the models is being carried out using biomonitoring data, by comparing model predicted concentrations and measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated in contaminated land exposure models. 
Thus, the findings from this study will promote a more sustainable approach to contaminated land management.
Keywords: biomonitoring, exposure, PBPK modelling, toxic elements
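A full PBPK model tracks element concentrations across several organ compartments, but its core bookkeeping — absorbed intake accumulating against first-order elimination — can be sketched with a one-compartment stand-in. All parameters below are hypothetical, not the study's calibrated values:

```python
def simulate_body_burden(dose_per_day, k_abs_fraction, k_elim_per_day, days):
    """Minimal one-compartment stand-in for a PBPK model: a fixed fraction
    of each day's oral intake is absorbed, and the body burden decays
    first-order. Forward-Euler, one step per day. Illustrative only --
    a real PBPK model resolves gut, blood, liver, kidney etc. separately."""
    burden = 0.0
    history = []
    for _ in range(days):
        burden += dose_per_day * k_abs_fraction  # absorbed daily intake
        burden -= k_elim_per_day * burden        # first-order elimination
        history.append(burden)
    return history

# Hypothetical cadmium-like parameters: 1 ug/day intake from allotment
# produce, 5% gut absorption, slow elimination (0.1% per day)
hist = simulate_body_burden(1.0, 0.05, 0.001, 365)
steady_state = 1.0 * 0.05 / 0.001  # analytic plateau = absorbed rate / k_elim
```

The slow approach to the analytic plateau is what makes such models useful for deciding when a biomarker becomes detectable in urine or blood after exposure begins.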
Procedia PDF Downloads 319
1611 Microfluidic Device for Real-Time Electrical Impedance Measurements of Biological Cells
Authors: Anil Koklu, Amin Mansoorifar, Ali Beskok
Abstract:
Dielectric spectroscopy (DS) is a noninvasive, label-free technique for long-term, real-time measurement of the impedance spectra of biological cells. DS enables characterization of cellular dielectric properties such as membrane capacitance and cytoplasmic conductivity. We have developed a lab-on-a-chip device that uses an electro-activated microwell array for loading, DS measurement, and unloading of biological cells. We utilized dielectrophoresis (DEP) to capture target cells inside the wells and release them after the DS measurement. DEP is a label-free technique that exploits differences among the dielectric properties of particles; specifically, DEP is the motion of polarizable particles suspended in an ionic solution and subjected to a spatially non-uniform external electric field. To the best of our knowledge, this is the first microfluidic chip that combines DEP and DS to analyze biological cells using electro-activated wells. Device performance was tested using two different prostate cancer cell lines (RV122, PC-3). Impedance measurements were conducted at 0.2 V in the 10 kHz to 40 MHz range with 6 s time resolution. An equivalent circuit model was developed to extract the cell membrane capacitance and cytoplasmic conductivity from the impedance spectra. We report the time course of the variations in the dielectric properties of PC-3 and RV122 cells suspended in low conductivity buffer (LCB), which enhances the dielectrophoretic and impedance responses, and their response to a sudden pH change from 7.3 to 5.8. The microfluidic chip thus allowed online measurements of the dielectric properties of prostate cancer cells and the assessment of cellular-level variations under external stimuli such as different buffer conductivities and pH. Based on these data, we intend to deploy the current device for single-cell measurements by fabricating separately addressable N × N electrode platforms.
Such a device will allow time-dependent dielectric response measurements for individual cells, with the ability to selectively release them using negative DEP and pressure-driven flow.
Keywords: microfluidic, microfabrication, lab on a chip, AC electrokinetics, dielectric spectroscopy
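The direction of the DEP force used to capture and release cells is set by the sign of the real part of the Clausius-Mossotti factor, which depends on the complex permittivities of the particle and the medium. A sketch with illustrative cell-like and low-conductivity-buffer parameters (assumed values, not measurements from the study):

```python
import math

def clausius_mossotti(freq_hz, eps_p, sigma_p, eps_m, sigma_m):
    """Real part of the Clausius-Mossotti factor for a homogeneous sphere,
    whose sign sets the DEP direction (positive -> pulled toward high
    field, negative -> pushed away). Complex permittivity is
    eps* = eps*eps0 - j*sigma/w."""
    w = 2.0 * math.pi * freq_hz
    eps0 = 8.854e-12  # vacuum permittivity, F/m
    ep = eps_p * eps0 - 1j * sigma_p / w  # particle (cell-like)
    em = eps_m * eps0 - 1j * sigma_m / w  # suspending medium
    return ((ep - em) / (ep + 2.0 * em)).real

# Illustrative parameters: conductive cell interior (0.25 S/m) in a
# low-conductivity buffer (0.01 S/m)
f_low = clausius_mossotti(10e3, 60.0, 0.25, 78.0, 0.01)
f_high = clausius_mossotti(10e6, 60.0, 0.25, 78.0, 0.01)
```

In a low-conductivity buffer the factor is strongly positive at low frequency, which is consistent with using LCB to enhance the dielectrophoretic response for trapping.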
Procedia PDF Downloads 149
1610 Relevance of the Judgements Given by the International Court of Justice with Regard to South China Sea Vis-A-Vis Marshall Islands
Authors: Hitakshi Mahendru, Advait Tambe, Simran Chandok, Niharika Sanadhya
Abstract:
After the Second World War had come to an end, the Founding Fathers of the United Nations recognized the need for a supreme peacekeeping mechanism to act as a mediator between nations and moderate disputes that might blow up if left unchecked. It has been more than seven decades since the establishment of the International Court of Justice (ICJ). When it was created, there were certain aims and objectives that the ICJ was intended to achieve. However, in today's world, with changes in political dynamics and in international relations between countries, the ICJ has not succeeded in achieving several of these objectives. The ICJ is the only body in the international arena that has the authority to regulate disputes between countries. However, in recent times, with countries like China disregarding the importance of the ICJ, there is little hope of the ICJ commanding respect from other nations, sending the ICJ on a slow yet steady path towards redundancy. The authority of the judgements given by the International Court of Justice, one of the main pillars of the United Nations, is questionable in light of the reactions from various countries on public platforms. The ICJ's principal role within the United Nations framework is to settle peacefully international and bilateral disputes between the states that come under its jurisdiction, in accordance with the principles laid down in international law. The public backlash from the Chinese Government to the recent South China Sea judgement illustrates the decreasing relevance of the ICJ in the contemporary world. The Philippines and China have wrangled over territory in the South China Sea for centuries, but after the recent judgement the tension has reached an all-time high, with China threatening to prosecute anybody as trespassers while continuing to militarise the disputed area.
This paper deals with the South China Sea judgement and the manner in which it has been received by the Chinese Government, and examines the consequences of such a backlash. The authors also look into the Marshall Islands matter and propose a model judgement, in accordance with the principles of international law, that would be best suited to the given situation. Finally, the authors propose amendments to the working of the Security Council to ensure that the Marshall Islands judgement is passed and accepted by the countries without any contempt.
Keywords: International Court of Justice, international law, Marshall Islands, South China Sea, United Nations Charter
Procedia PDF Downloads 296
1609 A Dissipative Particle Dynamics Study of a Capsule in Microfluidic Intracellular Delivery System
Authors: Nishanthi N. S., Srikanth Vedantam
Abstract:
Intracellular delivery of materials has always proved to be a challenge in research and therapeutic applications. Usually, vector-based methods, such as liposomes and polymeric materials, and physical methods, such as electroporation and sonoporation, have been used for introducing nucleic acids or proteins. Reliance on exogenous materials, toxicity, and off-target effects were the shortcomings of these methods. Microinjection was an alternative process that addressed the above drawbacks; however, its low throughput has hindered its wide adoption. Mechanical deformation of cells by squeezing them through a constriction channel can cause the temporary development of pores that facilitate non-targeted diffusion of materials. Advantages of this method include high efficiency in intracellular delivery, a wide choice of materials, improved viability, and high throughput. This cell-squeezing process can be studied in more depth by employing simple models and efficient computational procedures. In our current work, we present a finite-sized dissipative particle dynamics (FDPD) model to simulate the dynamics of a cell flowing through a constricted channel. The cell is modeled as a capsule with FDPD particles connected through a spring network to represent the membrane. The total energy of the capsule is associated with the linear and radial springs, in addition to a fixed-area constraint. By performing detailed simulations, we studied the strain on the membrane of the capsule for channels with varying constriction heights. The strain on the capsule membrane was found to be similar even though the constriction heights vary. When the strain on the membrane was correlated to the development of pores, we found higher porosity in capsules flowing in the wider channel. This is due to the localization of strain to a smaller region in the narrow constriction channel.
However, the residence time of the capsule increased as the channel constriction narrowed, indicating that strain sustained for a longer time will reduce cell viability.
Keywords: capsule, cell squeezing, dissipative particle dynamics, intracellular delivery, microfluidics, numerical simulations
Procedia PDF Downloads 139
1608 Assessing the Impacts of Riparian Land Use on Gully Development and Sediment Load: A Case Study of Nzhelele River Valley, Limpopo Province, South Africa
Authors: B. Mavhuru, N. S. Nethengwe
Abstract:
Human activities driving land degradation have triggered several environmental problems, especially in rural areas that are underdeveloped. The main aim of this study is to analyze the contribution of different land uses to gully development and sediment load in the Nzhelele River Valley in the Limpopo Province. Data were collected using different methods, such as observation, field data techniques, and experiments. Satellite digital images, topographic maps, aerial photographs, and a static sediment load model also assisted in determining how land use affects gully development and sediment load. For data analysis, the researcher used the following methods: Analysis of Variance (ANOVA), descriptive statistics, the Pearson correlation coefficient, and statistical correlation methods. The results illustrate that intensive land use creates negative changes, especially in areas that are highly fragile and vulnerable. A distinct land use change was observed within the settlement area (9.6%) over a period of 5 years. A high correlation between soil organic matter and soil moisture (R = 0.96) was observed. Furthermore, a significant variation (p ≤ 0.6) between soil organic matter and soil moisture was also observed. A very significant variation (p ≤ 0.003) was observed in bulk density, and extremely significant variations (p ≤ 0.0001) were observed in organic matter and soil particle size. Sand mining and agricultural activities have contributed significantly to the sediment load in the Nzhelele River. A significant share of the total suspended sediment (55.3%) and bed load (53.8%) was observed within the agricultural area. The connection that links the development of gullies to various land use activities determines the amount of sediment load.
These results are consistent with previous research and suggest that land use activities are likely to exacerbate the development of gullies and the sediment load in the Nzhelele River Valley.
Keywords: drainage basin, geomorphological processes, gully development, land degradation, riparian land use and sediment load
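The Pearson correlation coefficient reported for soil organic matter versus soil moisture (R = 0.96) is the normalized covariance of the paired measurements. A sketch on hypothetical paired plot data, not the study's field measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient, as used to relate
    soil organic matter to soil moisture in the study."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired plot measurements: organic matter (%) vs moisture (%)
om = [1.2, 2.4, 3.1, 4.0, 5.2, 6.1]
moisture = [8.0, 10.5, 12.0, 14.2, 16.9, 18.1]
r = pearson_r(om, moisture)
```

Values near +1, as here, indicate the strong linear association between the two soil variables that the abstract reports.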
Procedia PDF Downloads 305
1607 The Effect of the Computer-Based Method on Repetitive Behaviors and Communication Skills
Abstract:
Introduction: This study investigates the efficacy of computer-based interventions for children with Autism Spectrum Disorder, specifically targeting communication deficits and repetitive behaviors. The research evaluates novel software applications designed to enhance narrative capabilities and sensory integration through structured, progressive intervention protocols. Method: The study evaluated two intervention software programs designed for children with autism, focusing on narrative speech and sensory integration. Twelve children aged 5-11 participated in the two-month intervention, attending three 45-minute weekly sessions, with pre- and post-tests measuring speech, communication, and behavioral outcomes. The narrative speech software incorporated 14 stories using the Cohen model. It progressively reduced software assistance as children improved their storytelling abilities, ultimately enabling independent narration. The process involved story comprehension questions and guided story completion exercises. The sensory integration software featured approximately 100 exercises progressing from basic classification to complex cognitive tasks. The program included attention exercises, auditory memory training (advancing from single-syllable to four-syllable words), problem-solving, decision-making, reasoning, working memory, and emotion recognition activities. Each module was accompanied by frequency- and pitch-adjusted music that the children enjoy, to enhance learning through multiple sensory channels (visual, auditory, and tactile). Conclusion: The results indicated that the use of these software programs significantly improved communication and narrative speech scores in the children, while also reducing scores related to repetitive behaviors.
Findings: These findings highlight the positive impact of computer-based interventions on enhancing communication skills and reducing repetitive behaviors in children with autism.
Keywords: autism, communication skills, repetitive behaviors, sensory integration
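The fading-assistance loop described above (software support stepped down as the child's storytelling improves, until independent narration) can be sketched as follows; the mastery threshold and the 0-1 scoring scale are illustrative assumptions, not values from the study:

```python
# Hypothetical sketch of the fading-assistance scheme described in the
# abstract: software prompts are reduced as the child's storytelling score
# improves. The threshold and scoring are assumptions for illustration.

def next_assistance_level(current_level: int, story_score: float,
                          mastery_threshold: float = 0.8) -> int:
    """Step assistance down one level (toward 0 = independent narration)
    once the score on a story meets the mastery threshold."""
    if story_score >= mastery_threshold and current_level > 0:
        return current_level - 1
    return current_level

# Example: a child starting at full assistance (level 3) over four stories.
levels = [3]
for score in [0.6, 0.85, 0.9, 0.7]:
    levels.append(next_assistance_level(levels[-1], score))
print(levels)  # [3, 3, 2, 1, 1]
```

The key design choice mirrored here is that assistance only ever decreases on demonstrated mastery, so the child never loses scaffolding after a weak session.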
Procedia PDF Downloads 0
1606 Optimization of Alkali-Assisted Microwave Pretreatments of Sorghum Straw for Efficient Bioethanol Production
Authors: Bahiru Tsegaye, Chandrajit Balomajumder, Partha Roy
Abstract:
The limited supply and related negative environmental consequences of fossil fuels are driving researchers to find sustainable sources of energy. Lignocellulosic biomass like sorghum straw is considered among the cheap, renewable and abundantly available sources of energy. However, the conversion of lignocellulosic biomass to bioenergy such as bioethanol is hindered by the recalcitrant nature of lignin in the biomass. Therefore, removal of lignin is a vital step in converting lignocellulose to renewable energy. The aim of this study is to optimize microwave pretreatment conditions using Design-Expert software to remove lignin and to release the maximum possible polysaccharides from sorghum straw for efficient hydrolysis and fermentation. Sodium hydroxide concentrations between 0.5 and 1.5% (v/v), pretreatment times from 5 to 25 minutes and pretreatment temperatures from 120 to 200°C were considered to depolymerize sorghum straw. The effect of pretreatment was studied by analyzing the compositional changes before and after pretreatment, following the renewable energy laboratory procedure. Analysis of variance (ANOVA) was used to test the significance of the model used for optimization. About 32.8%-48.27% hemicellulose solubilization, 53%-82.62% cellulose release, and 49.25%-78.29% lignin solubilization were observed during microwave pretreatment. Pretreatment for 10 minutes with an alkali concentration of 1.5% and a temperature of 140°C released the most cellulose and lignin. At this optimal condition, a maximum of 82.62% cellulose release and 78.29% lignin removal was achieved. Sorghum straw at the optimal pretreatment condition was subjected to enzymatic hydrolysis and fermentation. The efficiency of hydrolysis was measured by analyzing reducing sugars by the 3,5-dinitrosalicylic acid method. Reducing sugars of about 619 mg/g of sorghum straw were obtained after enzymatic hydrolysis. This study showed a significant amount of lignin removal and cellulose release at the optimal condition.
This enhances the yield of reducing sugars as well as ethanol yield. The study demonstrates the potential of microwave pretreatments for enhancing bioethanol yield from sorghum straw.
Keywords: cellulose, hydrolysis, lignocellulose, optimization
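The optimization step above (a statistical model of a pretreatment response fitted over alkali concentration, time and temperature, then searched for its optimum over the stated factor ranges) can be sketched with synthetic data. The quadratic response-surface form, the synthetic response function and all numbers below are assumptions for illustration, not the authors' Design-Expert model:

```python
import numpy as np

# Illustrative response-surface sketch: fit a quadratic model of cellulose
# release to the three pretreatment factors, then pick the best condition on
# a grid over the factor ranges from the abstract. Data are synthetic.

def design_matrix(X):
    c, t, T = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), c, t, T, c*t, c*T, t*T,
                            c**2, t**2, T**2])

def true_release(c, t, T):
    # Invented ground truth with its optimum at 1.5% NaOH, 10 min, 140 C.
    return 80 - 20*(c - 1.5)**2 - 0.05*(t - 10)**2 - 0.005*(T - 140)**2

rng = np.random.default_rng(0)
# factors: NaOH % (0.5-1.5), time min (5-25), temperature C (120-200)
X = rng.uniform([0.5, 5, 120], [1.5, 25, 200], size=(30, 3))
y = true_release(X[:, 0], X[:, 1], X[:, 2]) + rng.normal(0, 0.5, 30)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Evaluate the fitted model on a grid and report the predicted optimum.
grid = np.array([[c, t, T] for c in np.linspace(0.5, 1.5, 11)
                           for t in np.linspace(5, 25, 11)
                           for T in np.linspace(120, 200, 17)])
pred = design_matrix(grid) @ beta
best = grid[np.argmax(pred)]
print(best)  # close to the invented optimum: ~1.5% NaOH, ~10 min, ~140 C
```

This mirrors the workflow the abstract reports: fit a model over the design space, check it (the paper uses ANOVA), and read off the condition that maximizes the response.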
Procedia PDF Downloads 269
1605 Corporate Sustainability Practices in Asian Countries: Pattern of Disclosure and Impact on Financial Performance
Authors: Santi Gopal Maji, R. A. J. Syngkon
Abstract:
The changing attitude of corporate enterprises from maximizing economic benefit to corporate sustainability after the publication of the Brundtland Report has attracted the interest of researchers in investigating the sustainability practices of firms and their impact on financial performance. To enrich the empirical literature in the Asian context, this study examines the disclosure pattern of corporate sustainability and the influence of sustainability reporting on the financial performance of firms from four Asian countries (Japan, South Korea, India and Indonesia) that published sustainability reports continuously from 2009 to 2016. The study used a content analysis technique based on the Global Reporting Initiative (GRI 3 and 3.1) framework to compute the disclosure score of corporate sustainability and its components. While a dichotomous coding system was employed to compute the overall quantitative disclosure score, a four-point scale was used to assess the quality of the disclosure. For analysing the disclosure pattern of corporate sustainability, box plots were used. Further, Pearson's chi-square test was used to examine whether there is any difference in the proportion of disclosure between the countries. Finally, a quantile regression model was employed to examine the influence of corporate sustainability reporting at different locations of the conditional distribution of firm performance. The findings of the study indicate that Japan occupies first position in terms of disclosure of sustainability information, followed by South Korea and India. In the case of Indonesia, the quality of the disclosure score is considerably lower than in the other three countries. Further, the gap between the quality and quantity of the disclosure score is comparatively smaller in Japan and South Korea than in India and Indonesia. The same is evident in respect of the components of sustainability.
The results of quantile regression indicate that the positive impact of corporate sustainability becomes stronger at upper quantiles in the case of Japan and South Korea. However, the study fails to extract any definite pattern in the impact of corporate sustainability disclosure on the financial performance of firms from Indonesia and India.
Keywords: corporate sustainability, quality and quantity of disclosure, content analysis, quantile regression, Asian countries
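The quantile-regression idea used here, estimating the disclosure-performance relationship at several points of the conditional distribution rather than only at the mean, can be illustrated with a minimal sketch. The pinball-loss grid search and the synthetic data below are simplifications for illustration, not the study's estimator or data:

```python
import numpy as np

# Minimal quantile-regression sketch: fit a line a + b*x at quantile q by
# minimizing the pinball (check) loss over a coarse parameter grid.

def pinball_loss(y, yhat, q):
    """Check (pinball) loss, minimized by the q-th conditional quantile."""
    e = y - yhat
    return np.mean(np.maximum(q * e, (q - 1) * e))

def fit_quantile_line(x, y, q, slopes, intercepts):
    """Grid-search the line a + b*x minimizing pinball loss at quantile q."""
    best, best_loss = (0.0, 0.0), np.inf
    for b in slopes:
        for a in intercepts:
            loss = pinball_loss(y, a + b * x, q)
            if loss < best_loss:
                best, best_loss = (a, b), loss
    return best

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 500)  # synthetic disclosure score
# Performance proxy whose noise grows with x, so the slope differs by quantile.
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 500) * (1 + 2 * x)

grid = np.linspace(-1, 5, 61)
for q in (0.25, 0.5, 0.75):
    a, b = fit_quantile_line(x, y, q, grid, grid)
    print(f"q={q}: intercept={a:.1f}, slope={b:.1f}")
```

With this synthetic data the fitted slope increases with the quantile, which is exactly the kind of "stronger impact at upper quantiles" pattern the study reports for Japan and South Korea.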
Procedia PDF Downloads 193
1604 The Emerging Multi-Species Trap Fishery in the Red Sea Waters of Saudi Arabia
Authors: Nabeel M. Alikunhi, Zenon B. Batang, Aymen Charef, Abdulaziz M. Al-Suwailem
Abstract:
Saudi Arabia has a long history of using traps as a traditional fishing gear for catching commercially important demersal, mainly coral reef-associated fish species. Fish traps constitute the dominant small-scale fishery in the Saudi waters of the Arabian Gulf (eastern seaboard of Saudi Arabia). Recently, however, traps have been increasingly used along the Saudi Red Sea coast (western seaboard), which has a coastline of 1800 km (71%) compared to only 720 km (29%) in the Saudi Gulf region. The production trend for traps indicates a recent increase in catches and in percent contribution to traditional fishery landings, confirming the rapid proliferation of trap fishing along the Saudi Red Sea coast. Reef-associated fish species, mainly groupers (Serranidae), emperors (Lethrinidae), parrotfishes (Scaridae), scads and trevallies (Carangidae), and snappers (Lutjanidae), dominate the trap catches, reflecting the reef-dominated shelf zone in the Red Sea. This ongoing investigation covers the following major objectives: (i) baseline studies to characterize the trap fishery through landing-site visits and interview surveys; (ii) stock assessment based on fisheries and biological data obtained through monthly landing-site monitoring, using the FLBEIA fishery operational model; (iii) assessment of operational impacts, derelict traps and by-catch through bottom-mounted video cameras and onboard monitoring; (iv) elucidation of fishing grounds and derelict-trap impacts through onboard monitoring and Remotely Operated Vehicle and Autonomous Underwater Vehicle surveys; and (v) analysis of gear design and operations, covering colonization and deterioration experiments. The progress of this investigation into the impacts of the trap fishery on fish stocks and the marine environment in the Saudi Red Sea region is presented.
Keywords: Red Sea, Saudi Arabia, fish trap, stock assessment, environmental impacts
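As a hedged sketch of one standard indicator that monthly landing-site monitoring of this kind supports, catch per unit effort (CPUE) per species can be aggregated from survey records as below. The records, species and units are invented for illustration; the study's actual stock assessment uses the FLBEIA operational model:

```python
from collections import defaultdict

# Illustrative CPUE aggregation from landing-site survey records.
# All figures below are invented; only the indicator itself is standard.

def cpue_by_species(records):
    """records: (species, catch_kg, trap_hauls) tuples from landing surveys.
    Returns kg caught per trap haul for each species."""
    catch, effort = defaultdict(float), defaultdict(float)
    for species, kg, hauls in records:
        catch[species] += kg
        effort[species] += hauls
    return {s: catch[s] / effort[s] for s in catch}

records = [
    ("grouper", 100.0, 40), ("emperor", 90.0, 30),
    ("grouper", 60.0, 20),  ("snapper", 45.0, 15),
]
print(cpue_by_species(records))  # kg per trap haul, per species
```

Tracking this ratio month by month, rather than raw landings, separates changes in stock abundance from changes in fishing effort.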
Procedia PDF Downloads 347
1603 Specification and Unification of All Fundamental Forces That Exist in the Universe in the Theoretical Perspective – The Universal Mechanics
Authors: Surendra Mund
Abstract:
At the beginning, the physical entity force was defined mathematically by Sir Isaac Newton in his Principia Mathematica as F⃗ = dp⃗/dt, in the form of his second law of motion. Newton also defined his universal law of gravitational force in the same outstanding book, but at the end of the 20th century and the beginning of the 21st century, we have tried hard to specify and unify the four or five fundamental forces or interactions that exist in the universe, and we have failed every time. Usually, gravity creates problems in this unification every single time, but in my previous papers and presentations, I defined and derived field and force equations for gravitation-like interactions for each and every kind of central system. This force is named Variational Force by me, and it is generated by variation in the scalar field density around the body. In this particular paper, I first specify which types of interactions are fundamental in the universal sense (in all types of central systems or bodies predicted by my N-time Inflationary Model of the Universe) and then unify them in the universal framework (defined and derived by me as Universal Mechanics in a separate paper) as well. This will also be valid in the universal dynamical sense, which includes inflations and deflations of the universe, central system relativity, universal relativity, the ϕ-ψ transformation and transformation of spin, the physical perception principle, the Generalized Fundamental Dynamical Law and many other important generalized principles of Generalized Quantum Mechanics (GQM) and Central System Theory (CST).
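The two Newtonian definitions the abstract starts from, the second law of motion and the universal law of gravitation (here in magnitude form), can be written in standard notation:

```latex
\vec{F} = \frac{d\vec{p}}{dt}, \qquad
F_g = \frac{G\, m_1 m_2}{r^2}
```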
So, in this article, I first generalize some fundamental principles, then unify Variational Forces (the general form of gravitation-like interactions) and Flow Generated Forces (the general form of EM-like interactions), and then unify all fundamental forces by specifying the weak and strong interactions in terms of more basic ones: Variational, Flow Generated and Transformational Interactions.
Keywords: Central System Force, Disturbance Force, Flow Generated Forces, Generalized Nuclear Force, Generalized Weak Interactions, Generalized EM-Like Interactions, Imbalance Force, Spin Generated Forces, Transformation Generated Force, Unified Force, Universal Mechanics, Uniform and Non-Uniform Variational Interactions, Variational Interactions
Procedia PDF Downloads 50
1602 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult with limited information about the nature of the attack. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly caused another event to happen. It is therefore important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the attack events most likely to cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts significant time, if they could be obtained at all. The computational results from the proposed two-level graph network model reveal obvious patterns and trends. In fact, more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks.
Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
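A much-simplified sketch of the causal-pair detection described above: flag an ordered pair of event types (A, B) when B repeatedly follows A within a short window (the abstract reports most pairs within 5 minutes). The event types, window and count threshold below are illustrative assumptions; the paper's framework uses conditional independence tests over richer features such as port numbers:

```python
from itertools import permutations

# Toy causal-pair detector over timestamped IDS-style events. A pair (A, B)
# is flagged when B occurs within `window` seconds after A at least
# `min_count` times. Events and thresholds are invented for illustration.

def candidate_causal_pairs(events, window=300.0, min_count=3):
    """events: list of (timestamp_seconds, event_type) tuples."""
    types = {t for _, t in events}
    events = sorted(events)
    pairs = []
    for a, b in permutations(types, 2):
        count = sum(
            1
            for ta, xa in events if xa == a
            for tb, xb in events if xb == b and 0 < tb - ta <= window
        )
        if count >= min_count:
            pairs.append((a, b))
    return pairs

events = [(0, "scan"), (60, "login_fail"), (600, "scan"), (640, "login_fail"),
          (1200, "scan"), (1300, "login_fail"), (5000, "exfil")]
print(candidate_causal_pairs(events))  # [('scan', 'login_fail')]
```

In this toy trace, login failures follow scans within the window three times, so only the ordered pair (scan, login_fail) is flagged; the isolated exfiltration event is not linked to anything.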
Procedia PDF Downloads 156