1348 Systematic Analysis of Logistics Location Search Methods under Aspects of Sustainability
Authors: Markus Pajones, Theresa Steiner, Matthias Neubauer
Abstract:
Selecting a logistics location is vital for logistics providers, food retailers and other trading companies, since the selection is an essential factor for economic success. Various location search methods, such as cost-benefit analysis, are therefore well known and in use. The development of a logistics location can entail considerable negative effects for the ecosystem, such as surface sealing, loss of biodiversity, or CO2 and noise emissions generated by freight and commuting traffic. The increasing importance of sustainability demands an informed decision when selecting a logistics location for the future. Sustainability comprises economic, ecological and social aspects, which should be integrated equally into the process of location search. The objectives of this paper are to describe various methods that support the selection of sustainable logistics locations and to generate knowledge about the suitability, assets and limitations of these methods within the selection process. This paper investigates the role of economic, ecological and social aspects when searching for new logistics locations. Thereby, related work on location search is analyzed with respect to the sustainability aspects it encodes. In addition, this research aims to gain knowledge on how to include aspects of sustainability and take an informed decision when searching for a logistics location. As a result, a decomposition of the various location search methods into their components leads to a comparative analysis in the form of a matrix. The comparison within a matrix provides a transparent overview of the assets and limitations of the methods and their suitability for selecting sustainable logistics locations. A further result is knowledge on how to combine the separate methods into a new method for a more efficient selection of logistics locations in the context of sustainability.
Future work will especially investigate the above-mentioned combination of various location search methods. The objective is to develop an innovative instrument that supports the search for logistics locations with a focus on balanced sustainability (economy, ecology, social). Through an ideal selection of logistics locations, induced traffic should be reduced and a modal shift to rail and public transport facilitated.
Keywords: commuting traffic, freight traffic, logistics location search, location search method
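The comparison matrix described above can be sketched as a simple weighted scoring exercise. The sites, criterion scores and weights below are illustrative assumptions, not data from the study; the point is only to show how economic, ecological and social scores combine into one ranking.

```python
# Hypothetical weighted comparison matrix for logistics location search.
# Sites, criterion scores (1-10) and weights are illustrative assumptions.

weights = {"economy": 0.4, "ecology": 0.3, "social": 0.3}

sites = {
    "Site A": {"economy": 8, "ecology": 5, "social": 6},
    "Site B": {"economy": 7, "ecology": 8, "social": 7},
    "Site C": {"economy": 9, "ecology": 4, "social": 5},
}

def weighted_score(scores):
    # Weighted sum over the three sustainability dimensions.
    return sum(weights[c] * s for c, s in scores.items())

ranking = sorted(sites, key=lambda s: weighted_score(sites[s]), reverse=True)
best = ranking[0]
print(best, round(weighted_score(sites[best]), 2))  # Site B 7.3
```

Note how the balanced site (B) outranks the economically strongest one (C) once ecological and social weights are applied, which is the behaviour such a matrix is meant to make transparent.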
Procedia PDF Downloads 321
1347 Building Resilience to El Niño Related Flood Events in Northern Peru Using a Structured Facilitation Approach to Interdisciplinary Problem Solving
Authors: Roger M. Wall, David G. Proverbs, Yamina Silva, Danny Scipion
Abstract:
This paper critically reviews the outcomes of a 4-day workshop focused on building resilience to El Niño related flood events in northern Perú. The workshop was run jointly by Birmingham City University (BCU) in partnership with the Instituto Geofísico del Perú (IGP) and was hosted by the Universidad de Piura (UDEP). The event took place in August 2018 and was funded by the Newton-Paulet fund administered by the British Council. The workshop was a response to the severe flooding experienced in Piura during the El Niño event of March 2017, which damaged over 100,000 homes and destroyed much local infrastructure, including around 100 bridges. El Niño is a recurrent event, and there is concern that its frequency and intensity may change in the future as a consequence of climate change. A group of 40 early career researchers and practitioners from the UK and Perú were challenged with working together across disciplines to identify key cross-cutting themes and make recommendations for building resilience to similar future events. Key themes identified on day 1 of the workshop were governance; communities; risk information; river management; urban planning; health; and infrastructure. A field study visit took place on day 2 so that attendees could gain first-hand experience of affected and displaced communities. Each of the themes was then investigated in depth on day 3 by small interdisciplinary teams drawing on their own expertise, local knowledge and the experiences of the previous day’s field trip. Teams were responsible for developing frameworks for analysis of their chosen theme and presenting their findings to the whole group. At this point, teams worked together to develop links between the different themes so that an integrated approach could be developed and presented on day 4. This paper describes the approaches taken by each team and the way in which these were integrated to form a holistic picture of the whole system.
The findings highlighted the importance of risk-related information and the need for strong governance structures to enforce planning regulations and development. The structured facilitation approach proved to be very effective, and it is recommended that the process be repeated with a broader group of stakeholders from across the region.
Keywords: El Niño, integrated flood risk management, Perú, structured facilitation, systems approach, resilience
Procedia PDF Downloads 147
1346 Comparison of Extended Kalman Filter and Unscented Kalman Filter for Autonomous Orbit Determination of Lagrangian Navigation Constellation
Authors: Youtao Gao, Bingyu Jin, Tanran Zhao, Bo Xu
Abstract:
The history of satellite navigation dates back to the 1960s. From the U.S. Transit system and the Russian Tsikada system to the modern Global Positioning System (GPS) and the Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS), the performance of satellite navigation has been greatly improved. Nowadays, the navigation accuracy and coverage of these existing systems already fully meet the requirements of near-Earth users, but deep-space targets remain beyond their reach. Due to the renewed interest in space exploration, a novel high-precision satellite navigation system is becoming even more important. The increasing demand for such a deep-space navigation system has contributed to the emergence of a variety of new constellation architectures, such as the Lunar Global Positioning System. Apart from a Walker constellation similar to the one adopted by GPS on Earth, a novel constellation architecture consisting of libration point satellites in the Earth-Moon system is also available for constructing a lunar navigation system, which can accordingly be called the libration point satellite navigation system. The concept of using Earth-Moon libration point satellites for lunar navigation was first proposed by Farquhar and then followed by many other researchers. Moreover, due to the special characteristics of libration point orbits, an autonomous orbit determination technique called ‘LiAISON navigation’ can be adopted by the libration point satellites. Using only scalar satellite-to-satellite tracking data, the orbits of both the user and the libration point satellites can be determined autonomously. In this way, extensive Earth-based tracking measurements can be eliminated, and an autonomous satellite navigation system can be developed for future space exploration missions.
The state estimation method is a non-negligible factor affecting orbit determination accuracy, alongside the orbit type, the initial state accuracy and the measurement accuracy. We apply the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) to determine the orbits of Lagrangian navigation satellites, and the autonomous orbit determination errors are compared. The simulation results illustrate that the UKF can improve the accuracy and the z-axis convergence to some extent.
Keywords: extended Kalman filter, autonomous orbit determination, unscented Kalman filter, navigation constellation
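The predict/update cycle shared by the EKF and UKF can be illustrated with a deliberately tiny example. The sketch below is a scalar Kalman step on a toy state, not the multi-body Lagrangian-orbit model of the paper; the dynamics, noise variances and the measurement value are illustrative assumptions.

```python
# Toy scalar Kalman filter predict/update step, illustrating the kind of
# state estimation an EKF/UKF performs on the (much larger) orbit state.
# Dynamics, noise levels and the measurement are illustrative assumptions.

x, P = 0.0, 1.0   # state estimate and its variance
Q = 0.01          # process noise variance
R = 0.25          # measurement noise variance

# Predict: identity dynamics in this toy (an EKF would linearize f here;
# a UKF would propagate sigma points through the nonlinear dynamics)
x_pred = x
P_pred = P + Q

# Update with a scalar range-like measurement z (measurement matrix H = 1)
z = 1.2
K = P_pred / (P_pred + R)          # Kalman gain
x = x_pred + K * (z - x_pred)      # corrected state
P = (1.0 - K) * P_pred             # reduced uncertainty

print(round(x, 3), round(P, 3))
```

The update pulls the estimate toward the measurement and shrinks the variance; the EKF/UKF difference lies entirely in how the predict step handles nonlinear dynamics, not in this correction algebra.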
Procedia PDF Downloads 285
1345 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The PUSPATI TRIGA Reactor (RTP) in Malaysia reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in the RTP. It is important to ensure that the core power remains stable and follows load tracking within an acceptable steady-state error and a minimum settling time to reach steady-state power. At present, the system's power tracking performance can be considered not well-posed, so there is still potential to improve it by developing a next-generation, novel design for nuclear core power control. In this paper, the dual-mode prediction proposed in Optimal Model Predictive Control (OMPC) is presented in a state-space model to control the core power. The model for core power control is based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core comprise neutronic, thermal-hydraulic and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, is based on the implementation of a Linear Quadratic Regulator (LQR) in the design of the core power control. The combination of dual-mode prediction and a Lyapunov-based treatment of the infinite-horizon summation in the cost function is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the behaviour of OMPC in dealing with tracking and regulation problems and disturbance rejection, and in catering for parameter uncertainty. The tracking and regulating performance of the conventional controller and of OMPC is compared by numerical simulations.
In conclusion, the proposed OMPC has shown significant performance in load tracking and in regulating core power for a nuclear reactor, with guaranteed closed-loop stability.
Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control
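The LQR that closes the terminal mode of a dual-mode MPC can be sketched in its simplest form: iterate the discrete-time Riccati equation to convergence and form the state-feedback gain. The scalar plant and cost weights below are illustrative assumptions, not the RTP core model.

```python
# Minimal sketch of the LQR design underlying a dual-mode MPC terminal
# controller: iterate the scalar discrete-time Riccati recursion to
# convergence, then form the state-feedback gain. The scalar plant (a, b)
# and weights (q, r) are illustrative, not the reactor core model.

a, b = 1.1, 1.0   # unstable scalar plant x[k+1] = a*x[k] + b*u[k]
q, r = 1.0, 1.0   # state and input weights in the quadratic cost

P = q
for _ in range(200):
    # Scalar discrete algebraic Riccati recursion (fixed-point iteration)
    P = q + a * a * P * r / (r + b * b * P)

K = a * b * P / (r + b * b * P)   # LQR gain, control law u = -K*x
print(round(K, 4), abs(a - b * K) < 1.0)  # closed loop |a - bK| < 1: stable
```

Applying u = -Kx inside the terminal region gives the infinite-horizon cost-to-go P·x², which is what the Lyapunov argument in dual-mode MPC uses to bound the tail of the cost summation.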
Procedia PDF Downloads 162
1344 Using Arts in ESL Classroom
Authors: Nazia Shehzad
Abstract:
Language and art can supplement and correlate with each other. Through the ages, art has been a means of visual expression used to convey a wide range of ideas. Art can take the perceiver into different times and different worlds. It can also be used to introduce different levels of vocabulary to learners of a second language. Learning a second language is, for most students, a very difficult and strenuous experience. They are not only trying to accommodate to a new language but also trying to adjust to themselves and a new environment. They are anxious about almost everything, but they are especially self-conscious about their performance in the classroom. By relocating the focus from the student to an object, everyone participates, thus removing a certain degree of self-consciousness. The experience a student has with art in the classroom has to be gratifying for both the student and the teacher; if the atmosphere in the classroom is too grave, it will not serve any useful purpose. Art is an excellent way to teach English and to encourage collaboration and interaction between students of all ages. As making art involves many different processes, it is wonderful for practising classification and following or giving instructions. It is also an effective way to introduce the language of characterization and comparison, and vocabulary acquisition for the elements of design (shape, size, color, texture, tone, etc.) is so much more entertaining if done in a practical, hands-on way. Expressing ideas and feelings through art is also of immeasurable value where students are at the beginning stages of English language acquisition, and for many of my Saudi students it was a form of therapy. It is also a way to respect, explore, examine and share the cultural traditions of different cultures, and of the students themselves.
Art not only keeps the aimless, meandering minds of students busy but is also a productive tool for exploring the English language in a new way. For an ESL teacher, using art is a highly compelling way to bridge the gap between student and teacher. It is difficult to keep students concentrated, especially when they speak a different language; to get students to actually learn and explore something in a foreign-language lesson, artwork is your best friend. Many teachers feel that through the amalgamation of the arts into their academic lessons, students are able to learn more profoundly because they use diverse ways of thinking and problem solving. Teachers observe that drawing often engages students who might otherwise be dispassionate and can help students move beyond simple recall when they are asked to make connections and come up with their own interpretation through an artwork or drawing. Students use observation skills when they are drawing, and this can help to encourage students who might otherwise remain silent or need more time to process information.
Keywords: amalgamation of arts, expressing ideas and feelings through arts, effective way to achieve and implement language, language and art can supplement and correlate each other
Procedia PDF Downloads 359
1343 Methodical Approach for the Integration of a Digital Factory Twin into the Industry 4.0 Processes
Authors: R. Hellmuth
Abstract:
Current research on flexibility and adaptability in factory planning is oriented towards the machine and process level; factory buildings are not its focus. Factory planning has the task of designing products, plants, processes, organization, areas and the construction of a factory. The adaptability of a factory can be divided into three types: spatial, organizational and technical adaptability. Spatial adaptability indicates the ability to expand and reduce the size of a factory; here, the area-related breathing capacity plays the essential role. It mainly concerns the factory site, the plant layout and the production layout. The organizational ability to change enables the change and adaptation of organizational structures and processes. This includes the structural and process organization as well as logistical processes and principles. New and reconfigurable operating resources, processes and factory buildings are referred to as technical adaptability. These three types of adaptability can be regarded independently of each other as undirected potentials of different characteristics. If there is a need for change, the types of changeability are combined in the change process to form a directed, complementary variable that makes change possible. When planning adaptability, importance must be attached to a balance between the types of adaptability. The vision of the intelligent factory building and the 'Internet of Things' presupposes the comprehensive digitalization of the spatial and technical environment. Through connectivity, the factory building must be empowered to support a company's value creation process by providing media such as light, electricity, heat, refrigeration, etc. In the future, communication with the surrounding factory building will take place on a digital or automated basis.
In Industry 4.0, the function of the building envelope belongs to secondary or even tertiary processes, but these processes must also be included in the communication cycle. An integrative view of continuous communication between primary, secondary and tertiary processes is not yet available and is being developed with the aid of methods in this research work. A comparison of the digital twin from the point of view of production and of the factory building will be developed. Subsequently, a tool will be elaborated to classify digital twins from the perspective of data, degree of visualization, and the trades. Thus a contribution is made to better integrating the secondary and tertiary processes of a factory into its added value.
Keywords: adaptability, digital factory twin, factory planning, industry 4.0
Procedia PDF Downloads 156
1342 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis
Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc
Abstract:
The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies; mainly, qualitative observations have been performed, and correlations between the volume change of the chamber and the maximum pressure are limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the different phases of the weapon's life. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, is carried out. Two test barrels, « 5.56x45mm NATO » and « 7.62x51mm NATO », are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method, together with a WEIBEL radar, is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistics model is developed using the dynamic finite element program LS-DYNA. In this work, two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated and checked against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. Results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted: the projectile motions, the dynamic engraving resistances and the maximum pressures are compared and analyzed.
Finally, using the database thus obtained, a statistical correlation between the muzzle velocity, the maximum pressure and the chamber volume is established.
Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation
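The final correlation step can be sketched as an ordinary least-squares fit relating chamber volume to muzzle velocity. The data points below are illustrative assumptions, not measurements from the study; they only show the mechanics of extracting a slope and intercept from paired observations.

```python
# Sketch of the statistical correlation step: fit a linear model relating
# (eroded) chamber volume to muzzle velocity by ordinary least squares.
# The data points are illustrative, not measurements from the study.

volumes = [1.00, 1.10, 1.20, 1.30]         # chamber volume (arbitrary units)
velocities = [900.0, 895.0, 890.0, 885.0]  # muzzle velocity (m/s)

n = len(volumes)
mx = sum(volumes) / n
my = sum(velocities) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(volumes, velocities))
sxx = sum((x - mx) ** 2 for x in volumes)

slope = sxy / sxx              # velocity change per unit volume
intercept = my - slope * mx
print(round(slope, 6), round(intercept, 6))  # slope ≈ -50, intercept ≈ 950
```

A negative slope of this kind would quantify how velocity drops as erosion enlarges the chamber, which is exactly the relationship the abstract's correlation targets.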
Procedia PDF Downloads 215
1341 Effects of Intracerebroventricular Injection of Ghrelin and Aerobic Exercise on Passive Avoidance Memory and Anxiety in Adult Male Wistar Rats
Authors: Mohaya Farzin, Parvin Babaei, Mohammad Rostampour
Abstract:
Ghrelin plays a considerable role in important neurological effects related to food intake and energy homeostasis, and regular physical activity may yield significant improvements in cognitive function in various behavioral situations. Anxiety is one of the main concerns of the modern world, affecting the health of millions of individuals. There are contradictory results regarding ghrelin's effects on anxiety-like behavior, and the plasma level of this peptide increases during physical activity. Here we aimed to evaluate the combined effects of exogenous ghrelin and aerobic exercise on anxiety-like behavior and passive avoidance memory in Wistar rats. Forty-five male Wistar rats (250 ± 20 g) were divided into 9 groups (n=5), received intra-hippocampal injections of 3.0 nmol ghrelin, and performed aerobic exercise training for 8 weeks. Control groups received the same volume of saline and diazepam as negative and positive controls, respectively. Learning and memory were estimated using a shuttle box apparatus, and anxiety-like behavior was recorded by the elevated plus-maze test (EPM). Data were analyzed by ANOVA, and p<0.05 was considered significant. Our findings showed that the combined effect of ghrelin and aerobic exercise improves the acquisition, consolidation, and retrieval of passive avoidance memory in Wistar rats. Furthermore, the ghrelin-receiving group spent less time in the open arms and made fewer open-arm entries compared with the control group (p<0.05), whereas exercising Wistar rats spent more time in the open-arm zone than the control group (p<0.05). The exercise + ghrelin group showed reduced anxiety (p<0.05). The results of this study demonstrate that aerobic exercise contributes to an increase in the endogenous production of ghrelin, and physical activity alleviates the anxiety-related behaviors induced by intra-hippocampal injection of ghrelin.
In general, exercise and ghrelin can reduce anxiety and improve memory.
Keywords: anxiety, ghrelin, aerobic exercise, learning, passive avoidance memory
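The group comparison behind the reported p-values can be sketched as a one-way ANOVA computed by hand. The open-arm times below are illustrative assumptions, not the study's measurements; the sketch only shows how the F statistic partitions between-group and within-group variance.

```python
# Minimal one-way ANOVA sketch of the kind of group comparison reported
# (open-arm time in the elevated plus-maze). Group data are illustrative
# assumptions, not the study's measurements.

groups = {
    "control":  [10.0, 12.0, 11.0],
    "ghrelin":  [6.0, 7.0, 8.0],     # less open-arm time: more anxiety
    "exercise": [14.0, 15.0, 16.0],  # more open-arm time: less anxiety
}

all_vals = [v for g in groups.values() for v in g]
grand = sum(all_vals) / len(all_vals)

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups.values() for v in g)

df_between = len(groups) - 1             # 2
df_within = len(all_vals) - len(groups)  # 6
F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 2))  # 48.0, well above the F(2,6) critical value of ~5.14
```

An F this large would correspond to p far below 0.05, which is the form of evidence the abstract summarizes for its group differences.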
Procedia PDF Downloads 119
1340 Health Risk Assessment of Exposing to Benzene in Office Building around a Chemical Industry Based on Numerical Simulation
Authors: Majid Bayatian, Mohammadreza Ashouri
Abstract:
The release of hazardous chemicals is one of the major problems for office buildings in the chemical industry, and environmental risks are therefore inherent to these environments. The adverse health effects of airborne benzene concentrations have been a matter of significant concern, especially in oil refineries. The chronic and acute adverse health effects caused by benzene exposure have attracted wide attention. Acute exposure to benzene through inhalation can cause headaches, dizziness, drowsiness, and irritation of the skin; chronic exposure has been reported to cause aplastic anemia and leukemia in occupational settings. The association between chronic occupational exposure to benzene and the development of aplastic anemia and leukemia has been documented by several epidemiological studies. Numerous research works have investigated benzene emissions, determined benzene concentrations at different locations of refinery plants, and reported considerable health risks. The high cost of industrial control measures requires justification through lifetime health risk assessment of exposed workers and the public. In the present study, a Computational Fluid Dynamics (CFD) model is proposed to assess the exposure risk in an office building near a refinery due to its release of benzene. For the simulation, GAMBIT, FLUENT, and CFD-Post were used as pre-processor, processor, and post-processor, and the model was validated by comparison with experimental measurements of benzene concentration and wind speed. The validation results showed that the model is highly valid and can be used for health risk assessment. The simulation and risk assessment results showed that benzene can disperse to a nearby office building and that the exposure risk is unacceptable.
According to the results of this study, a validated CFD model could be very useful to decision-makers for control measures and could support them in emergency planning for probable accidents. The model can also be used to assess exposure in various types of accidents, as well as to other pollutants such as toluene, xylene, and ethylbenzene, under different atmospheric conditions.
Keywords: health risk assessment, office building, benzene, numerical simulation, CFD
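The risk characterization that follows such a CFD dispersion result can be sketched as a back-of-envelope calculation: multiply the predicted concentration by an inhalation unit risk factor and compare against a de-minimis risk level. The concentration below is an illustrative assumption; the unit risk factor is taken from the commonly cited EPA IRIS range for benzene (roughly 2.2e-6 to 7.8e-6 per µg/m³), and the 1-in-a-million threshold is a common regulatory convention, not a value from this study.

```python
# Back-of-envelope sketch of the risk characterization step after a CFD
# dispersion run: convert a predicted benzene concentration into a
# lifetime excess cancer risk. The concentration is an illustrative
# assumption; the unit risk is the upper end of the EPA IRIS range.

c_benzene = 10.0    # modeled concentration at the office building, ug/m3 (assumed)
unit_risk = 7.8e-6  # lifetime excess cancer risk per ug/m3 (IRIS upper bound)
acceptable = 1e-6   # commonly used de-minimis lifetime risk level

risk = unit_risk * c_benzene
print(f"lifetime excess cancer risk = {risk:.1e}")
print("unacceptable" if risk > acceptable else "acceptable")
```

Even a modest sustained concentration drives the computed risk well above the de-minimis level, which is consistent with the "unacceptable" finding the abstract reports.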
Procedia PDF Downloads 130
1339 The Effect of Tele Rehabilitation Training on Complications of Hip Osteoarthritis: A Quasi-Experimental Study
Authors: Mahnaz Seyedoshohadaee, Azadeh Nematolahi, Parsa Rahimi
Abstract:
Introduction: Rehabilitation training after hip joint surgery is one of the priorities of nursing and, with the advancement of technology, can now be delivered remotely. This study was conducted to evaluate the effect of telerehabilitation education on the outcomes of hip osteoarthritis. Methods: The present study was a quasi-experimental study conducted on patients after hip replacement in the first half of 2023. Seventy patients, selected by convenience sampling, were included in the study and divided into intervention and control groups by a non-random method. Inclusion criteria were: a maximum of 6 months since the hip joint replacement, age between 30 and 70 years, the ability to follow instructions, the absence of accompanying orthopedic lesions such as fractures, and access to the Internet, a smartphone, and the Skype program. Exclusion criteria were severe speech disorder and non-participation in a training session. The research tools were a demographic profile form and the Hip disability and Osteoarthritis Outcome Score (HOOS), which were completed by the patients before and after the training. The intervention group received four training sessions covering an introduction to the disease, risk factors, symptoms, symptom management, medication, diet, appropriate exercises, and pain relief methods, delivered via Skype to groups of 4 to 6 people, one session per week lasting 30 to 45 minutes. SPSS version 22 was used to analyze the data. Results: The mean osteoarthritis outcome score before the intervention was 112.74±29.64 in the test group and 110.41±16.34 in the control group, with no significant difference (P=0.682). After the intervention, the scores were 85.25±21.43 and 109.94±15.74, respectively, and this difference was significant (P<0.001).
Within the test group, the mean outcome scores differed significantly from pre-test to post-test (p<0.001), whereas in the control group the difference was not significant (p=0.130). Conclusion: The results showed that telerehabilitation education has a positive effect on reducing the outcomes of hip osteoarthritis, so it is recommended that nurses use telerehabilitation education in their training in order to empower patients.
Keywords: training, rehabilitation, hip osteoarthritis, patient, complications
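The within-group pre/post comparison reported above can be sketched as a paired t-test on outcome scores. The five patient scores below are illustrative assumptions, not the study's data (the study had 35 patients per group); the sketch only shows how the paired t statistic is formed from the score differences.

```python
# Sketch of a within-group pre/post comparison via a paired t-test on
# HOOS-style scores. The five patient scores are illustrative assumptions,
# not the study's data.

import math

pre  = [112.0, 110.0, 115.0, 108.0, 120.0]
post = [85.0, 88.0, 90.0, 82.0, 95.0]

diffs = [a - b for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t = mean_d / math.sqrt(var_d / n)                        # paired t statistic
print(round(t, 2))  # well above the 2.776 critical value for df = 4
```

A t statistic this far beyond the critical value corresponds to p<0.001, matching the form of the significant pre/post difference reported for the test group.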
Procedia PDF Downloads 1
1338 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh
Authors: Zahid Khalil, Saad Ul Haque, Asif Khan
Abstract:
Decision making about identifying suitable sites for any project by considering different parameters is difficult; using GIS and Multi-Criteria Analysis (MCA) can simplify it. This technology has proved to be an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN) and runoff index, with a spatial resolution of 30 m. The data used for deriving these layers include the 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP) and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map is derived from Landsat 8 using supervised classification. The slope, drainage network and watershed are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate the surface runoff from the rainfall; prior to this, the SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, together with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method of the Analytical Hierarchy Process (AHP) is used as the MCA technique for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in the GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes: best suitable, suitable, moderate and less suitable. This study contributes to decision-making about suitable-site analysis for small dams using geospatial data with a minimal amount of ground data.
These suitability maps can help water resource management organizations determine feasible rainwater harvesting (RWH) structures.
Keywords: remote sensing, GIS, AHP, RWH
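The AHP weighting step described above can be sketched by deriving criterion weights from a pairwise comparison matrix with the row geometric-mean approximation to the principal eigenvector. The 3x3 matrix and its judgments (e.g. slope vs. rainfall vs. land use) are illustrative assumptions, not the weights used in the study.

```python
# Sketch of the AHP weighting step: derive criterion weights from a
# pairwise comparison matrix via the row geometric-mean approximation
# to the principal eigenvector. The 3x3 matrix of judgments (e.g. slope
# vs. rainfall vs. land use) is an illustrative assumption.

import math

# pairwise[i][j] = importance of criterion i relative to criterion j
# (Saaty 1-9 scale; reciprocals below the diagonal)
pairwise = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]

geo_means = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = [g / total for g in geo_means]
print([round(w, 3) for w in weights])  # roughly [0.637, 0.258, 0.105]
```

These normalized weights are what a weighted-overlay operation then applies to each criterion raster; a full AHP workflow would also check the consistency ratio of the judgment matrix before accepting the weights.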
Procedia PDF Downloads 389
1337 Investigation of Heat Conduction through Particulate Filled Polymer Composite
Authors: Alok Agrawal, Alok Satapathy
Abstract:
In this paper, an attempt is made to determine the effective thermal conductivity (keff) of particulate filled polymer composites using the finite element method (FEM), a powerful computational technique. The commercially available finite element package ANSYS is used for this numerical analysis. Three-dimensional spheres-in-cube lattice array models are constructed to simulate the microstructures of micro-sized particulate filled polymer composites with filler contents ranging from 2.35 to 26.8 vol %. Based on the temperature profiles across the composite body, the keff of each composition is estimated theoretically by FEM. Composites with similar filler contents are then fabricated using the compression molding technique by reinforcing micro-sized aluminium oxide (Al2O3) in polypropylene (PP) resin. The thermal conductivities of these composite samples are measured according to ASTM standard E-1530 using the Unitherm™ Model 2022 tester, which operates on the double guarded heat flow principle. The experimentally measured conductivity values are compared with the numerical values and also with those obtained from existing empirical models. This comparison reveals that the FEM-simulated values are in reasonably good agreement with the experimental data. Values obtained from the theoretical model proposed by the authors are in even closer agreement with the measured values within the percolation limit. Further, this study shows that there is a gradual enhancement in the conductivity of the PP resin with increasing filler percentage, and its heat conduction capability is thereby improved. It is noticed that with the addition of 26.8 vol % of filler, the keff of the composite increases to around 6.3 times that of neat PP. This study validates the proposed model for the PP-Al2O3 composite system and proves that finite element analysis can be an excellent methodology for such investigations.
With such improved heat conduction ability, these composites can find potential applications in micro-electronics, printed circuit boards, encapsulations, etc.
Keywords: analytical modelling, effective thermal conductivity, finite element method, polymer matrix composite
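One of the existing empirical models such FEM results are typically compared against is the Maxwell model for a dilute dispersion of spheres. The sketch below evaluates it for a PP-Al2O3 system; the conductivity values for neat PP and Al2O3 are typical literature figures assumed for illustration, and the Maxwell model is known to underpredict enhancement near the percolation limit, consistent with the abstract's note that agreement is closer within that limit.

```python
# Sketch of an empirical-model comparison: the Maxwell model for the
# effective conductivity of spheres dispersed in a matrix. PP and Al2O3
# conductivities are typical literature values, assumed for illustration.

def maxwell_keff(km, kf, phi):
    """Maxwell model: matrix km, filler kf, filler volume fraction phi."""
    num = kf + 2.0 * km + 2.0 * phi * (kf - km)
    den = kf + 2.0 * km - phi * (kf - km)
    return km * num / den

km = 0.22   # W/m-K, neat polypropylene (assumed)
kf = 30.0   # W/m-K, Al2O3 (assumed)

for phi in (0.0235, 0.134, 0.268):
    keff = maxwell_keff(km, kf, phi)
    print(f"phi = {phi:.4f}: keff = {keff:.3f} W/m-K ({keff / km:.2f}x neat PP)")
```

At 26.8 vol % this dilute-limit model predicts only about a 2x enhancement, far below the measured 6.3x, which is why models accounting for particle interaction and percolation are needed at higher loadings.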
Procedia PDF Downloads 321
1336 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters.
The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters. Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
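The template-matching idea underlying this abstract can be sketched in a few lines: each object's observed colors are compared against a template library via chi-square, and class probabilities follow from the resulting likelihoods. This is a simplified toy illustration under invented assumptions, not the CADIS pipeline itself; the template values, error, and class labels below are made up for demonstration.

```python
import numpy as np

def classify_by_templates(colors, sigma, templates, labels):
    """Assign an object to the class of its best-fitting color templates.

    chi-square is computed per template; class probabilities follow from
    exp(-chi2/2) likelihoods summed over each class and normalized over
    the whole library.
    """
    colors = np.asarray(colors, dtype=float)
    chi2 = np.sum(((templates - colors) / sigma) ** 2, axis=1)
    like = np.exp(-0.5 * (chi2 - chi2.min()))  # shifted for numerical stability
    probs = {}
    for lab, l in zip(labels, like):
        probs[lab] = probs.get(lab, 0.0) + l
    total = sum(probs.values())
    probs = {k: v / total for k, v in probs.items()}
    best = max(probs, key=probs.get)
    return best, probs

# toy library: two "star" templates and one "quasar" template in two colors
templates = np.array([[0.5, 0.2], [0.6, 0.25], [-0.3, 1.1]])
labels = ["star", "star", "quasar"]
cls, probs = classify_by_templates([0.55, 0.22], sigma=0.05,
                                   templates=templates, labels=labels)
```

In the same spirit, the paper's redshift estimation replaces the discrete class sum with a probability density over redshift, from which both the estimate and its error are read off.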
Procedia PDF Downloads 78
1335 Physicochemical Properties of Pea Protein Isolate (PPI)-Starch and Soy Protein Isolate (SPI)-Starch Nanocomplexes Treated by Ultrasound at Different pH Values
Authors: Gulcin Yildiz, Hao Feng
Abstract:
Soybean proteins are the most widely used and researched proteins in the food industry. Due to soy allergies among consumers, however, alternative legume proteins having similar functional properties have been studied in recent years. These alternative proteins are also expected to have a price advantage over soy proteins. One such protein that has shown good potential for food applications is pea protein. Besides the favorable functional properties of pea protein, it also contains fewer anti-nutritional substances than soy protein. However, a comparison of the physicochemical properties of pea protein isolate (PPI)-starch nanocomplexes and soy protein isolate (SPI)-starch nanocomplexes treated by ultrasound has not been well documented. This study was undertaken to investigate the effects of ultrasound treatment on the physicochemical properties of PPI-starch and SPI-starch nanocomplexes. Pea protein isolate (85% pea protein) provided by Roquette (Geneva, IL, USA) and soy protein isolate (SPI, Pro-Fam® 955) obtained from the Archer Daniels Midland Company were adjusted to different pH levels (2-12) and treated with 5 minutes of ultrasonication (100% amplitude) to form complexes with starch. The soluble protein content was determined by the Bradford method using BSA as the standard. The turbidity of the samples was measured using a spectrophotometer (Lambda 1050 UV/VIS/NIR Spectrometer, PerkinElmer, Waltham, MA, USA). The volume-weighted mean diameters (D4, 3) of the soluble proteins were determined by dynamic light scattering (DLS). The emulsifying properties of the proteins were evaluated by the emulsion stability index (ESI) and emulsion activity index (EAI). Both the soy and pea protein isolates showed a U-shaped solubility curve as a function of pH, with a high solubility above the isoelectric point and a low one below it. Increasing the pH from 2 to 12 resulted in increased solubility for both the SPI and PPI-starch complexes. 
The pea nanocomplexes showed greater solubility than the soy ones. The SPI-starch nanocomplexes showed better emulsifying properties determined by the emulsion stability index (ESI) and emulsion activity index (EAI) due to SPI’s high solubility and high protein content. The PPI had similar or better emulsifying properties at certain pH values than the SPI. The ultrasound treatment significantly decreased the particle sizes of both kinds of nanocomplex. For all pH levels with both proteins, the droplet sizes were found to be lower than 300 nm. The present study clearly demonstrated that applying ultrasonication under different pH conditions significantly improved the solubility and emulsifying properties of the SPI and PPI. The PPI exhibited better solubility and emulsifying properties than the SPI at certain pH levels. Keywords: emulsifying properties, pea protein isolate, soy protein isolate, ultrasonication
Procedia PDF Downloads 319
1334 Domain Specificity and Language Change: Evidence from South Central (Kuki-Chin) Tibeto-Burman
Authors: Mohammed Zahid Akter
Abstract:
In studies of language change, mental factors including analogy, reanalysis, and frequency have received considerable attention as possible catalysts for language change. In comparison, relatively little is known regarding which functional domains or construction types are more amenable to these mental factors than others. In this regard, this paper will show with data from South Central (Kuki-Chin) Tibeto-Burman languages how language change interacts with certain functional domains or construction types. These construction types include transitivity, person marking, and polarity distinctions. Thus, it will be shown that transitive clauses are more prone to change than intransitive and ditransitive clauses, clauses with 1st person argument marking are more prone to change than clauses with 2nd and 3rd person argument marking, non-copular clauses are more prone to change than copular clauses, affirmative clauses are more prone to change than negative clauses, and standard negatives are more prone to change than negative imperatives. The following schematic structure can summarize these findings: transitive>intransitive, ditransitive; 1st person>2nd person, 3rd person; non-copular>copular; affirmative>negative; and standard negative>negative imperatives. In the interest of space, here only one of these findings is illustrated: affirmative>negative. In Hyow (South Central, Bangladesh), the innovative and preverbal 1st person subject k(V)- occurs in an affirmative construction, and the archaic and postverbal 1st person subject -ŋ occurs in a negative construction. Similarly, in Purum (South Central, Northeast India), the innovative and preverbal 1st person subject k(V)- occurs in an affirmative construction, and the archaic and postverbal 1st person subject *-ŋ occurs in a negative construction.
As with the 1st person subject, we also see that in Anal (South Central, Northeast India), the innovative and preverbal 2nd person subject V- occurs in an affirmative construction, and the archaic and postverbal 2nd person subject -t(V) occurs in a negative construction. To conclude, data from South Central Tibeto-Burman languages suggest that language change interacts with functional domains, as some construction types are more susceptible to change than others. Keywords: functional domains, Kuki-Chin, language change, south-central, Tibeto-Burman
Procedia PDF Downloads 70
1333 Instrumental Characterization of Cyanobacteria as Polyhydroxybutyrate Producer
Authors: Eva Slaninova, Diana Cernayova, Zuzana Sedrlova, Katerina Mrazova, Petr Sedlacek, Jana Nebesarova, Stanislav Obruca
Abstract:
Cyanobacteria are gram-negative prokaryotes belonging to a group of photosynthetic bacteria. In comparison with heterotrophic microorganisms, cyanobacteria utilize atmospheric nitrogen and carbon dioxide without any additional substrates. This ability could be employed in biotechnology for the production of bioplastics, specifically polyhydroxyalkanoates (PHAs), which are primarily accumulated as a storage material in cells in the form of intracellular granules. In this study, two cyanobacterial cultures from the genus Synechocystis were used, namely Synechocystis sp. PCC 6803 and Synechocystis salina CCALA 192. Several approaches were optimized and applied, including microscopic techniques such as cryo-scanning electron microscopy (Cryo-SEM), transmission electron microscopy (TEM), and fluorescence lifetime imaging microscopy (FLIM) using Nile red as a fluorescent probe. These instrumental techniques characterized the morphology of the intracellular space and the cell surface. The next group of methods employed was spectroscopic techniques such as UV-Vis spectroscopy measured in two modes (turbidimetry and integrating sphere) and Fourier transform infrared spectroscopy (FTIR). All these diverse techniques were used for the detection and characterization of pigments (chlorophylls, carotenoids, phycocyanin, etc.) and PHAs, in our case poly(3-hydroxybutyrate) (P3HB). To verify the results, gas chromatography (GC) was employed to determine the amount of P3HB in biomass. Cyanobacteria were also characterized as polyhydroxybutyrate producers by flow cytometry, which counts cells and at the same time distinguishes cells containing P3HB from those without, using the fluorescent probe BODIPY and the live/dead fluorescent probe SYTO Blue. Based on the results, the P3HB content in cyanobacterial cells was determined, as well as the overall fitness of the cells.
Acknowledgment: Funding: This study was partly funded by the project GA19-29651L of the Czech Science Foundation (GACR) and partly funded by the Austrian Science Fund (FWF), project I 4082-B25. Keywords: cyanobacteria, fluorescent probe, microscopic techniques, poly(3-hydroxybutyrate), spectroscopy, chromatography
Procedia PDF Downloads 229
1332 Comparison of Propofol versus Ketamine-Propofol Combination as an Anesthetic Agent in Supratentorial Tumors: A Randomized Controlled Study
Authors: Jakkireddy Sravani
Abstract:
Introduction: The maintenance of hemodynamic stability is of pivotal importance in supratentorial surgeries. Anesthesia for supratentorial tumors requires an understanding of localized or generalized rising ICP, regulation and maintenance of intracerebral perfusion, and avoidance of secondary systemic ischemic insults. We aimed to compare the effects of the combination of ketamine and propofol with propofol alone when used as an induction and maintenance anesthetic agent during supratentorial tumor surgery. Methodology: This prospective, randomized, double-blinded controlled study was conducted at AIIMS Raipur after obtaining the Institute Ethics Committee approval (1212/IEC-AIIMSRPR/2022 dated 15/10/2022), CTRI/2023/01/049298 registration, and written informed consent. Fifty-two supratentorial tumor patients posted for craniotomy and excision were included in the study. The patients were randomized into two groups. One group received a combination of ketamine and propofol, and the other group received propofol for induction and maintenance of anesthesia. Intraoperative hemodynamic stability and quality of brain relaxation were studied in both groups. Statistical analysis and technique: An MS Excel spreadsheet program was used to code and record the data. Data analysis was done using IBM Corp SPSS v23. The independent-sample t-test was applied to normally distributed continuous data when two groups were compared, the chi-square test to categorical data, and the Wilcoxon test to non-normally distributed data. Results: The patients were comparable in terms of demographic profile, duration of the surgery, and intraoperative input-output status. The trends in BIS over time were similar between the two groups (p-value = 1.00). Intraoperative hemodynamics (SBP, DBP, MAP) were better maintained in the ketamine and propofol combination group during induction and maintenance (p-value < 0.01). The quality of brain relaxation was comparable between the two groups (p-value = 0.364).
Conclusion: The ketamine and propofol combination for the induction and maintenance of anesthesia was associated with superior hemodynamic stability, required fewer vasopressors during excision of supratentorial tumors, provided adequate brain relaxation, and offered some degree of neuroprotection compared to propofol alone. Keywords: supratentorial tumors, hemodynamic stability, brain relaxation, ketamine, propofol
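The analysis plan this abstract describes — independent-sample t-test for continuous data, chi-square for categorical data, Wilcoxon for non-normal data — can be sketched with SciPy. All numbers below are invented illustrations of each test's usage, not the trial's data.

```python
import numpy as np
from scipy import stats

# hypothetical intraoperative MAP readings (mmHg) for the two arms
ketofol = [78, 82, 80, 79, 83, 81, 77, 80]
propofol = [70, 68, 72, 69, 71, 67, 73, 70]

# continuous, approximately normal data -> independent-sample t-test
t_stat, p_val = stats.ttest_ind(ketofol, propofol)

# categorical data (e.g. vasopressor use yes/no per arm) -> chi-square test
table = np.array([[3, 23], [12, 14]])  # hypothetical 2x2 counts
chi2, p_cat, dof, _ = stats.chi2_contingency(table)

# non-normal paired data -> Wilcoxon signed-rank test
pre = [4, 5, 3, 4, 5, 4, 3, 5]
post = [2, 3, 2, 3, 3, 2, 2, 3]
w_stat, p_wilcoxon = stats.wilcoxon(pre, post)
```

The choice of test follows the data type, exactly as in the abstract: distributional shape decides between the t-test and the Wilcoxon test, and contingency counts go to the chi-square test.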
Procedia PDF Downloads 25
1331 Comparing Quality of Care in Family Planning Services in Primary Public and Private Health Care Facilities in Ethiopia
Authors: Gizachew Assefa Tessema, Mohammad Afzal Mahmood, Judith Streak Gomersall, Caroline O. Laurence
Abstract:
Introduction: Improving access to quality family planning services is key to improving the health of women and children. However, there is currently little evidence on the quality and scope of family planning services provided by private facilities, and how this compares to the services provided in public facilities in Ethiopia. This is important, particularly in determining whether the government should further expand the role of the private sector in the delivery of family planning services. Methods: This study used the 2014 Ethiopian Services Provision Assessment Plus (ESPA+) survey dataset for comparing the structural aspects of quality of care in family planning services. The present analysis used a weighted sample of 1093 primary health care facilities (955 public and 138 private). This study employed logistic regression analysis to compare key structural variables between public and private facilities. While taking the structural variables as the outcome for comparison, the facility type (public vs private) was used as the key exposure of interest. Results: When comparing availability of basic amenities (infrastructure), public facilities were less likely to have functional cell phones (AOR=0.12; 95% CI: 0.07-0.21), and water supply (AOR=0.29; 95% CI: 0.15-0.58) than private facilities. However, public facilities were more likely to have staff available 24 hours in the facility (AOR=0.12; 95% CI: 0.07-0.21), providers having family planning related training in the past 24 months (AOR=4.4; 95% CI: 2.51, 7.64) and possessing guidelines/protocols (AOR= 3.1 95% CI: 1.87, 5.24) than private facilities. Moreover, comparing the availability of equipment, public facilities had higher odds of having pelvic models for IUD demonstration (AOR=2.60; 95% CI: 1.35, 5.01) and penile models for condom demonstration (AOR=2.51; 95% CI: 1.32, 4.78) than private facilities.
Conclusion: The present study suggests that the Ethiopian government needs to place emphasis on the private sector in terms of providing family planning guidelines and training on family planning services for their staff. It is also worthwhile for the public health facilities to allocate funding for improving the availability of basic amenities. Implications for policy and/or practice: This study calls on policy makers to design appropriate strategies that provide training opportunities for health care providers working in private health facilities. Keywords: quality of care, family planning, public-private, Ethiopia
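The odds ratios reported above come from logistic regression, which generalizes the crude 2x2 odds ratio to adjust for covariates. A minimal sketch of the underlying calculation, with a Wald (Woolf) confidence interval, is below; the facility counts are invented for illustration and are not the ESPA+ data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a 95% Wald (Woolf) confidence interval.

    a, b: outcome present/absent in the exposed group (e.g. public facilities)
    c, d: outcome present/absent in the comparison group (private facilities)
    """
    or_ = (a * d) / (b * c)
    # standard error of log(OR) from the four cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: water supply present/absent, public vs private
or_, lo, hi = odds_ratio_ci(400, 555, 110, 28)
```

An OR below 1 with a CI excluding 1 matches the "less likely" reading in the abstract; an adjusted OR (AOR) adds covariates to the regression but is interpreted the same way.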
Procedia PDF Downloads 353
1330 Room Temperature Ionic Liquids Filled Mixed Matrix Membranes for CO2 Separation
Authors: Asim Laeeq Khan, Mazhar Amjad Gilani, Tayub Raza
Abstract:
The use of fossil fuels for energy generation leads to the emission of greenhouse gases, particularly CO2, into the atmosphere. To date, several techniques have been proposed for the efficient removal of CO2 from flue gas mixtures. Membrane technology is a promising choice due to its several inherent advantages such as low capital cost, high energy efficiency, and low ecological footprint. One of the goals in the development of membranes is to achieve high permeability and selectivity. Mixed matrix membranes, comprising inorganic fillers embedded in a polymer matrix, are a class of membranes that have shown improved separation properties. One of the biggest challenges in the commercialization of mixed matrix membranes is the removal of non-selective voids existing at the polymer-filler interface. In this work, mixed matrix membranes were prepared using polysulfone as the polymer matrix and ordered mesoporous MCM-41 as the filler material. A new approach to removing the interfacial voids was developed by introducing a room temperature ionic liquid (RTIL) at the polymer-filler interface. The results showed that the imidazolium-based RTIL not only provided wettability characteristics but also helped in further improving the separation properties. The removal of interfacial voids and good contact between polymer and filler were verified by SEM measurements. The synthesized membranes were tested in a custom-built gas permeation set-up for the measurement of gas permeability and ideal gas selectivity. The results showed that the mixed matrix membranes exhibited significantly higher CO2 permeability than the pristine membrane. To gain further insight into the role of the fillers, diffusion and solubility measurements were carried out. The results showed that the presence of highly porous fillers increased the diffusion coefficient, while the solubility showed a slight drop.
The RTIL filled membranes showed higher CO2/CH4 and CO2/N2 selectivity than unfilled membranes while the permeability dropped slightly. The increase in selectivity was due to the highly selective RTIL used in this work. The study revealed that RTIL filled mixed matrix membranes are an interesting candidate for gas separation membranes. Keywords: ionic liquids, CO2 separation, membranes, mixed matrix membranes
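The diffusion and solubility measurements mentioned above connect to permeability through the standard solution-diffusion relation P = D × S, and the ideal selectivity the abstract reports is simply the ratio of two pure-gas permeabilities. A minimal sketch of that arithmetic, with invented coefficients in arbitrary consistent units:

```python
def permeability(diffusivity, solubility):
    """Solution-diffusion model: permeability is the product of the
    diffusion coefficient D and the solubility coefficient S (P = D * S)."""
    return diffusivity * solubility

def ideal_selectivity(perm_a, perm_b):
    """Ideal selectivity for gas A over gas B is the pure-gas permeability ratio."""
    return perm_a / perm_b

# invented illustrative values: a porous filler raises D for CO2
# while S drops slightly, mirroring the trend reported in the abstract
p_co2 = permeability(diffusivity=8.0, solubility=2.5)
p_ch4 = permeability(diffusivity=2.0, solubility=0.4)
alpha_co2_ch4 = ideal_selectivity(p_co2, p_ch4)
```

This decomposition explains the abstract's finding: a filler can raise overall permeability through D even when S falls, and a selective RTIL layer shifts the permeability ratio.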
Procedia PDF Downloads 479
1329 Study and Simulation of a Severe Dust Storm over West and South West of Iran
Authors: Saeed Farhadypour, Majid Azadi, Habibolla Sayyari, Mahmood Mosavi, Shahram Irani, Aliakbar Bidokhti, Omid Alizadeh Choobari, Ziba Hamidi
Abstract:
In recent decades, the frequency of dust events has increased significantly in the west and southwest of Iran. First, a survey of dust events during the period 1990-2013 was carried out using historical dust data collected at 6 weather stations scattered over the west and southwest of Iran. After statistical analysis of the observational data, one of the most severe dust storm events that occurred in the region, from 3rd to 6th July 2009, was selected and analyzed. The WRF-Chem model was used to simulate the amount of PM10 and how it is transported to these areas. The initial and lateral boundary conditions for the model were obtained from GFS data with 0.5°×0.5° spatial resolution. In the simulation, two aerosol schemes (GOCART and MADE/SORGAM) with 3 options (chem_opt=106, 300 and 303) were evaluated. Results of the statistical analysis of the historical data showed that the southwest of Iran has a high frequency of dust events: Bushehr station had the highest frequency among the stations and Urmia station the lowest. Also, in the period 1990 to 2013, the years 2009 and 1998, with 3221 and 100 events respectively, had the highest and lowest numbers of dust events; by monthly variation, June and July had the highest frequency of dust events and December the lowest. Model results showed that the MADE/SORGAM scheme predicted the values and trends of PM10 better than the other schemes and showed the better performance in comparison with the observations. Finally, the distribution of PM10 and the surface wind maps obtained from the numerical modeling showed the formation of dust plumes in Iraq and Syria and their transport to the west and southwest of Iran. In addition, comparing the MODIS satellite image acquired on 4th July 2009 with the model output at the same time showed the good ability of WRF-Chem to simulate the spatial distribution of dust. Keywords: dust storm, MADE/SORGAM scheme, PM10, WRF-Chem
Procedia PDF Downloads 271
1328 Assessing Solid Waste Management Practices in Port Harcourt City, Nigeria
Authors: Perpetual Onyejelem, Kenichi Matsui
Abstract:
Solid waste management is one essential area for urban administration to achieve environmental sustainability. Proper solid waste management (SWM) improves the environment by reducing disease and improving public health. On the other hand, improper SWM practices negatively impact public health and environmental sustainability. This article evaluates SWM in Port Harcourt, Nigeria, with the goal of determining the current solid waste management practices and their health implications. This study used secondary data, relying on existing published literature and official documents. The Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement and its four-stage inclusion/exclusion criteria were utilized as part of a systematic literature review technique to locate, from specific databases (Scopus and Google Scholar), the literature concerning SWM practices and the implementation of solid waste management policies in Port Harcourt between 2014 and 2023, and their health effects. The results found that despite the existence and implementation of the Rivers State Waste Management Policy and the formulation of the National Policy on Solid Waste Management in Port Harcourt, residents continued to dump waste in drainages. They were unaware of waste sorting and dumped waste haphazardly. This trend has persisted due to a lack of political commitment to the effective implementation and monitoring of policies and strategies and a lack of training provided to waste collectors regarding the SWM approach, which involves sorting and separating waste. In addition, inadequate remuneration for waste collectors, the absence of community participation in policy formulation, and insufficient awareness among residents regarding the 3R approach are also contributory factors.
This has contributed to the emergence of diseases such as malaria, Lassa fever, and cholera in Port Harcourt, increasing the expense of healthcare for locals, particularly low-income households. The study urges the government to prioritize protecting the health of its citizens by studying the methods other nations have taken to address the problem of solid waste management and adopting those that work best for their region. A bottom-up strategy should be used to include locals in developing solutions. However, citizens, who are always the most impacted by this issue, should launch initiatives to address it and put pressure on the government to assist them where they face limitations. Keywords: health effects, solid waste management practices, environmental pollution, Port-Harcourt
Procedia PDF Downloads 59
1327 An Experimental Study on the Influence of Brain-Break in the Classroom on the Physical Health and Academic Performance of Fourth Grade Students
Authors: Qian Mao, Xiaozan Wang, Jiarong Zhong, Xiaolin Zou
Abstract:
Introduction: With the decline in students' physical health and the increase in study pressure, students' academic performance has suffered. Objective: This study aims to verify whether the Brain-Break intervention in the fourth-grade classroom of primary school can improve students' physical health and academic performance. Methods: According to the principle of no difference in pre-test data, students from two classes of grade four in Fuhai Road Primary School, Fushan district, Yantai city, Shandong province, were selected as experimental subjects, including 50 students in the experimental class (25 males and 25 females) and 50 students in the control class (24 males and 26 females). The students were asked to perform a 4-minute Brain-Break program designed by the researcher in the second class in the morning and the afternoon, and the intervention lasted for 12 weeks. In addition, the lung capacity, 50-meter run, sitting body forward bend, one-minute jumping rope, and one-minute sit-ups stipulated in the national standards for physical fitness of students (revised in 2014) were selected as the indicators of physical health. The scores of Chinese, Mathematics, and English in the unified academic test of the municipal education bureau were selected as the indicators of academic performance. The independent-sample t-test was used to compare and analyze the data of each index between the two classes. The paired-sample t-test was used to compare and analyze the data of each index within each class. This paper presents only results with significant differences. Results: In terms of physical health, lung capacity (P=0.002, T= -2.254), one-minute rope skipping (P=0.000, T=3.043), and one-minute sit-ups (P=0.045, T=6.153) were significantly different between the experimental class and the control class.
In terms of academic performance, there is a significant difference between the Chinese performance of the experimental class and the control class (P=0.009, T=4.833). Conclusion: Adding the Brain-Break intervention in the classroom can effectively improve the cardiorespiratory endurance (lung capacity), coordination (jumping rope), and abdominal strength (sit-ups) of fourth-grade students. At the same time, it can also effectively improve their Chinese performance. Therefore, it is suggested to promote micro-sports in primary school classrooms throughout the country so as to help students improve their physical health and academic performance. Keywords: academic performance, brain break, fourth grade, physical health
Procedia PDF Downloads 101
1326 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes almost as much time to data quality processes, while a data project without data quality awareness suffers in financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and take a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can be measured directly on the data within the datasets. Third, each indicator was aggregated into a dimension using factor analysis. Next, the indicators and dimensions were weighted by the effort required for data preparation and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprised ten indicators; (2) for the data quality dimensions based on statistical characteristics, the ten indicators could be reduced to 4 dimensions.
(3) The developed composite indicator, the SDQI, can describe the overall quality of each dataset and can separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. We can use the SDQI to assess all data in a data project, estimate effort, and set priorities. The SDQI also works well with agile methods, by using it for assessment in the first sprint. After passing the initial evaluation, we can add more specific data quality indicators in the next sprint. Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
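The final aggregation step of such an index — weighting per-dimension scores, combining them linearly, and binning the result into three quality levels — can be sketched as below. This is a toy illustration of the general composite-indicator pattern, not the SDQI itself; the dimension names, weights, and cutoffs are assumptions for demonstration.

```python
import numpy as np

def composite_quality_index(scores, weights, cutoffs=(0.5, 0.8)):
    """Aggregate per-dimension quality scores in [0, 1] into one index
    and map it to a three-level label.

    scores:  array of shape (n_datasets, n_dimensions)
    weights: per-dimension weights (here: assumed effort/usability weights)
    cutoffs: (acceptable, good) thresholds on the aggregated index
    """
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize weights to sum to 1
    index = scores @ w              # weighted linear aggregation
    labels = np.where(index >= cutoffs[1], "Good Quality",
             np.where(index >= cutoffs[0], "Acceptable Quality",
                      "Poor Quality"))
    return index, labels

# four assumed dimensions (e.g. completeness, validity, consistency, uniqueness)
scores = [[0.95, 0.90, 0.85, 0.92],   # well-curated dataset
          [0.70, 0.60, 0.65, 0.75],   # mixed dataset
          [0.30, 0.20, 0.40, 0.35]]   # messy dataset
index, labels = composite_quality_index(scores, weights=[2, 1, 1, 1])
```

In the full method described above, the per-dimension scores themselves would first come from factor analysis over the ten raw indicators; the aggregation and three-level labeling shown here are the last two steps.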
Procedia PDF Downloads 139
1325 Including Local Economic and Anthropometric Parameters in the Design of a Stand up Wheelchair
Authors: Urrutia Fernando, López Jessica, Sánchez Carlos, San Antonio Thalía
Abstract:
Ecuador, as a signatory country of the Convention on the Rights of Persons with Disabilities (CRPD), has in recent years strengthened the structures and legal framework required to protect this minority, which comprises 13.2% of its total population. However, the reality is that this group has disproportionately low earnings and low educational attainment in comparison with the general population. The main obstacles to promoting job placement of wheelchair users are environmental barriers caused by inaccessible structures and transportation, mainly due to the cost, for private and public entities, of providing the reasonable accommodation they require. It is widely known that product development and production are needed to support effective implementation of the CRPD and that walking and standing are major life activities. In this context, the objective of this investigation is to promote job placement of wheelchair users in the province of Tungurahua by means of the design, production, and marketing of a customized stand-up wheelchair. Exploratory interviews and measurements were performed on a representative sample of working-age wheelchair users who developed their disability after reaching physical maturity and who are capable of performing professional activities with their upper limbs, in order to identify user preferences and determine the local economic and anthropometric parameters to be included in the wheelchair design.
The findings reveal factors that uniquely impact quality of life and development for people with a mobility disability within the context of the province. First, transportation is a major issue, since public buses are not accessible to wheelchair users, and the absence of curb cuts and the presence of trash bins on the sidewalks, among other obstacles, hinder economically independent mobility. Second, the proposal, based on the idea of modifying the wheelchair so that it can overcome certain obstacles, helps wheelchair users improve their independent living and, by reducing the cost of accommodation for the employer, could improve their chances of finding work. Keywords: anthropometrics, job placement, stand up wheelchair, user centered design
Procedia PDF Downloads 555
1324 Analysis of the Treatment of Hemorrhagic Stroke in Multidisciplinary City Hospital №1, Nur-Sultan
Authors: M. G. Talasbayen, N. N. Dyussenbayev, Y. D. Kali, R. A. Zholbarysov, Y. N. Duissenbayev, I. Z. Mammadinova, S. M. Nuradilov
Abstract:
Background. Hemorrhagic stroke is an acute cerebrovascular accident resulting from rupture of a cerebral vessel or increased permeability of the vessel wall and imbibition of blood into the brain parenchyma. Arterial hypertension is a common cause of hemorrhagic stroke. Male gender and age over 55 years are risk factors for intracerebral hemorrhage. Treatment of intracerebral hemorrhage is aimed at the primary pathophysiological link: the relief of coagulopathy and the control of arterial hypertension. Early surgical treatment can limit cerebral compression and prevent the toxic effects of blood on the brain parenchyma. Despite progress in neuroimaging, the use of minimally invasive techniques, and navigation systems, mortality from intracerebral hemorrhage remains high. Materials and methods. The study included 78 patients (62.82% male and 37.18% female) with a verified diagnosis of hemorrhagic stroke in the period from 2019 to 2021. The age of patients ranged from 25 to 80 years; the average age was 54.66±11.9 years. Demographic data, brain CT data (localization, volume of hematomas), methods of treatment, and disease outcome were analyzed. Results. The retrospective analysis demonstrated that 78.2% of all patients underwent surgical treatment: decompressive craniectomy in 37.7%, craniotomy with hematoma evacuation in 29.5%, and hematoma drainage in 24.59% of cases. The study of the proportion of deaths, depending on the volume of intracerebral hemorrhage, showed that the number of deaths was higher in the group with a hematoma volume of more than 60 ml. Evaluation of the relationship between the time before surgery and mortality demonstrated that the most favorable outcome is observed with surgical treatment in the interval from 3 to 24 hours. Mortality depending on age did not reveal a significant difference between age groups.
An analysis of the impact of the surgery type on mortality revealed that decompressive craniectomy with or without hematoma evacuation led to an unfavorable outcome in 73.9% of cases, while craniotomy with hematoma evacuation and drainage led to mortality in only 28.82% of cases. Conclusion. Despite multimodal approaches, the development of surgical techniques and equipment, and the selection of optimal conservative therapy, the question of determining the tactics of managing and treating hemorrhagic stroke remains controversial. Nevertheless, our experience shows that surgical intervention within 24 hours from the moment of admission and craniotomy with hematoma evacuation improve the prognosis of treatment outcomes. Keywords: hemorrhagic stroke, intracerebral hemorrhage, surgical treatment, stroke mortality
Procedia PDF Downloads 106
1323 Load-Deflecting Characteristics of a Fabricated Orthodontic Wire with 50.6Ni 49.4Ti Alloy Composition
Authors: Aphinan Phukaoluan, Surachai Dechkunakorn, Niwat Anuwongnukroh, Anak Khantachawana, Pongpan Kaewtathip, Julathep Kajornchaiyakul, Peerapong Tua-Ngam
Abstract:
Aims: The objectives of this study were to determine the load-deflecting characteristics of a fabricated orthodontic wire with an alloy composition of 50.6% (atomic weight) Ni and 49.4% (atomic weight) Ti and to compare the results with Ormco, a commercially available pre-formed NiTi orthodontic archwire. Materials and Methods: Ingots of the alloy with an atomic weight ratio of 50.6 Ni : 49.4 Ti were used in this study. Three specimens were cut to wire dimensions of 0.016 inch x 0.022 inch. For comparison, a commercially available pre-formed NiTi archwire, Ormco, with dimensions of 0.016 inch x 0.022 inch was used. Three-point bending tests were performed at a temperature of 36 ± 1 °C using a Universal Testing Machine on the newly fabricated and commercial archwires to assess the characteristics of the load-deflection curve under loading and unloading forces. The loading and unloading features at the deflection points 0.25, 0.50, 0.75, 1.0, 1.25, and 1.5 mm were compared. Descriptive statistics were used to evaluate each variable, and an independent t-test at p < 0.05 was used to analyze the mean differences between the two groups. Results: The load-deflection curve of the 50.6Ni : 49.4Ti wires exhibited the characteristic features of superelasticity. The loading and unloading slopes of the Ormco NiTi archwire curve were more parallel than those of the newly fabricated NiTi wires. The average deflection force of the 50.6Ni : 49.4Ti wire was 304.98 g for loading and 208.08 g for unloading. Similarly, the values were 358.02 g for loading and 253.98 g for unloading of the Ormco NiTi archwire. The interval difference forces between deflection points were in the range 20.40-121.38 g and 36.72-92.82 g for the loading and unloading curves of the 50.6Ni : 49.4Ti wire, respectively, and 4.08-157.08 g and 14.28-90.78 g for the loading and unloading curves of the commercial wire, respectively.
The average deflection force of the 50.6Ni : 49.4Ti wire was less than that of the Ormco NiTi archwire, which could have been due to variations in the wire dimensions. Although the force required at each deflection point of loading and unloading differed between the 50.6Ni : 49.4Ti wire and the Ormco NiTi archwire, the values were still within the acceptable limits for clinical use in orthodontic treatment. Conclusion: The 50.6Ni : 49.4Ti wires presented the characteristics of a superelastic orthodontic wire. The loading and unloading forces were also suitable for orthodontic tooth movement. These results serve as a suitable foundation for further studies on the development of new orthodontic NiTi archwires. Keywords: 50.6Ni 49.4Ti alloy wire, load-deflection curve, loading and unloading force, orthodontic
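The group comparison described above (descriptive statistics plus an independent t-test on the mean forces) can be sketched in pure Python. The force values below are hypothetical placeholders at the six deflection points, not the study's raw measurements, and the pooled-variance t statistic is computed by hand rather than with a statistics package:

```python
import math
import statistics as st

def independent_t(a, b):
    """Pooled-variance two-sample t statistic (equal variances assumed).
    Returns the t value and the degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = st.mean(a), st.mean(b)
    # pooled sample variance
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# hypothetical loading forces (g) at deflections 0.25 ... 1.5 mm
fabricated = [142.8, 204.0, 244.8, 275.4, 295.8, 304.98]
commercial = [157.1, 238.7, 289.7, 326.4, 349.9, 358.02]
t, df = independent_t(fabricated, commercial)
```

The resulting t value would then be compared against the critical value for the computed degrees of freedom at p < 0.05, as in the abstract.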
Procedia PDF Downloads 303
1322 Investigating English Dominance in a Chinese-English Dual Language Program: Teachers' Language Use and Investment
Authors: Peizhu Liu
Abstract:
Dual language education, also known as immersion education, differs from traditional language programs that teach a second or foreign language as a subject. Instead, dual language programs adopt a content-based approach, using both a majority language (e.g., English, in the case of the United States) and a minority language (e.g., Spanish or Chinese) as a medium of instruction to teach math, science, and social studies. By granting each language of instruction equal status, dual language education seeks to educate not only meaningfully but equitably and to foster tolerance and appreciation of diversity, making it essential for immigrants, refugees, indigenous peoples, and other marginalized students. Despite the cognitive and academic benefits of dual language education, recent literature has revealed that English is disproportionately privileged across dual language programs. Scholars have expressed concerns about the unbalanced status of majority and minority languages in dual language education, as favoring English in this context may inadvertently reaffirm its dominance and moreover fail to serve the needs of children whose primary language is not English. Through a year-long study of a Chinese-English dual language program, the extensively disproportionate use of English has also been observed by the researcher. However, despite the fact that Chinese-English dual language programs are the second-most popular program type after Spanish in the United States, this issue remains underexplored in the existing literature on Chinese-English dual language education. In fact, the number of Chinese-English dual language programs being offered in the U.S. has grown rapidly, from 8 in 1988 to 331 as of 2023. Using Norton and Darvin's investment model theory, the current study investigates teachers' language use and investment in teaching Chinese and English in a Chinese-English dual language program at an urban public school in New York City. 
The program caters to a significant number of minority children from working-class families. Adopting an ethnographic and discourse-analytic approach, this study seeks to understand language use dynamics in the program and how micro- and macro-factors, such as students' identity construction, parents' and teachers' language ideologies, and the capital associated with each language, influence teachers' investment in teaching Chinese and English. The research will help educators and policymakers understand the obstacles that stand in the way of the goal of dual language education, that is, the creation of a more inclusive classroom, achieved by regarding both languages of instruction as equally valuable resources. The implications for how to balance the use of the majority and minority languages will also be discussed. Keywords: dual language education, bilingual education, language immersion education, content-based language teaching
Procedia PDF Downloads 84
1321 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study proposes a method for estimating the stress distribution of beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of averaged lattices from the raw TLS data, so that the data satisfy a condition suitable for analysis, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimating the stress distribution of a structural member or of the whole structure is an important factor in the safety evaluation of a structure. Existing sensors, including the ESG (electric strain gauge) and LVDT (Linear Variable Differential Transformer), are contact-type sensors that must be installed on the structural members; they also have various limitations, such as the need for separate space where network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system of LiDAR (light detection and ranging), which can measure the displacement of a target over a long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds, which contain many points, each with local coordinates. Point clouds are not linearly distributed but dispersed, so interpolation is essential for their analysis. Through the formation of averaged lattices and CSSI on the raw data, a method was developed that can estimate the displacement of a simple beam. The developed method can be extended to calculate the strain and, finally, is applicable to estimating the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS.
Through a comparison of the estimated stress and the reference stress, the validity of the method was confirmed. Keywords: structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation
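The pipeline the abstract describes, averaging a dispersed point cloud into lattices and then differentiating a smooth fit twice to get bending stress (sigma = E * c * w'', with E the elastic modulus and c the distance from the neutral axis), can be sketched as follows. This is a simplified illustration on synthetic data: the second derivative is taken with a central finite difference on the averaged lattice as a stand-in for the paper's cubic smoothing spline, and all dimensions and material constants are invented:

```python
import random

def average_lattice(points, cell):
    """Bin scattered (x, w) TLS samples into lattices of width `cell` and
    average each bin, taming the dispersion of the raw point cloud."""
    bins = {}
    for x, w in points:
        bins.setdefault(int(x // cell), []).append((x, w))
    return sorted(
        (sum(p[0] for p in b) / len(b), sum(p[1] for p in b) / len(b))
        for b in bins.values()
    )

def bending_stress(lattice, E, c):
    """Estimate sigma = E * c * w''(x) at interior lattice nodes, using a
    central finite difference in place of the spline's second derivative."""
    stresses = []
    for i in range(1, len(lattice) - 1):
        (x0, w0), (x1, w1), (x2, w2) = lattice[i - 1], lattice[i], lattice[i + 1]
        h = (x2 - x0) / 2
        stresses.append((x1, E * c * (w0 - 2 * w1 + w2) / h ** 2))
    return stresses

# Synthetic 'scan' of a 4 m beam deflected as w(x) = k*x*(L-x), so w'' = -2k.
random.seed(1)
L, k = 4.0, 1e-3
cloud = [(x, k * x * (L - x) + random.gauss(0.0, 1e-6))
         for x in ((i + 0.5) * L / 512 for i in range(512))]
stress = bending_stress(average_lattice(cloud, 0.25), E=200e9, c=0.05)
```

For this synthetic deflection the recovered stress should be close to E * c * (-2k) at every interior node, which is the kind of estimated-versus-reference comparison the abstract uses for validation.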
Procedia PDF Downloads 433
1320 Linking Milk Price and Production Costs with Greenhouse Gas Emissions of Luxembourgish Dairy Farms
Authors: Rocco Lioy, Tom Dusseldorf, Aline Lehnen, Romain Reding
Abstract:
A study concerning both the profitability and the ecological performance of dairy production in Luxembourg was carried out for the years 2017, 2018 and 2019. The data of 100 dairy farms, referring to the greenhouse gas emissions (ecology) and the profitability (economy) of dairy production, were evaluated, and the average was compared to the corresponding figures of 80 Luxembourgish dairy farms evaluated in the years 2014, 2015 and 2016. The ecological evaluation confirmed that farm efficiency (defined here especially as the lowest ratio between feedstuff used and milk produced) is the key driver for significantly reducing the level of emissions in dairy farms. In both farm groups and in both periods, the efficient farms show almost the same level of emissions per kg ECM (1.17 kg CO2-eq) as the intensive farms (1.13 kg CO2-eq), and at the same time a far lower level of emissions related to the production surface (9.9 vs. 13.9 t CO2-eq/ha). Concerning the economic performance, it was observed that in the years 2017, 2018 and 2019, the intensive farms (we define intensity primarily in terms of milk produced per ha) reached a higher profit (incomes minus costs, with subsidies considered) than the efficient farms (4.8 vs. 2.6 €-cent/kg ECM), in contradiction with the observations of the years 2014, 2015 and 2016 (1.5 vs. 3.7 €-cent/kg ECM). The most important reason for this divergent behavior was a change in the income and cost structure between the considered periods. In the later period (2017, 2018 and 2019), the milk price was considerably higher than in the previous period, and the production costs were lower. This was an advantage for intensive farms, which produce the highest quantity of milk with a high amount of production means. In the period 2014, 2015 and 2016, with lower milk prices but comparable production costs, the advantage lay with the efficient farms.
In conclusion, we expect that in the near future, when production costs in particular will presumably be much higher than in recent years, the profitability of dairy farming will decrease. In this case, we assume that efficient farms will deliver not only ecologically but also economically better performance than production-intensive farms. High milk prices and low production costs are poor incentives for carbon-smart farming. Keywords: efficiency, intensity, dairy, emissions, prices, costs
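The indicators compared above, emission intensity per kg of energy-corrected milk (ECM), emissions per hectare, and profit margin per kg ECM, are simple ratios. The sketch below computes all three for a hypothetical farm; the input figures are invented for illustration and are not the study's farm data:

```python
def emissions_per_kg_ecm(total_co2eq_kg, milk_kg_ecm):
    # kg CO2-eq emitted per kg of energy-corrected milk
    return total_co2eq_kg / milk_kg_ecm

def emissions_per_ha(total_co2eq_kg, area_ha):
    # t CO2-eq per hectare of production surface
    return total_co2eq_kg / 1000 / area_ha

def margin_cents_per_kg(income_eur, costs_eur, milk_kg_ecm):
    # profit (incomes minus costs) in euro-cents per kg ECM
    return (income_eur - costs_eur) / milk_kg_ecm * 100

# hypothetical efficient farm: 500 t ECM produced on 59 ha
milk = 500_000.0
intensity = emissions_per_kg_ecm(585_000.0, milk)          # 1.17 kg/kg ECM
per_area = emissions_per_ha(585_000.0, 59.0)               # ~9.9 t/ha
margin = margin_cents_per_kg(200_000.0, 187_000.0, milk)   # 2.6 cent/kg
```

A per-kg metric rewards efficiency while a per-ha metric rewards low land-use pressure, which is why the two farm groups rank so differently on the two scales.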
Procedia PDF Downloads 97
1319 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake
Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou
Abstract:
Lake water quality monitoring in combination with the use of earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used in order to explore the ability of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths and chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, which is characterized by a near absence of quantitative, temporal and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were applied to the image of late October 2013, and the results were validated through comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. As the validation process showed, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by the chl-a concentration (R = 0.5) and the CDOM absorption at 420 nm (R = 0.3). The in-situ nitrate, nitrite, phosphate and total nitrogen concentrations of 2014 were below the detection limit of the instrument used; hence no statistical elaboration was conducted for them. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations.
Our results were consistent with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration. Keywords: Landsat 8, oligotrophic lake, remote sensing, water quality
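The empirical-algorithm step described above, regressing station measurements against band combinations and reporting the correlation R, can be sketched in pure Python. The band ratio and chlorophyll-a values below are invented for illustration (they are not the Trichonis Lake data), and a single-predictor least-squares fit stands in for the study's multi-band models:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ols_fit(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# hypothetical band ratio (e.g., green/blue reflectance) per sampling station
ratio = [0.82, 0.88, 0.91, 0.97, 1.03, 1.10, 1.15, 1.22]
chl_a = [1.4, 1.7, 1.6, 2.0, 2.3, 2.4, 2.8, 2.9]  # invented chl-a, ug/L
slope, intercept = ols_fit(ratio, chl_a)
r = pearson_r(ratio, chl_a)
```

Validation then repeats the comparison on an independent date: the fitted model predicts chl-a from the second image's ratios, and the R between predictions and the in-situ data of that date is reported.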
Procedia PDF Downloads 396