Search results for: kinematic global positioning system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21988

688 Dynamic-Cognition of Strategic Mineral Commodities: An Empirical Assessment

Authors: Carlos Tapia Cortez, Serkan Saydam, Jeff Coulton, Claude Sammut

Abstract:

Strategic mineral commodities (SMC), both energy and metal commodities, have long been fundamental for human beings. There is a strong, long-run relation between the mineral resources industry and society's evolution, with the provision of primary raw materials becoming one of the most significant drivers of economic growth. Given the relevance of mineral resources for the entire economy and society, an understanding of SMC market behaviour that allows price fluctuations to be simulated has become crucial for governments and firms. As with any human activity, SMC price fluctuations are affected by economic, geopolitical, environmental, technological and psychological issues, in which cognition plays a major role. Cognition is defined as the capacity to store information in memory and to process it for decision making, problem solving or human adaptation. It therefore has a significant role in systems that exhibit dynamic equilibrium through time, such as economic growth. Cognition allows not only an understanding of past behaviours and trends in SMC markets but also supports future expectations of demand/supply levels and prices, although speculation is unavoidable. Technological development may also be regarded as a cognitive system. Since the Industrial Revolution, technological developments have had a significant influence on SMC production costs and prices, and have likewise allowed co-integration between commodities and market locations. This suggests a close relation between structural breaks, technology and price evolution. SMC price forecasting has commonly been addressed by econometric and Gaussian-probabilistic models. Econometric models may incorporate the relationships between variables; however, they are static, which leads to an incomplete picture of price evolution through time.
Gaussian-probabilistic models may evolve through time; however, price fluctuations are addressed under the assumptions of random behaviour and a normal distribution, which seem far from the real behaviour of both the market and prices. Random fluctuation ignores the evolution of market events and the technical and temporal relations between variables, giving the illusion of controlled future events. The normal distribution underestimates price fluctuations by imposing restricted ranges, curtailing decision making to a pre-established space. A proper understanding of SMC price dynamics, taking into account the historical-cognitive relation between economic, technological and psychological factors over time, is fundamental in attempting to simulate prices. The aim of this paper is to discuss the SMC market cognition hypothesis and empirically demonstrate its dynamic-cognitive capacity. Three of the largest and most traded SMCs, oil, copper and gold, will be assessed to examine economic, technological and psychological cognition, respectively.
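As an illustration of the Gaussian-probabilistic baseline the abstract argues against, the sketch below simulates a commodity price path under a random-walk (geometric Brownian motion) assumption. All parameter values are hypothetical and chosen only for illustration; this is not the authors' model.

```python
import math
import random

def simulate_gbm_price(p0, mu, sigma, days, seed=0):
    """Simulate a commodity price path under the Gaussian random-walk
    (geometric Brownian motion) assumption the abstract critiques:
    log-price increments are i.i.d. normal, so fluctuations stay within
    a pre-established probabilistic range."""
    rng = random.Random(seed)
    prices = [p0]
    dt = 1.0 / 252  # one trading day as a fraction of a year
    for _ in range(days):
        shock = rng.gauss(0.0, 1.0)
        # drift term plus normally distributed noise on the log price
        prices.append(prices[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                            + sigma * math.sqrt(dt) * shock))
    return prices

# Hypothetical oil-like parameters: $70 start, 3% drift, 25% volatility.
path = simulate_gbm_price(p0=70.0, mu=0.03, sigma=0.25, days=252)
```

Under this model, structural breaks and market events have no persistent effect, which is precisely the shortcoming the paper highlights.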

Keywords: commodity price simulation, commodity price uncertainties, dynamic-cognition, dynamic systems

Procedia PDF Downloads 464
687 Preventing Discharge to No Fixed Address-Youth (NFA-Y)

Authors: Cheryl Forchuk, Sandra Fisman, Steve Cordes, Dan Catunto, Katherine Krakowski, Melissa Jeffrey, John D’Oria

Abstract:

The discharge of youth aged 16-25 from hospital into homelessness is a prevalent issue, despite research indicating social, safety, health and economic detriments to both the individual and the community. Lack of stable housing for youth discharged into homelessness results in long-term consequences, including exacerbation of health problems, costly health care service use and hospital readmission. People experiencing homelessness are four times more likely to be readmitted within one month of discharge, and hospitals must spend $2,559 more per client. Finding safe housing for these individuals is imperative to their recovery and transition back to the community. People discharged from hospital to homelessness experience challenges, including poor health outcomes and increased hospital readmissions. Youth are the fastest-growing subgroup of people experiencing homelessness in Canada. The needs of youth are unique and include supports related to education, employment opportunities, and age-related service barriers. This study aims to identify the needs of youth at risk of homelessness by evaluating the efficacy of the "Preventing Discharge to No Fixed Address – Youth" (NFA-Y) program, which aims to prevent youth from being discharged from hospital into homelessness. The program connects youth aged 16-25 who are inpatients at London Health Sciences Centre and St. Joseph's Health Care London to housing and financial support. Supports are offered through collaboration with community partners: Youth Opportunities Unlimited, Canadian Mental Health Association Elgin Middlesex, City of London Coordinated Access, Ontario Works, and the Salvation Army's Housing Stability Bank. This study was reviewed and approved by Western University's Research Ethics Board. A series of interviews is being conducted with approximately ninety-three youth participants at three time points: baseline (pre-discharge), six, and twelve months post-discharge.
Focus groups with participants, health care providers, and community partners are being conducted at three time points. In addition, administrative data from service providers will be collected and analyzed. Since homelessness has a detrimental effect on recovery, client and community safety, and healthcare expenditure, locating safe housing for psychiatric patients has had a positive impact on treatment, rehabilitation, and the system as a whole. If successful, the findings of this project will offer safe policy alternatives for the prevention of homelessness among at-risk youth, help set them up for success in their future years, and mitigate the rise of the homeless youth population in Canada.

Keywords: youth homelessness, no-fixed address, mental health, homelessness prevention, hospital discharge

Procedia PDF Downloads 105
686 Environmental and Economic Assessment of Yerba Mate as a Feed Additive for Feedlot Lambs

Authors: Danny Alexander R. Moreno, Gustavo L. Sartorello, Yuli Andrea P. Bermudez, Richard R. Lobo, Ives Claudio S. Bueno, Augusto H. Gameiro

Abstract:

Meat production is a significant sector of Brazil's economy; however, the agricultural segment has been criticised for its negative impacts on the environment, which contribute to climate change. It is therefore essential to implement nutritional strategies that can improve the environmental performance of livestock. This research aimed to estimate the environmental impact and profitability of using yerba mate extract (Ilex paraguariensis) as an additive in the feeding of feedlot lambs. Thirty-six castrated male lambs (average weight 23.90 ± 3.67 kg, average age 75 days) were randomly assigned to four experimental diets with different inclusion levels of yerba mate extract (0, 1, 2, and 4%) on a dry matter basis. The animals were confined for fifty-three days and fed a 60:40 corn silage-to-concentrate ratio. As an indicator of environmental impact, the carbon footprint (CF) was measured as kg of CO₂ equivalent (CO₂-eq) per kg of body weight produced (BWP). Greenhouse gas (GHG) emissions of methane (CH₄) from enteric fermentation were measured using the sulfur hexafluoride (SF₆) tracer gas technique, while CH₄ and nitrous oxide (N₂O) emissions from feces and urine, and carbon dioxide (CO₂) emissions from concentrate and silage processing, were estimated using the Intergovernmental Panel on Climate Change (IPCC) methodology. To estimate profitability, the gross margin was used, i.e., total revenue minus total cost, the latter comprising the purchase of animals and feed. The boundaries of this study covered only the lamb fattening system. Enteric CH₄ from the lambs was the largest source of on-farm GHG emissions (47%-50%), followed by CH₄ and N₂O emissions from manure (10%-20%) and CO₂ emissions from concentrate, silage, and fossil energy (5%-17%).
The treatment with the least environmental impact was the group with 4% yerba mate extract (YME), which showed a 3% reduction in total GHG emissions relative to the control (1462.5 and 1505.5 kg CO₂-eq, respectively). However, the scenario with 1% YME showed a 7% increase in emissions compared to the control group. Regarding CF, the treatment with 4% YME had the lowest value (4.1 kg CO₂-eq/kg LW) compared with the other groups. Nevertheless, although the 4% YME inclusion scenario showed the lowest CF, its gross margin decreased by 36% compared to the control group (0% YME), due to the cost of YME as a feed additive. The results showed that the extract has potential for reducing GHG emissions. However, the cost of implementing this input as a mitigation strategy increased the production cost. It is therefore important to develop policy strategies that help reduce the acquisition costs of inputs contributing to the environmental and economic benefit of the livestock sector.
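The carbon footprint indicator described above is a simple ratio, and the reported 3% reduction follows directly from the two emission totals. A minimal sketch (the body weight figure below is a hypothetical value, chosen only to reproduce a CF of about 4.1 kg CO₂-eq/kg; it is not from the study):

```python
def carbon_footprint(total_ghg_kg_co2eq, body_weight_produced_kg):
    """Carbon footprint as kg CO2-eq per kg of body weight produced,
    the indicator used in the study."""
    return total_ghg_kg_co2eq / body_weight_produced_kg

def reduction_vs_control(treatment_kg, control_kg):
    """Relative change in total GHG emissions versus the control group."""
    return (control_kg - treatment_kg) / control_kg

# Emission totals as reported (4% YME vs. control); the ~0.029 result
# corresponds to the 3% reduction stated in the abstract.
saving = reduction_vs_control(1462.5, 1505.5)

# Hypothetical body weight produced, for illustration of the CF ratio only.
cf_4pct = carbon_footprint(1462.5, 356.7)  # ~4.1 kg CO2-eq per kg
```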

Keywords: meat production, natural additives, profitability, sheep

Procedia PDF Downloads 139
685 Wood as a Climate Buffer in a Supermarket

Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø

Abstract:

Natural materials like wood absorb and release moisture; wood can thus buffer the indoor climate. When used wisely, this buffer potential can counteract the influence of the outdoor climate on the building. The mass of moisture involved in the buffering is defined as the potential hygrothermal mass, which can act as energy storage in a building. This works like a natural heat pump, with the moisture actively damping diurnal changes. In Norway, the ability of wood to buffer the indoor climate has been tested in several buildings with extensive use of wood, including supermarkets. This paper defines the potential hygrothermal mass of a supermarket building, including the chosen ventilation strategy and how the climate impact of the building is reduced. The building is located above the Arctic Circle, 50 m from the coastline, in Valnesfjord. Built in 2015, it has a shopping area, including toilet and entrance, of 975 m². The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. In order to contribute to the total energy balance, the wood needs dynamic influence to activate its hygrothermal mass. Drying and moistening the wood are energy intensive, and this energy potential can be exploited: examples are using solar heat for drying instead of heating the indoor air, and using raw air with high enthalpy so that dry wooden surfaces absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating; thus, the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord combines the weather forecast with historical data, namely a five-day forecast and a two-day history, in order to avoid reacting to smaller weather changes. The ventilation control has three regimes. During summer, moisture is retained so that drying damps the solar radiation gains.
In winter, moist air is let into the shopping area to contribute to the heating. When the temperature is lowered during the night, the moisture absorbed in the wood slows down the cooling; the ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing moisture or drying out is chosen according to the weather prognoses. To ensure indoor climate quality, measurements of CO₂ and VOC override the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge on the hygrothermal mass potential of materials is promising: by including the time-dependent buffer capacity of materials, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and coupled to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
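The three-regime control described above can be sketched as a simple decision rule. The thresholds and the way the two-day history is averaged with the five-day forecast below are illustrative assumptions, not the building's actual control logic:

```python
def ventilation_mode(history_temps, forecast_temps, season):
    """Choose the hygrothermal buffering regime described in the abstract:
    retain moisture in summer (drying damps solar gains), admit moist air
    in winter (absorption releases latent heat), and in spring/autumn
    store or dry according to the weather prognosis.

    Hypothetical sketch: the 10 degC threshold and the pooled averaging of a
    two-day history with a five-day forecast are illustrative choices.
    """
    # Smooth over history and forecast so the control does not react
    # to short-lived weather changes.
    outlook = ((sum(history_temps) + sum(forecast_temps))
               / (len(history_temps) + len(forecast_temps)))
    if season == "summer":
        return "retain_moisture"   # evaporative damping of solar heat loads
    if season == "winter":
        return "admit_moist_air"   # moisture uptake contributes to heating
    # Spring/autumn: choose the regime from the smoothed outlook.
    return "store_moisture" if outlook < 10.0 else "dry_out"
```

In a real installation the CO₂/VOC override mentioned in the abstract would take precedence over whatever this rule returns.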

Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast

Procedia PDF Downloads 218
684 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools

Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami

Abstract:

The development of urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security concerns. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedure involved in this practice includes passive solar gain (in building and urban design), solar integration, location strategy, and 3D models, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector-coupling strategies for establishing solar power in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess solar power and inject it into the urban network during peak periods. The simulations and analyses were carried out in EnergyPlus. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximizing the utilization of solar PV in an urban distribution feeder.
Additionally, 3D models are made in Revit; they are key components of decision making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
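The storage scheme described above, charging a small battery from excess solar generation and injecting the stored energy during peak periods, can be sketched as a simple hourly dispatch loop. The profiles, the battery size, and the full-discharge-at-peak rule are illustrative assumptions, not the study's EnergyPlus model:

```python
def dispatch(pv_kwh, load_kwh, peak_hours, battery_capacity_kwh=1.0):
    """Hour-by-hour sketch of the abstract's scheme: excess solar
    generation at a connection point charges a small battery, and the
    stored energy is injected into the urban network at peak hours.
    All profiles and sizes are hypothetical."""
    soc = 0.0        # battery state of charge in kWh
    injected = []    # kWh delivered to the network in each hour
    for hour, (pv, load) in enumerate(zip(pv_kwh, load_kwh)):
        surplus = max(pv - load, 0.0)                    # excess solar, kWh
        charge = min(surplus, battery_capacity_kwh - soc)
        soc += charge                                    # store what fits
        if hour in peak_hours and soc > 0.0:
            injected.append(soc)                         # discharge at peak
            soc = 0.0
        else:
            injected.append(0.0)
    return injected

# Two sunny morning hours, then a two-hour evening peak (hour 2 is peak).
profile = dispatch(pv_kwh=[2, 2, 0, 0], load_kwh=[1, 1, 3, 3], peak_hours={2})
```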

Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design

Procedia PDF Downloads 77
683 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application

Authors: A. Mihoc, K. Cater

Abstract:

On the bridge of a ship, the officers look for visual aids to navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids. This paper presents the use of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical navigation-related mobile AR applications have been limited to the leisure industry. If proved viable, this prototype could facilitate the creation of similar applications to help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird's-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet's sensors: GPS, gyroscope, accelerometer, compass and camera. Sea trials on board a Navy ship and a commercial ship revealed the end-users' interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all within a high level of accuracy. However, testing highlighted several issues which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port.
This overloaded the display and required over 45 seconds to load the data; extra filters for the navigational aids are therefore being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference from the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at allowing users to manually calibrate the compass. It is expected that for the use of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to adopt a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.
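The bearing and distance the prototype computes from the ship's GPS fix to each aid are presumably based on standard great-circle formulas; the abstract does not describe the actual implementation, so the sketch below simply shows the textbook haversine distance and initial (forward-azimuth) bearing:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees,
    clockwise from true north) from the ship's position (lat1, lon1)
    to a navigational aid (lat2, lon2), using the haversine and
    forward-azimuth formulas on a spherical Earth model."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine distance
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * r * math.asin(math.sqrt(a))
    # Initial bearing (forward azimuth), normalised to [0, 360)
    y = math.sin(dlmb) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb))
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    return dist, bearing
```

An on-screen AR overlay would additionally have to correct this true bearing for the compass deviation reported in the trials before aligning it with the camera view.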

Keywords: compass error, GPS, maritime navigation, mobile augmented reality

Procedia PDF Downloads 330
682 Listening to Voices: A Meaning-Focused Framework for Supporting People with Auditory Verbal Hallucinations

Authors: Amar Ghelani

Abstract:

People with auditory verbal hallucinations (AVH) who seek support from mental health services commonly report feeling unheard and invalidated in their interactions with social workers and psychiatric professionals. Current mental health training and clinical approaches have proven inadequate in addressing the complex nature of voice hearing. Childhood trauma is a key factor in the development of AVH and can render people more vulnerable to hearing supportive and/or disturbing voices. Lived experiences of racism, poverty, and immigration are also associated with the development of what is broadly classified as psychosis. Despite evidence affirming the influence of environmental factors on voice hearing, the Western biomedical system typically conceptualizes this experience as a symptom of genetically based mental illness which requires diagnosis and treatment. Overemphasis on psychiatric medications, referrals, and directive approaches to people's problems has shifted clinical interventions away from assessing and addressing problems directly related to AVH. The Maastricht approach offers voice hearers and mental health workers an alternative and respectful starting point for understanding and coping with voices. The approach was developed by voice hearers in partnership with mental health professionals and entails an innovative method of assessing and creating meaning from voice hearing and related life stressors. The objectives of the approach are to help people who hear voices: (1) understand the problems and/or people the voices may represent in their history, and (2) cope with distress and find solutions to related problems. The Maastricht approach has also been found to help voice hearers integrate emotional conflicts, reduce avoidance or fear associated with AVH, improve therapeutic relationships, and increase a sense of control over internal experiences.
The proposed oral presentation will be guided by a recovery-oriented theoretical framework, which suggests that healing from psychological wounds occurs through social connections and community support systems. The presentation will start with a brainstorming exercise to identify participants' pre-existing knowledge of the subject matter. This will lead into a literature review on the relations between trauma, intersectionality, and AVH. An overview of the Maastricht approach and a review of research related to its therapeutic risks and benefits will follow. Participants will learn trauma-informed coping skills and questions that can help voice hearers make meaning from their experiences. The presentation will conclude with a review of resources and learning opportunities through which participants can expand their knowledge of the Hearing Voices Movement and the Maastricht approach.

Keywords: Maastricht interview, recovery, therapeutic assessment, voice hearing

Procedia PDF Downloads 115
681 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without handling the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping errors. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four different datasets were obtained from the Dryad digital repository. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity between those methods' pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset; there was a consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, i.e., different capture rates between individuals. In these examples, data homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems the most appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
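To make concrete how recapture counts drive a population estimate, the sketch below implements the classic Chapman-corrected Lincoln-Petersen estimator, a much simpler relative of Capwire and BayesN shown purely for illustration (it is not one of the methods compared in the study):

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population
    size from two sampling sessions: n1 individuals identified in the
    first session, n2 in the second, of which m2 are recaptures.
    Fewer recaptures imply a larger underlying population."""
    if m2 < 0 or m2 > min(n1, n2):
        raise ValueError("recaptures must satisfy 0 <= m2 <= min(n1, n2)")
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# E.g., 100 genotypes in session one, 80 in session two, 20 recaptured.
estimate = lincoln_petersen(100, 80, 20)
```

Like BayesN, this estimator assumes equal capture probability for all individuals, so the capture-rate heterogeneity discussed above would bias it as well.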

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 144
680 Analysis of the Outcome of the Treatment of Osteoradionecrosis in Patients after Radiotherapy for Head and Neck Cancer

Authors: Petr Daniel Kovarik, Matt Kennedy, James Adams, Ajay Wilson, Andy Burns, Charles Kelly, Malcolm Jackson, Rahul Patil, Shahid Iqbal

Abstract:

Introduction: Osteoradionecrosis (ORN) is a recognised toxicity of radiotherapy (RT) for head and neck cancer (HNC). The existing literature lacks a generally accepted definition and staging system for this toxicity. Objective: The objective is to analyse the outcome of surgical and non-surgical treatments of ORN. Material and Method: Data on 2303 patients treated for HNC with radical or adjuvant RT or RT-chemotherapy from January 2010 to December 2021 were retrospectively analysed. Median follow-up for the whole group of patients was 37 months (range 0–148 months). Results: ORN developed in 185 patients (8.1%). The location of ORN was as follows: mandible=170, maxilla=10, and extra-oral=5. Multiple ORNs developed in 7 patients. The 5 patients with extra-oral ORN were excluded from the treatment analysis, as their management is different. In the 180 patients with oral cavity ORN, median follow-up was 59 months (range 5–148 months). ORN healed in 106 patients; treatment failed in 74 patients (improving=10, stable=43, and deteriorating=21). Median healing time was 14 months (range 3-86 months). Notani staging was available in 158 patients with jaw ORN and no previous surgery to the mandible (Notani class I=56, class II=27, and class III=76). 28 ORN (mandible=27, maxilla=1; Notani I=23, Notani II=3, Notani III=1) healed spontaneously, with a median healing time of 7 months (range 3–46 months). In 20 patients, ORN developed after dental extraction, and in 1 patient in the neomandible after radical surgery as part of the primary treatment. In 7 patients, ORN developed and spontaneously healed in irradiated bone with no previous surgical/dental intervention. Radical resection of the ORN (segmentectomy or hemi-mandibulectomy with fibula flap) was performed in 43 patients (all mandible; Notani II=1, Notani III=39; Notani class was not established in 3 patients, as ORN developed in the neomandible).
27 patients healed (63%); 15 patients failed (improving=2, stable=5, deteriorating=8). The median time from resection to healing was 6 months (range 2–30 months). 109 patients (mandible=100, maxilla=9; Notani I=3, Notani II=23, Notani III=35; Notani class was not established in 9 patients, as ORN developed in the maxilla/neomandible) were treated conservatively using a combination of debridement, antibiotics and Pentoclo. 50 patients healed (46%), with a median healing time of 14 months (range 3–70 months); 59 patients are recorded with persistent ORN (improving=8, stable=38, deteriorating=13). Of the 109 patients treated conservatively, 13 were treated with Pentoclo only (all mandible; Notani I=6, Notani II=3, Notani III=3, 1 patient with a neomandible). In total, 8 patients healed (61.5%); treatment failed in 5 patients (stable=4, deteriorating=1). Median healing time was 14 months (range 4–24 months). Extra-orally (n=5), 3 cases of ORN were in the auditory canal and 2 in the mastoid. ORN healed in one patient (auditory canal) after 32 months. Treatment failed in 4 patients (improving=3, stable=1). Conclusion: The outcome of the treatment of ORN remains, in general, poor. Every effort should therefore be made to minimise the risk of developing this devastating toxicity.

Keywords: head and neck cancer, radiotherapy, osteoradionecrosis, treatment outcome

Procedia PDF Downloads 92
679 Empowering South African Female Farmers through Organic Lamb Production: A Cost Analysis Case Study

Authors: J. M. Geyser

Abstract:

Lamb is a popular meat throughout the world, particularly in Europe, the Middle East and Oceania. However, the conventional lamb industry faces challenges related to environmental sustainability, climate change, consumer health and dwindling profit margins. This has stimulated an increasing demand for organic lamb, which is perceived to increase environmental sustainability and to offer superior quality, taste and nutritional value; it is appealing to farmers, including small-scale and female farmers, as it often commands a premium price. Despite its advantages, organic lamb production presents challenges, a significant hurdle being the high production costs encompassing organic certification, lower stocking rates, higher mortality rates and marketing costs. These costs affect the profitability and competitiveness of organic lamb producers, particularly female and small-scale farmers, who often encounter additional obstacles such as limited access to resources and markets. This paper therefore examines the cost of producing organic lamb and its impact on female farmers, raising the research question: "Is organic lamb production the saving grace for female and small-scale farmers?" The objectives include estimating and comparing the production costs and profitability of organic lamb production with those of conventional lamb production, analyzing influencing factors, and assessing opportunities and challenges for female and small-scale farmers. The hypothesis states that organic lamb production can be a viable and beneficial option for female and small-scale farmers, provided that they can overcome the high production costs and access premium markets. The study uses a mixed-method approach, combining qualitative and quantitative data. The qualitative data involve semi-structured interviews with ten female and small-scale farmers engaged in organic lamb production in South Africa.
The interviews covered topics such as farm characteristics, practices, cost components, mortality rates, income sources and empowerment indicators. The quantitative data comprised secondary published information and primary data from one female farmer. The research findings indicate that when a female farmer moves from conventional to organic lamb production, the costs in the first year of organic production exceed those of conventional production by over 100%. This is due to lower stocking rates and higher mortality rates in the organic system. However, costs start to decrease in the second year, as stocking rates increase owing to manure application on grazing land and mortality rates fall owing to better worm resistance in the herd. In conclusion, this article sheds light on the economic dynamics of organic lamb production, focusing particularly on its impact on female farmers. To empower female farmers and promote sustainable agricultural practices, it is imperative to understand the cost structures and profitability of organic lamb production.
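The headline cost finding is a simple relative comparison. The sketch below uses entirely hypothetical cost figures (not the study's data) to show how a first-year organic cost just over double the conventional cost yields the reported ">100%" increase:

```python
def relative_cost_increase(organic_cost, conventional_cost):
    """First-year cost increase when moving from conventional to
    organic lamb production, expressed as a fraction of the
    conventional cost (0.5 means +50%)."""
    return (organic_cost - conventional_cost) / conventional_cost

# Hypothetical annual flock costs, for illustration only: an organic
# first year costing 2.1x the conventional baseline gives a 110%
# increase, consistent with the ">100%" pattern described above.
increase = relative_cost_increase(organic_cost=210000.0,
                                  conventional_cost=100000.0)
```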

Keywords: cost analysis, empowerment, female farmers, organic lamb production

Procedia PDF Downloads 75
678 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders

Authors: Christian Andrä, Luisa Zimmermann, Christina Müller

Abstract:

Introduction: The positive relation between physical activity and cognition is commonly known. Relevant studies show that in everyday school life active breaks can lead to improvement in certain abilities (e.g. attention and concentration). A beneficial effect is attributed in particular to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance, compared to inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean value) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student’s maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of the maximum performance capacity). Furthermore, a first attention/concentration test (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During weeks 2 and 3, crossover testing was conducted, comprising 18 minutes of cognitive preload (test for concentration performance, KLT-R), a break, and an attention/concentration test after a 2-minute transition. Different 10-minute breaks (active break: moderate physical activity at 65% Hfmax, or inactive break: mobile phone activity) took place between preload and transition. Major findings: In general, there was no impact of the different break interventions on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0 ± 7.2).
There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/1.5 ± 2.0; period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p = .012). Conclusion: Considering only the study’s overall results, the hypothesis must be dismissed. A more differentiated evaluation, however, shows that the error rates decreased after active breaks and increased after inactive breaks. Evidently, the effects of the active intervention occur with a delay. The 2-minute transition (regeneration time) used in this study seems insufficient, owing to the longer adaptation time of the cardiovascular system in untrained individuals, which might initially affect concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only then can optimum performance capacity be ensured.

Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity

Procedia PDF Downloads 315
677 Characterization of Anisotropic Deformation in Sandstones Using Micro-Computed Tomography Technique

Authors: Seyed Mehdi Seyed Alizadeh, Christoph Arns, Shane Latham

Abstract:

Geomechanical characterization of rocks in detail, and its possible implications for flow properties, is an important aspect of the reservoir characterization workflow. In order to gain more understanding of the microstructure evolution of reservoir rocks under stress, a series of axisymmetric triaxial tests was performed on two analogue rock samples. In-situ compression tests were coupled with high-resolution micro-computed tomography to elucidate the changes in the pore/grain network of the rocks under pressurized conditions. Two outcrop sandstones were chosen for the current study, representing different cementation states: a well-consolidated and a weakly consolidated granular system, respectively. High-resolution images were acquired while the rocks deformed in a purpose-built compression cell. A detailed analysis of the 3D images in each series of step-wise compression tests (up to the failure point) was conducted, which includes the registration of the deformed specimen images with the reference pristine dry rock image. Digital Image Correlation (DIC) based on the intensity of the registered 3D subsets, together with particle tracking, was utilized to map the displacement fields in each sample. The results reveal the complex architecture of the localized shear zone in the well-cemented Bentheimer sandstone, whereas for the weakly consolidated Castlegate sandstone no discernible shear band could be observed even after macroscopic failure. Post-mortem imaging of a sister plug from the friable rock after continuous compression reveals signs of a shear band pattern. This suggests that for friable sandstones at small scales the loading mode may affect the pattern of deformation. Prior to mechanical failure, the continuum digital image correlation approach can reasonably capture the kinematics of deformation. As failure occurs, however, discrete image correlation (i.e.
particle tracking) proves superior both in tracking the grains and in quantifying their kinematics (in terms of translations/rotations) with respect to any stage of compaction. An attempt was made to quantify the displacement field in compression using continuum Digital Image Correlation, which is based on the intensity correlation of the reference and secondary images. Such an approach has previously been applied only to unconsolidated granular systems under pressure. We are applying this technique to sandstones with various degrees of consolidation. This element of novelty sets the results of this study apart from previous attempts to characterize the deformation pattern in consolidated sands.
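As a rough illustration of the subset-matching idea behind intensity-based DIC, the sketch below finds the integer-pixel displacement of a single subset by maximizing the zero-normalized cross-correlation (ZNCC) over a search window. It is a hypothetical 2D minimal version (the abstract's workflow is 3D with subpixel refinement), and the synthetic "rock texture" is invented for the check.

```python
import numpy as np

def zncc(a, b):
    # Zero-normalized cross-correlation between two equal-size subsets.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def subset_displacement(ref_img, def_img, row, col, size, search):
    """Integer-pixel displacement of one square subset (continuum DIC step)."""
    ref = ref_img[row:row + size, col:col + size]
    best, best_dv = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = row + dr, col + dc
            if r < 0 or c < 0 or r + size > def_img.shape[0] or c + size > def_img.shape[1]:
                continue
            score = zncc(ref, def_img[r:r + size, c:c + size])
            if score > best:
                best, best_dv = score, (dr, dc)
    return best_dv, best

# Synthetic check: shift a random texture by (3, -2) pixels and recover it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 3, axis=0), -2, axis=1)
dv, score = subset_displacement(img, shifted, 20, 20, 16, 5)
```

Repeating this over a grid of subsets yields the displacement field; discrete particle tracking replaces the fixed subset with a segmented grain.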

Keywords: deformation mechanism, displacement field, shear behavior, triaxial compression, X-ray micro-CT

Procedia PDF Downloads 190
676 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the scanner. On the other hand, the recommendations contain additional tests, which were performed with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and the international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%.
Spatial and contrast resolution tests must comply with the results obtained at commissioning, otherwise the machine requires service. The result of the image noise test must fall within 20% of the baseline value. Slice thickness must meet manufacturer specifications, and the vertical deviation of the loaded patient table during longitudinal travel must not exceed 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of CT simulator functionality and its clinical effectiveness in radiation treatment planning. The legal requirement on the clinic is to set up its own QA programme with minimum testing; it remains the user’s decision whether additional testing, as recommended by international organizations, will be implemented to improve the overall quality of the radiation treatment planning procedure. The quality of the CT images used for planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
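The tolerance table above lends itself to a simple automated check. The sketch below is a hypothetical daily-QA checker: the baseline values and the measured readings are invented for illustration, but each limit encodes one of the tolerances quoted in the abstract (±5 HU CT number, ±10 HU uniformity, 5% CT-to-ED, 20% noise, 2 mm table deviation).

```python
# Invented commissioning baselines for the example.
BASELINE = {"ct_water_hu": 0.0, "noise_hu": 5.0}

def qa_report(measured):
    results = {}
    # CT number accuracy: within +/- 5 HU of the commissioning value.
    results["ct_number"] = abs(measured["ct_water_hu"] - BASELINE["ct_water_hu"]) <= 5.0
    # Field uniformity: every peripheral ROI within +/- 10 HU of the centre ROI.
    centre = measured["roi_hu"][0]
    results["uniformity"] = all(abs(r - centre) <= 10.0 for r in measured["roi_hu"][1:])
    # Image noise: within 20% of the baseline value.
    results["noise"] = abs(measured["noise_hu"] - BASELINE["noise_hu"]) / BASELINE["noise_hu"] <= 0.20
    # CT-to-ED curve: each point within 5% of the commissioning curve.
    results["ct_to_ed"] = all(
        abs(m - c) / abs(c) <= 0.05
        for m, c in zip(measured["ed_curve"], measured["ed_curve_ref"])
    )
    # Table stability: vertical deviation under load <= 2 mm.
    results["table"] = measured["table_sag_mm"] <= 2.0
    return results

sample = {
    "ct_water_hu": 2.1,
    "roi_hu": [1.0, 4.0, -6.0, 3.0, 2.0],
    "noise_hu": 5.6,
    "ed_curve": [1.02, 1.48, 2.05],
    "ed_curve_ref": [1.00, 1.50, 2.00],
    "table_sag_mm": 1.2,
}
report = qa_report(sample)
```

In practice the built-in vendor tests and the CIRS phantom analysis would feed such a report automatically, flagging any parameter that drifts out of tolerance.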

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 534
675 Relationship Demise After Having Children: An Analysis of Abandonment and Nuclear Family Structure vs. Supportive Community Cultures

Authors: John W. Travis

Abstract:

There is an epidemic of couples separating after a child is born into a family, generally with the father leaving emotionally or physically in the first few years after birth. This separation creates high levels of stress for both parents, especially the primary parent, leaving her (or him) less available to the infant for healthy attachment and nurturing. The deterioration of the couple’s bond leaves parents increasingly under-resourced, and the dependent child in a compromised environment, with an increased likelihood of developing an attachment disorder. Objectives: To understand the dynamics of a couple once the additional and extensive demands of a newborn are added to a nuclear family structure, and to identify effective ways to support all members of the family to thrive. Qualitative studies interviewed men, women, and couples after pregnancy and the early years as a family regarding key destructive factors, as well as effective tools for the couple to retain a strong bond. In-depth analysis of a few cases, including the author’s own experience, reveals deeper insights about subtle factors, replicated in wider studies. In a self-assessment survey, many fathers reported feeling abandoned due to the close bond of the mother-baby unit and, in turn, withdrawing themselves, leaving the mother without the support and closeness to resource her for the baby. Fathers reported various types of abandonment, from their partner to their mother, with whom they did not experience adequate connection as children. The study identified a key destructive factor to be unrecognized wounding from childhood that was carried into the relationship. The study culminated in the naming of Male Postpartum Abandonment Syndrome (MPAS), describing the epidemic in industrialized cultures with the nuclear family as the primary configuration.
A growing family system often collapses without a minimum number of adult caregivers, approximately four per infant (3.87), which allows for proper healing and caretaking. In cases with no additional family or community beyond one or two parents, the layers of abandonment and trauma result in the deterioration of the couple’s relationship and ultimately the family structure. The solution includes engaging the community in support of new families. The study identified (and recommends) specific resources to assist couples in recognizing and healing trauma and disconnection at multiple levels. Recommendations include wider awareness and availability of resources for healing childhood wounds, and greater community-building efforts to support couples so that the whole family can thrive.

Keywords: abandonment, attachment, community building, family and marital functioning, healing childhood wounds, infant wellness, intimacy, marital satisfaction, relationship quality, relationship satisfaction

Procedia PDF Downloads 227
674 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity passing through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel time reliability via a single optimal path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc receives a travel time weight of 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left.
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments were conducted on a benchmark network with 11 nodes and 21 arcs. Five travel time limitations and five demand requirements were set to compute the QoS value. For comparison, we tested the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demand, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
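To make the model concrete, the sketch below implements the node-splitting transformation and the complete-enumeration baseline that the authors compare against (not their decomposition algorithm): each intermediate node becomes an internal arc whose cost equals its time weight, a min-cost flow computation decides whether the demand can be shipped within the time limit for a given network state, and the QoS sums the probabilities of the states that qualify. The toy network, capacities, and probabilities are all hypothetical.

```python
import itertools

def min_cost_flow(n, edges, s, t, demand):
    """Send `demand` units s->t at minimum total cost (cost = travel time here).

    edges: list of [u, v, capacity, cost]. Returns (flow_sent, total_cost).
    """
    graph = [[] for _ in range(n)]
    E = []
    for u, v, cap, cost in edges:          # forward/backward residual pairs
        graph[u].append(len(E)); E.append([v, cap, cost])
        graph[v].append(len(E)); E.append([u, 0, -cost])
    flow, total_cost = 0, 0
    while flow < demand:
        # Bellman-Ford: cheapest augmenting path in the residual graph.
        INF = float("inf")
        dist, prev = [INF] * n, [-1] * n
        dist[s] = 0
        for _ in range(n):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for ei in graph[u]:
                    v, cap, c = E[ei]
                    if cap > 0 and dist[u] + c < dist[v]:
                        dist[v], prev[v] = dist[u] + c, ei
        if dist[t] == INF:
            break                           # demand infeasible in this state
        push, v, path = demand - flow, t, []
        while v != s:
            ei = prev[v]
            path.append(ei)
            push = min(push, E[ei][1])
            v = E[ei ^ 1][0]                # tail of the paired reverse edge
        for ei in path:
            E[ei][1] -= push
            E[ei ^ 1][1] += push
        flow += push
        total_cost += push * dist[t]
    return flow, total_cost

def qos(n, fixed_edges, random_edges, s, t, demand, time_limit):
    """Pr[`demand` units reach t with total travel time <= time_limit].

    random_edges: list of (edge, p_up); brute-force enumeration of states.
    """
    total = 0.0
    for state in itertools.product([0, 1], repeat=len(random_edges)):
        p, edges = 1.0, list(fixed_edges)
        for up, (edge, p_up) in zip(state, random_edges):
            p *= p_up if up else 1.0 - p_up
            if up:
                edges.append(edge)
        flow, cost = min_cost_flow(n, edges, s, t, demand)
        if flow >= demand and cost <= time_limit:
            total += p
    return total

# Toy network: source 0; node A split into (1 -> 2, time 2/unit);
# node B split into (3 -> 4, time 1/unit); sink 5; all capacities 3.
fixed = [[1, 2, 3, 2], [3, 4, 3, 1], [2, 5, 3, 0], [4, 5, 3, 0]]
unreliable = [([0, 1, 3, 0], 0.9),   # source -> A operates with prob 0.9
              ([0, 3, 3, 0], 0.8)]   # source -> B operates with prob 0.8
q = qos(6, fixed, unreliable, 0, 5, demand=3, time_limit=5)
```

With a limit of 5 time units only the states in which node B's route is available qualify (cheapest routing costs 3), so q = 0.72 + 0.08 = 0.8; raising the limit to 6 also admits the A-only state. The paper's contribution is avoiding exactly this exponential enumeration.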

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 222
673 Perception of Tactile Stimuli in Children with Autism Spectrum Disorder

Authors: Kseniya Gladun

Abstract:

Tactile stimulation of the dorsal side of the wrist can have a strong impact on our attitude toward physical objects, with both pleasant and unpleasant effects. This study explored different aspects of tactile perception to investigate atypical touch sensitivity in children with autism spectrum disorder (ASD). The study included 40 children with ASD and 40 healthy children aged 5 to 9 years. We recorded rsEEG (sampling rate of 250 Hz) during 20 min using the EEG amplifier “Encephalan” (Medicom MTD, Taganrog, Russian Federation) with 19 AgCl electrodes placed according to the International 10–20 System. The electrodes placed on the left and right mastoids served as joint references under unipolar montage. Recordings were analyzed from the following sites: frontal (Fp1-Fp2; F3-F4), anterior temporal (T3-T4), posterior temporal (T5-T6), parietal (P3-P4), and occipital (O1-O2). Subjects were passively touched with 4 types of tactile stimuli on the left wrist. The stimuli were presented at a velocity of about 3–5 cm per second. The stimulus materials and procedure were chosen to be the most "pleasant," "rough," "prickly" and "recognizable": a soft cosmetic brush ("pleasant"), a rough shoe brush ("rough"), a Wartenberg pinwheel roller ("prickly"), and, as the cognitive tactile stimulation, letters traced by finger, mostly from the participant’s name ("recognizable"). To designate stimulus onset and offset, we marked the moments when the touch began and ended; the stimulation was manual, and synchronization was not precise enough for event-related measures. EEG epochs were cleaned of eye movements by an ICA-based algorithm in the EEGLAB plugin for MatLab 7.11.0 (MathWorks Inc.). Muscle artifacts were cut out by manual data inspection. The response to tactile stimuli differed significantly between the children with ASD and the healthy children, and also depended on the type of tactile stimulus and the severity of ASD.
The amplitude of the alpha rhythm increased in the parietal region in response only to the pleasant stimulus; no amplitude difference was observed for the other stimulus types ("rough," "prickly", "recognizable"). The correlation dimension D2 was higher in healthy children than in children with ASD (main effect, ANOVA). In the ASD group, D2 was lower for pleasant and unpleasant stimuli compared to the background in the right parietal area. Hilbert-transform changes in the frequency of the theta rhythm were found, relative to healthy participants, only for rough tactile stimulation and only in the right parietal area. Children with autism spectrum disorders and healthy children thus responded differently to tactile stimulation, with specific frequency distributions of the alpha and theta bands in the right parietal area. Our data therefore support the hypothesis that rsEEG may serve as a sensitive index of the altered neural activity caused by ASD. Children with autism have difficulty in distinguishing the emotional stimuli ("pleasant," "rough," "prickly" and "recognizable").
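As an illustration of the Hilbert-transform step in this kind of analysis, the sketch below extracts the mean theta-band (4–8 Hz) envelope from a synthetic 250 Hz trace using an FFT band-pass followed by an FFT-based analytic signal (the same construction scipy.signal.hilbert uses). The signal and band edges are invented for the example; the study's own pipeline is not published here.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal: zero negative frequencies, double positives.
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def band_amplitude(x, fs, lo, hi):
    """Mean instantaneous amplitude in [lo, hi] Hz: FFT band-pass + Hilbert."""
    n = len(x)
    freqs = np.fft.fftfreq(n, 1 / fs)
    X = np.fft.fft(x)
    X[(np.abs(freqs) < lo) | (np.abs(freqs) > hi)] = 0   # crude brick-wall filter
    banded = np.real(np.fft.ifft(X))
    return np.abs(analytic_signal(banded)).mean()

# Synthetic check: a 6 Hz (theta) tone of amplitude 2 mixed with a 25 Hz tone.
fs = 250.0                          # the study's sampling rate
t = np.arange(0, 4, 1 / fs)
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 25 * t)
theta_amp = band_amplitude(eeg, fs, 4, 8)
```

The recovered theta envelope is ~2, the amplitude of the buried 6 Hz component; comparing such band envelopes between conditions and groups is the essence of the reported theta analysis.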

Keywords: autism, tactile stimulation, Hilbert transform, pediatric electroencephalography

Procedia PDF Downloads 253
672 Hybrid versus Cemented Fixation in Total Knee Arthroplasty: Mid-Term Follow-Up

Authors: Pedro Gomes, Luís Sá Castelo, António Lopes, Marta Maio, Pedro Mota, Adélia Avelar, António Marques Dias

Abstract:

Introduction: Total Knee Arthroplasty (TKA) has contributed to the improvement of patients’ quality of life, although it has been associated with some complications, including component loosening and polyethylene wear. To prevent these complications, various fixation techniques have been employed. Hybrid TKA, with a cemented tibial and a cementless femoral component, has shown favourable outcomes, although consensus in the literature is still lacking. Objectives: To evaluate the clinical and radiographic results of hybrid versus cemented TKA at an average 5 years of follow-up, and to analyse the survival rates. Methods: A retrospective study of 125 TKAs performed in 92 patients at our institution between 2006 and 2008, with a minimum follow-up of 2 years. The same prosthesis was used in all knees. Hybrid TKA fixation was performed in 96 knees, with a mean follow-up of 4.8 ± 1.7 years (range, 2–8.3 years), and 29 TKAs received fully cemented fixation, with a mean follow-up of 4.9 ± 1.9 years (range, 2–8.3 years). Selection for hybrid fixation was nonrandomized and based on femoral component fit. The Oxford Knee Score (OKS, 0-48) was used for clinical assessment, and the Knee Society Roentgenographic Evaluation Scoring System for the radiographic outcome. The survival rate was calculated using the Kaplan-Meier method, with failure defined as revision of either the tibial or the femoral component, for aseptic failures and for all causes (aseptic and infection). Analysis of survivorship data was performed using the log-rank test. SPSS (v22) was used for statistical analysis. Results: The hybrid group consisted of 72 females (75%) and 24 males (25%), with a mean age of 64 ± 7 years (range, 50-78 years). The preoperative diagnosis was osteoarthritis (OA) in 94 knees (98%), rheumatoid arthritis (RA) in 1 knee (1%) and posttraumatic arthritis (PTA) in 1 knee (1%). The fully cemented group consisted of 23 females (79%) and 6 males (21%), with a mean age of 65 ± 7 years (range, 47-78 years).
The preoperative diagnosis was OA in 27 knees (93%) and PTA in 2 knees (7%). The Oxford Knee Scores were similar between the 2 groups (hybrid 40.3 ± 2.8 versus cemented 40.2 ± 3). The percentage of radiolucencies seen on the femoral side was slightly higher in the cemented group (20.7%) than in the hybrid group (11.5%, p = 0.223). In the cemented group there were significantly more Zone 4 radiolucencies than in the hybrid group (13.8% versus 2.1%, p = 0.026). Revisions for all causes were performed in 4 of the 96 hybrid TKAs (4.2%) and in 1 of the 29 cemented TKAs (3.5%). The reason for revision was aseptic loosening in 3 hybrid TKAs and in 1 cemented TKA; revision was performed for infection in 1 hybrid TKA. The hybrid group demonstrated a 7-year survival rate of 93% for all-cause failures and 94% for aseptic loosening. No significant difference in survivorship was seen between the groups for all-cause or aseptic failures. Conclusions: Hybrid TKA yields intermediate-term results and survival rates similar to fully cemented total knee arthroplasty and remains a viable option in knee joint replacement surgery.
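For readers unfamiliar with the survival analysis used here, the sketch below is a minimal Kaplan-Meier estimator run on an invented toy cohort (the times and events are hypothetical, not the study's data): at each revision time the survival estimate is multiplied by (1 - d/n), where d is the number of failures and n the number still at risk.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  -- follow-up in years for each knee
    events -- 1 if revised (failure), 0 if censored at that time
    Returns a list of (time, survival) at each failure time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_this_time = sum(1 for tt, e in data if tt == t)
        if deaths:
            s *= 1 - deaths / n_at_risk      # the Kaplan-Meier product step
            curve.append((t, s))
        n_at_risk -= at_this_time            # failures and censored leave risk set
        i += at_this_time
    return curve

# Toy cohort: 10 knees, revisions at 2 and 5 years, the rest censored.
times  = [1.5, 2.0, 3.0, 4.0, 4.5, 5.0, 6.0, 6.5, 7.0, 8.0]
events = [0,   1,   0,   0,   0,   1,   0,   0,   0,   0]
curve = kaplan_meier(times, events)
```

Here survival drops to 8/9 after the 2-year revision (9 knees still at risk) and to 8/9 × 4/5 after the 5-year revision, illustrating how censored knees reduce the risk set without counting as failures.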

Keywords: hybrid, survival rate, total knee arthroplasty, orthopaedic surgery

Procedia PDF Downloads 595
671 The Impact of Türkiye’s Decision-Making Mechanism on the Transformation in Türkiye-Syria Relations (2002-2024)

Authors: Ibrahim Akkan

Abstract:

This study analyses the transformation of Türkiye's Syria policy between 2002 and 2024 and the impact of domestic political dynamics on this process. Since the collapse of the Ottoman Empire, Türkiye and Syria have long had a tense relationship owing to border issues, water sharing, security concerns and the activities of terrorist organizations. However, the process that started with the Adana Agreement in 1998 gained momentum when the Justice and Development Party (Ak Party) came to power in 2002, and a historic period of rapprochement began between the two countries. During this period, Türkiye adopted the concept of “zero problems with neighbors” in its foreign policy and deepened its strategic partnerships in the region. Turkish-Syrian relations also developed within this framework: the trade volume between the two countries increased, and cooperation was strengthened through mutual visits and diplomatic agreements. However, the Arab Spring that started in 2011 was a sharp turning point in Turkish-Syrian relations. The harsh stance of the Bashar al-Assad administration against the popular uprisings in Syria led Türkiye to oppose Assad and support opposition groups. This process led to the severing of diplomatic ties between the two countries and the gradual deterioration of relations until 2024. After the Arab Spring, Türkiye directly intervened in the civil war in Syria and conducted military operations in northern Syria that highlighted its security policies. The main purpose of this study is to examine the transformation in Türkiye's Syria policies between 2002 and 2024 and to analyze the role of Türkiye's domestic political dynamics in this transformation. The main research question is how domestic political actors in Türkiye, especially decision-makers (leaders, governments, political parties), shape foreign policy.
In this context, the extent to which the leadership of the Ak Party government is decisive in decision-making processes and how the impact of domestic dynamics on foreign policy emerges will be studied. In this study, how both the pressures of the international system and domestic political dynamics shape foreign policy will be analyzed using the theoretical framework of neoclassical realism. How decision-making processes are decisive in foreign policy will be examined through a case study specific to Türkiye-Syria relations. In addition, the strategic preferences of leaders such as Recep Tayyip Erdoğan and Ahmet Davutoğlu in foreign policy and how these preferences overlap with developments in domestic politics will be evaluated using the discourse analysis method. This study aims to make a new contribution to the literature by providing a comprehensive analysis of how domestic dynamics shape foreign policy in Türkiye-Syria relations.

Keywords: decision-making mechanisms, foreign policy analysis, neoclassical realism, Syria, Türkiye

Procedia PDF Downloads 15
670 2,7-Diazaindole as a Photophysical Probe for Excited State Hydrogen/Proton Transfer

Authors: Simran Baweja, Bhavika Kalal, Surajit Maity

Abstract:

Photoinduced tautomerization reactions have been the centre of attention in the scientific community over the past several decades because of their significance in various biological systems. 7-Azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. To the best of our knowledge, extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of the above molecule, like 2,7- and 2,6-diazaindole, are proposed to have even better photophysical properties due to the presence of an additional aza group at the 2-position. Solution-phase studies suggest the relevance of these molecules, but no gas-phase experimental studies have been reported yet. In our current investigation, we present the first gas-phase spectroscopic data on 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). We have employed state-of-the-art laser spectroscopic methods such as laser-induced fluorescence excitation (LIF), dispersed fluorescence (DF), two-colour resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of bare 2,7-DAI is found at 33910 cm-1, whereas the origin band of the S1 ← S0 transition of 2,7-DAI-H2O is at 33074 cm-1. The red-shifted transition in the case of the solvent cluster suggests the enhanced feasibility of excited-state hydrogen/proton transfer.
The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, significantly higher than that previously reported for 7AI (8.11 eV), making it a comparatively difficult molecule to study. The ionization potential is reduced by 0.14 eV in the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, in comparison with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red-shifted by 729 and 280 cm-1, respectively. The ground- and excited-state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared (FDIR) and resonant ion-dip infrared (IDIR) spectra and obtained at 3523 and 3467 cm-1, respectively. The lower value of νNH in the electronically excited state of 2,7-DAI implies the higher acidity of the group compared to the ground state. Moreover, extensive computational analysis suggests that the energy barrier in the excited state reduces significantly as we increase the number of catalytic solvent molecules (S = H2O, NH3) as well as the polarity of the solvent molecules. We found that the ammonia molecule is a better candidate for hydrogen transfer than water because of its higher gas-phase basicity. Further studies are underway to understand the excited-state dynamics and photochemistry of such N-rich chromophores.
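A quick bookkeeping sketch of the band positions quoted above; all numbers are taken from the abstract, and the only outside value is the standard cm-1-per-eV conversion factor.

```python
# Standard conversion: 1 eV ~ 8065.54 cm^-1.
CM1_PER_EV = 8065.54

origin_dai = 33910.0       # S1 <- S0 origin of bare 2,7-DAI, cm^-1
origin_dai_h2o = 33074.0   # S1 <- S0 origin of the 2,7-DAI-H2O cluster, cm^-1

# Red shift of the cluster origin relative to the bare molecule:
shift = origin_dai_h2o - origin_dai     # -836 cm^-1
shift_ev = shift / CM1_PER_EV           # about -0.10 eV stabilization of S1

# Lowering of the ionization potential on hydration, from the quoted values:
ip_drop_ev = 8.92 - 8.78                # 0.14 eV
ip_drop_cm1 = ip_drop_ev * CM1_PER_EV   # about 1.1e3 cm^-1
```

The ~836 cm-1 red shift of the hydrated cluster relative to bare 2,7-DAI is the spectroscopic signature of the excited state being stabilized by the water molecule, consistent with the enhanced proton/hydrogen-transfer feasibility argued above.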

Keywords: excited state hydrogen transfer, supersonic expansion, gas phase spectroscopy, IR-UV double resonance spectroscopy, laser induced fluorescence, photoionization efficiency spectroscopy

Procedia PDF Downloads 75
669 A Geographical Information System Supported Method for Determining Urban Transformation Areas in the Scope of Disaster Risks in Kocaeli

Authors: Tayfun Salihoğlu

Abstract:

Following Law No. 6306 on the Transformation of Areas under Disaster Risk, urban transformation in Turkey found its legal basis. In best practices all over the world, urban transformation has been shaped as part of comprehensive social programs, through the discourses of renewing the economically, socially and physically degraded parts of the city, producing spaces resistant to earthquakes and other possible disasters, and creating a livable environment. In Turkish practice, a contradictory process is observed. This study aims to develop a method for better understanding urban space in terms of disaster risks, in order to constitute a basis for decisions in the Kocaeli Urban Transformation Master Plan being prepared by Kocaeli Metropolitan Municipality. The spatial unit used in the study is a 50 x 50 m grid cell. In order to reflect the multidimensionality of urban transformation, three basic components with spatial data in Kocaeli were identified. These components were named 'Problems in Built-up Areas', 'Disaster Risks arising from Geological Conditions of the Ground and Problems of Buildings', and 'Inadequacy of Urban Services'. Each component was weighted and scored for each grid cell. In order to delimit urban transformation zones, Optimized Outlier Analysis (Local Moran's I) was conducted in ArcGIS 10.6.1 to test the type of distribution (clustered or scattered) and its significance, using the weighted total score of each grid cell as the input feature. As a result of this analysis, it was found that the weighted total scores did not cluster significantly in all grid cells. The grid cells in which the input feature clustered significantly were exported as a new database for use in further mapping.
The Total Score Map reflects the significant clusters in terms of the weighted total scores of 'Problems in Built-up Areas', 'Disaster Risks arising from Geological Conditions of the Ground and Problems of Buildings' and 'Inadequacy of Urban Services'. The resulting grid cells with the highest scores are the most likely candidates for urban transformation in this citywide study. To categorize urban space in terms of urban transformation, Grouping Analysis in ArcGIS 10.6.1 was applied to the component scores of the significantly clustered grid cells. Based on the pseudo statistics and box plots, 6 groups with the highest F statistics were extracted. The mapping of the groups suggests that these 6 groups can be interpreted meaningfully in relation to the urban space. The method presented in this study can be extended as more spatial data become available. By integrating other data obtained during the planning process, this method can contribute to the research and decision-making processes of urban transformation master plans on a more consistent basis.
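The Local Moran's I statistic behind the clustering step can be sketched on a toy grid. The version below uses row-standardized rook (4-neighbour) weights and is a hypothetical simplification of the ArcGIS tool, which additionally computes permutation-based significance; the 6x6 score surface is invented.

```python
import numpy as np

def local_morans_i(grid):
    """Local Moran's I per cell with row-standardized rook (4-neighbour) weights.

    Positive values flag cells whose score resembles their neighbours'
    (high-high or low-low clusters); negative values flag spatial outliers.
    """
    z = grid - grid.mean()
    var = (z * z).mean()
    rows, cols = grid.shape
    I = np.zeros_like(grid, dtype=float)
    for r in range(rows):
        for c in range(cols):
            nbrs = [z[rr, cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < rows and 0 <= cc < cols]
            # z_i times the mean deviation of its neighbours, scaled by variance.
            I[r, c] = z[r, c] * (sum(nbrs) / len(nbrs)) / var
    return I

# Toy 6x6 weighted-total-score surface: a high-score cluster in one corner.
scores = np.zeros((6, 6))
scores[:2, :2] = 10.0
I = local_morans_i(scores)
```

The corner cells of the high-score patch get large positive I (a high-high cluster, the urban-transformation candidates), the uniform low-score interior gets small positive I (low-low), and the cells on the boundary between the two regimes go negative, marking spatial outliers.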

Keywords: urban transformation, GIS, disaster risk assessment, Kocaeli

Procedia PDF Downloads 120
668 Experimental Investigation of Absorbent Regeneration Techniques to Lower the Cost of Combined CO₂ and SO₂ Capture Process

Authors: Bharti Garg, Ashleigh Cousins, Pauline Pearson, Vincent Verheyen, Paul Feron

Abstract:

The presence of SO₂ in power plant flue gases makes flue gas desulfurization (FGD) an essential requirement prior to post-combustion CO₂ capture (PCC) facilities. Although most power plants worldwide deploy FGD in order to comply with environmental regulations, the achieved SO₂ levels are generally not low enough for the flue gases to enter the PCC unit. The SO₂ level in the flue gases needs to be less than 10 ppm to operate the PCC installation effectively. The existing FGD units alone cannot bring the SO₂ levels down to or below the 10 ppm required for CO₂ capture; an additional scrubber alongside the existing FGD unit might be required to bring the SO₂ to the desired levels. The absence of FGD units in Australian power plants brings an additional challenge: SO₂ concentrations in Australian power station flue gas emissions are in the range of 100-600 ppm. This imposes a serious barrier to the implementation of standard PCC technologies in Australia. CSIRO's CS-Cap process is a unique solution that captures SO₂ and CO₂ in a single column with a single absorbent, which can potentially make the commercial deployment of carbon capture in Australia cost-effective by removing the need for FGD. The estimated savings of removing SO₂ through a process similar to CS-Cap are around USD 200 million for a 500 MW Australian power plant. Pilot plant trials conducted to generate the proof of concept resulted in 100% removal of SO₂ from flue gas without utilising standard limestone-based FGD. In this work, the removal of absorbed sulfur from the aqueous amine absorbents generated in the pilot plant trials has been investigated by reactive crystallisation and thermal reclamation. More than 95% of the aqueous amines can be reclaimed from the sulfur-loaded absorbent via reactive crystallisation. However, the recovery of amines through thermal reclamation is limited and depends on the sulfur loading of the spent absorbent.
The initial experimental work revealed that reactive crystallisation is a better fit for CS-Cap's sulfur-rich absorbent, especially as it is also capable of generating K₂SO₄ crystals of highly saleable quality (~99%). Initial cost estimates for both technologies indicated almost similar capital expenditure; however, the operating cost is considerably higher for the thermal reclaimer than for the crystalliser. The experimental data generated in the laboratory from both regeneration techniques have been used to build a simulation model in Aspen Plus. The simulation model illustrates the economic benefits which could be gained by removing flue gas desulfurization prior to a standard PCC unit and replacing it with a CS-Cap absorber column co-capturing CO₂ and SO₂, together with its absorbent regeneration system, which would be either reactive crystallisation or thermal reclamation.

Keywords: combined capture, cost analysis, crystallisation, CS-Cap, flue gas desulfurisation, regeneration, sulfur, thermal reclamation

Procedia PDF Downloads 128
667 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review

Authors: R. Christopher, C. Brandt, N. Damons

Abstract:

Background: The sequence of prevention model states that injury mechanisms and risk factors are identified by constant assessment of injury, highlighting that collecting and recording data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the data regarding their applicability, validity, and reliability remain scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, Cinahl, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English-language studies dating back to the year 2000 were included. Two blinded independent reviewers selected and appraised articles on a 9-point scale for inclusion, as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included for the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively.
Seven studies were graded as low risk and three as high risk of bias. Only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The screening tools showed high reliability, sensitivity and specificity (ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implications in evidence-based practice.
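The kind of pooling with confidence intervals and an I² heterogeneity statistic reported above can be sketched with a fixed-effect inverse-variance model. The per-study counts below are hypothetical illustrations, not data from the review, and the review's own pooling method may differ.

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of per-study proportions
    (e.g. sensitivities), with a 95% Wald CI and the I2 statistic
    (share of between-study variability beyond sampling error)."""
    p = [e / n for e, n in zip(events, totals)]
    var = [pi * (1 - pi) / n for pi, n in zip(p, totals)]
    w = [1 / v for v in var]                       # inverse-variance weights
    pooled = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    se = math.sqrt(1 / sum(w))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Cochran's Q, then I2 = max(0, (Q - df) / Q)
    q = sum(wi * (pi - pooled) ** 2 for wi, pi in zip(w, p))
    df = len(p) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

# hypothetical: true positives and injured players per study
sens, ci, i2 = pooled_proportion([40, 55, 30], [60, 80, 50])
print(round(sens, 2), [round(x, 3) for x in ci], round(i2, 1))
```

A full analysis would pool on the logit scale and use a random-effects model when I² is large; this sketch only shows the mechanics.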

Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity

Procedia PDF Downloads 180
666 Existential Affordances and Psychopathology: A Gibsonian Analysis of Dissociative Identity Disorder

Authors: S. Alina Wang

Abstract:

A Gibsonian approach is used to understand the existential dimensions of the human ecological niche. This existential-Gibsonian framework is then applied to rethinking Hacking's historical analysis of multiple personality disorder, culminating in a generalized account of psychiatric illness from an enactivist lens. The paper concludes with reflections on the implications of this account for approaches to psychiatric treatment. J. J. Gibson's theory of affordances (1979) centered on affordances of sensorimotor varieties, which guide basic behaviors relative to organisms' vital needs and physiological capacities. Later theorists, notably Neisser (1988) and Rietveld (2014), expanded the theory of affordances to account for uniquely human activities relative to the emotional, intersubjective, cultural, and narrative aspects of the human ecological niche. This research shows that these affordances are structured by what Haugeland (1998) calls existential commitments, drawing on Heidegger's notion of Dasein (1927) and Merleau-Ponty's account of existential freedom (1945). These commitments organize the existential affordances that fill an individual's environment and guide their thoughts, emotions, and behaviors. This system of a priori existential commitments and a posteriori affordances is called existential enactivism. For humans, affordances do not only elicit motor responses and appear as objects with instrumental significance; they also, and possibly primarily, determine so-called affective and cognitive activities and structure the wide range of kinds (e.g., instrumental, aesthetic, ethical) of significance that objects in the world can have. Existential enactivism is then applied to understanding the psychiatric phenomenon of multiple personality disorder (the precursor of the current diagnosis of dissociative identity disorder).
A reinterpretation of Hacking's (1998) insights into the history of this particular disorder, and of his generalizations on the constructed nature of most psychiatric illness, is undertaken. Enactivist approaches sensitive to existential phenomenology can provide a deeper understanding of these matters. Conceptualizing psychiatric illness strictly as a disorder in the head (whether parsed as a disorder of brain chemicals or of meaning-making capacities encoded in psychological modules) is incomplete. Rather, psychiatric illness must also be understood as a disorder in the world, or in the interconnected networks of existential affordances that regulate one's emotional, intersubjective, and narrative capacities. All of this suggests that an adequate account of psychiatric illness must involve (1) the affordances that are the sources of existential hindrance, (2) the existential commitments structuring these affordances, and (3) the conditions of these existential commitments. Approaches to the treatment of psychiatric illness would be more effective if they centered on interrupting the normalized behaviors corresponding to affordances identified as sources of hindrance, developing new existential commitments, and practicing new behaviors that erect affordances relative to these reformed commitments.

Keywords: affordance, enaction, phenomenology, psychiatry, psychopathology

Procedia PDF Downloads 138
665 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group

Authors: Antonio Martín-Pardo

Abstract:

Regarding the context in which this paper is inserted, it is noteworthy that the current criminal policy model in which we find ourselves immersed, which some scholars term the citizen security model, is characterized by a marked tendency to discredit expert knowledge. This kind of technical knowledge has been displaced, at the time of legislative drafting, by common sense and by people's daily experience, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal-political scene, we still find valuable efforts on the side of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain carried out by the Cannabis Policy Studies Group (hereinafter referred to as 'GEPCA'). GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative solution must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines through which this technical proposal is developed, with the purpose of its dissemination and discussion at the Congress. The basic methodology of the proposal is inductive-expository. First, we offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization touches on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Second, we address the proposal proper by detailing the three main cannabis access channels that are proposed.
Namely: the regulated market, associations of cannabis users, and personal self-cultivation. For each of these options, especially the first two, special attention is paid both to the production and processing of the substance and to the necessary administrative control of the activity. Finally, in a third block, some notes are given on a series of subjects that surround the different access options just mentioned and that give fullness and coherence to the proposal: consumption and possession of the substance; advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; the need to articulate evaluation instruments for the entire process; etc. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, clearly unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it.

Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation

Procedia PDF Downloads 120
664 Primary-Color Emitting Photon Energy Storage Nanophosphors for Developing High Contrast Latent Fingerprints

Authors: G. Swati, D. Haranath

Abstract:

Commercially available long-afterglow/persistent phosphors are proprietary materials; hence the exact composition and phase responsible for their luminescent characteristics, such as initial intensity and afterglow luminescence time, are not known. Further, to generate various emission colors, commercially available persistent phosphors are physically blended with fluorescent organic dyes such as rhodamine, kiton and methylene blue. Blending phosphors with organic dyes achieves complete color coverage of the visible spectrum; however, with time, such phosphors undergo thermal and photo-bleaching, which results in the loss of their true emission color. Hence, the current work is dedicated to studies of inorganic, thermally and chemically stable, primary-color-emitting nanophosphors, namely SrAl₂O₄:Eu²⁺,Dy³⁺, (CaZn)TiO₃:Pr³⁺, and Sr₂MgSi₂O₇:Eu²⁺,Dy³⁺. The SrAl₂O₄:Eu²⁺,Dy³⁺ phosphor exhibits strong excitation in the UV and visible region (280-470 nm) with a broad emission peak centered at 514 nm, the characteristic emission of the parity-allowed 4f⁶5d¹→4f⁷ transitions of Eu²⁺ (⁸S₇/₂→²D₅/₂). Sunlight-excitable Sr₂MgSi₂O₇:Eu²⁺,Dy³⁺ nanophosphors emit blue light (464 nm) with Commission Internationale de l'Éclairage (CIE) coordinates of (0.15, 0.13), a color purity of 74%, and an afterglow time of > 5 hours for dark-adapted human eyes. The (CaZn)TiO₃:Pr³⁺ phosphor system possesses high color purity (98%) and emits intense, stable and narrow red emission at 612 nm due to intra-4f transitions (¹D₂→³H₄), with an afterglow time of 0.5 hour. The unusual persistent luminescence of these nanophosphors supersedes background effects without losing sensitive information, and they offer the advantages of visible-light excitation, negligible substrate interference, high-contrast bifurcation of the ridge pattern, and a non-toxic nature, revealing the ridge details of fingerprints.
Both level 1 and level 2 features of a fingerprint can be studied, which are useful for classification, indexing, comparison and personal identification. A facile methodology to extract high-contrast fingerprints on non-porous and porous substrates, using a chemically inert, visible-light-excitable, nanosized phosphorescent label in the dark, has been presented. The chemistry of the non-covalent physisorption interaction between the long-afterglow phosphor powder and the sweat residue in fingerprints is discussed in detail. Real-time fingerprint development on porous and non-porous substrates has also been performed. To conclude, apart from conventional dark-vision applications, the as-prepared primary-color-emitting afterglow phosphors are potential candidates for developing high-contrast latent fingerprints.
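The color purity figures quoted above follow from the CIE 1931 chromaticity diagram: excitation purity is the distance from the white point to the sample coordinate divided by the distance from the white point to the spectral locus along the same direction. The sketch below uses the blue phosphor's reported (0.15, 0.13) coordinate; the spectral-locus point and equal-energy white point are assumed values for illustration, not taken from the paper.

```python
import math

def color_purity(sample, locus_point, white=(1/3, 1/3)):
    """Excitation (color) purity from CIE 1931 (x, y) coordinates:
    ratio of white-to-sample distance to white-to-locus distance.
    Assumes the locus point lies on the line from the white point
    through the sample (the dominant-wavelength direction)."""
    return math.dist(white, sample) / math.dist(white, locus_point)

# sample: blue phosphor at (0.15, 0.13) as reported in the abstract;
# locus point: assumed coordinate for the matching dominant wavelength
purity = color_purity((0.15, 0.13), (0.135, 0.078))
print(f"{purity * 100:.1f}%")
```

Reproducing the reported 74% exactly would require the tabulated CIE spectral-locus coordinates for the true dominant wavelength; the assumed locus point here only demonstrates the calculation.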

Keywords: fingerprints, luminescence, persistent phosphors, rare earth

Procedia PDF Downloads 222
663 Catalytic Ammonia Decomposition: Cobalt-Molybdenum Molar Ratio Effect on Hydrogen Production

Authors: Elvis Medina, Alejandro Karelovic, Romel Jiménez

Abstract:

Catalytic ammonia decomposition represents an attractive alternative due to, among other factors, its high H₂ content (17.8% w/w) and a product stream free of COₓ; however, challenges need to be addressed for its consolidation as an H₂ chemical storage technology, especially those focused on the synthesis of efficient bimetallic catalytic systems as an alternative to the price and scarcity of ruthenium, the most active catalyst reported. In this sense, from the perspective of rational catalyst design, adjusting the main catalytic activity descriptor, a screening of supported catalysts with different compositional settings of cobalt and molybdenum is presented to evaluate their effect on the catalytic decomposition rate of ammonia. Subsequently, a kinetic study on the supported monometallic Co and Mo catalysts, as well as on the most active bimetallic CoMo catalyst, is shown. The synthesis of catalysts supported on γ-alumina was carried out using the Charge Enhanced Dry Impregnation (CEDI) method, all with 5% w/w metal loading. Seeking to maintain uniform dispersion, the catalysts were oxidized and activated (in-situ activation) using flows of anhydrous air and hydrogen, respectively, under the same conditions: 40 ml min⁻¹ and 5 °C min⁻¹ from room temperature to 600 °C. Catalytic tests were carried out in a fixed-bed reactor, confirming the absence of transport limitations as well as a negligible approach to equilibrium (< 1 x 10⁻⁴). The reaction rate on all catalysts was measured between 400 and 500 °C at 53.09 kPa NH₃. The synergy theoretically reported (by DFT) for bimetallic catalysts was confirmed experimentally. Specifically, the catalyst composed of 75 mol% cobalt proved to be the most active in the experiments, followed by the monometallic cobalt and molybdenum catalysts, in the order of activity referred to in the literature.
A kinetic study was performed at 10.13-101.32 kPa NH₃ and at four equidistant temperatures between 437 and 475 °C. The data were fitted to an LHHW-type model, which considered the desorption of nitrogen atoms from the active-phase surface as the rate-determining step (RDS). The regression analysis was carried out in an integral regime, using a minimization algorithm based on SLSQP. The physical meaning of the parameters fitted in the kinetic model, namely the RDS rate constant (k₅) and the lumped adsorption constant of the quasi-equilibrated steps (α), was confirmed through their Arrhenius and Van 't Hoff-type behavior (R² > 0.98), respectively. From an energetic perspective, the activation energies for cobalt, cobalt-molybdenum, and molybdenum were 115.2, 106.8, and 177.5 kJ mol⁻¹, respectively. With this evidence, and considering the volcano shape described by the ammonia decomposition rate as a function of the metal composition ratio, the synergistic behavior of the system is clearly observed. However, since characterizations by XRD and TEM were inconclusive, the formation of intermetallic compounds still needs to be verified using HRTEM-EDS. From this point onwards, our objective is to incorporate into the kinetic expressions parameters that consider both compositional and structural elements, and to explore how these can maximize or influence H₂ production.
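The Arrhenius check used to confirm the physical meaning of the fitted rate constant can be sketched as a linear fit of ln k against 1/T, whose slope gives -Ea/R. The rate constants below are synthetic, generated for an assumed Ea of 107 kJ mol⁻¹ (the order of the reported Co-Mo value), not the study's measured data.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# four equidistant temperatures in the study's 437-475 C window, in K
T = np.array([437.0, 449.7, 462.3, 475.0]) + 273.15

# synthetic rate constants obeying k = A * exp(-Ea / (R T));
# A (pre-exponential factor) is an arbitrary illustrative value
Ea_true, A = 107e3, 2.0e9
k = A * np.exp(-Ea_true / (R * T))

# Arrhenius plot: ln k vs 1/T is linear with slope -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R / 1e3  # back to kJ mol^-1
print(round(Ea_fit, 1))    # recovers the assumed 107.0 kJ/mol
```

With real data the fit would not be exact, and the R² of this line is precisely the check (R² > 0.98) the abstract reports for k₅.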

Keywords: CEDI, hydrogen carrier, LHHW, RDS

Procedia PDF Downloads 61
662 Numerical Investigation of the Boundary Conditions at Liquid-Liquid Interfaces in the Presence of Surfactants

Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji

Abstract:

Liquid-liquid interfacial flow is an important process with applications across many spheres. One such application is residual oil mobilization, where crude oil and low salinity water are emulsified, due to lowered interfacial tension, under conditions of low shear rates. The amphiphilic components (asphaltenes and resins) in crude oil are considered to assemble at the interface between the two immiscible liquids. To justify emulsification, drag and snap-off suppression as the main effects of low salinity water, the mobilization of residual oil is visualized as thickening and slip of the wetting phase at the brine/crude oil interface, which results in the squeezing and dragging of the non-wetting phase toward the pressure sinks. Meanwhile, defining the boundary conditions for such a system can be very challenging, since the interfacial dynamics depend not only on interfacial tension but also on the flow rate. Hence, understanding the flow boundary condition at the brine/crude oil interface is an important step towards defining the influence of low salinity water composition on residual oil mobilization. This work presents a numerical evaluation of three slip boundary conditions that may apply at liquid-liquid interfaces. A mathematical model was developed to describe the evolution of a viscoelastic interfacial thin liquid film. The base model is developed by asymptotic expansion of the full Navier-Stokes equations for fluid motion due to gradients of surface tension. This model was upscaled to describe the dynamics of the film surface deformation. Subsequently, Jeffreys' model was integrated into the formulation to account for viscoelastic stress within a long-wave approximation of the Navier-Stokes equations. To study the fluid response to a prescribed disturbance, a linear stability analysis (LSA) was performed, and the dispersion relation and the corresponding characteristic equation for the growth rate were obtained.
Three boundary conditions (slip, 1; locking, -1; and no-slip, 0) were examined using the resulting characteristic equation. The dynamics of the evolved interfacial thin liquid film were also numerically evaluated under each boundary condition. The linear stability analysis shows that the boundary conditions of such systems are greatly impacted by the presence of amphiphilic molecules when three different values of interfacial tension are tested. The results for the slip and locking conditions are consistent with the fundamental-solution representation of the diffusion equation, where the film decays. Under both of these boundary conditions, the interfacial films respond to exposure time in a similar manner, with an increasing growth rate that results in the formation of more droplets with time. In contrast, the no-slip boundary condition yielded unbounded growth and was not affected by interfacial tension.

Keywords: boundary conditions, liquid-liquid interfaces, low salinity water, residual oil mobilization

Procedia PDF Downloads 130
661 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India

Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha

Abstract:

Landslides are among the major natural disasters in South Asian countries. By applying 2D electrical resistivity tomographic imaging, the geometry, thickness, and depth of the failure zone of a landslide can be estimated. Landslides are a pertinent problem in the Nilgiris plateau, next only to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks; the intense weathering that prevailed during Precambrian time deformed the rocks to a depth of up to 45 m. Landslides are dominant in the southern and eastern parts of the plateau, where the drainage basins are comparatively smaller than the northern ones; their low drainage density and coarse texture permit greater infiltration of rainwater. The northern part of the plateau, by contrast, has a high-density drainage pattern and fine texture, with less infiltration than runoff, and is less susceptible to landslides. To obtain comprehensive information about the landslide zone, a 2D electrical resistivity tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor-Mettupalayam sector of the Nilgiris plateau. To calculate the factor of safety, the infinite slope model of Brunsden and Prior is used. The factor of safety (FS) can be expressed as the ratio of resisting forces to disturbing forces; if FS < 1, the disturbing forces are larger than the resisting forces and failure may occur. The geotechnical parameters of the soil samples are calculated on the basis of the apparent resistivity values of the litho-units measured from the 2D ERT image of the landslide zone. The relationship between friction angle and various soil properties is established by simple regression analysis of the apparent resistivity data. An increase of water content in the slide zone reduces the effectiveness of the shearing resistance and increases the sliding movement. Time-lapse resistivity changes leading to slope failure are tracked through a geophysical factor of safety, which depends on resistivity and site topography.
The ERT technique infers soil properties at variable depths over wide areas. This approach to retrieving soil properties overcomes the limitation of the point information provided by rain gauges and porous probes. Monitoring slope stability through the ERT technique is non-invasive and low-cost, and does not alter the soil structure. In landslide-prone areas, an automated electrical resistivity tomographic imaging system with electrode networks should be installed permanently to monitor the hydraulic precursors of landslide movement.
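The infinite slope model behind the factor of safety can be sketched directly: FS is the ratio of shear resistance (cohesion plus effective-stress friction on the slide plane) to the downslope driving stress. The parameter values below are illustrative, not the study's site data, and the saturation fraction m is used here simply to show how rising water content pushes FS below 1.

```python
import math

def infinite_slope_fs(c, phi_deg, slope_deg, gamma, depth,
                      m=0.0, gamma_w=9.81):
    """Factor of safety for an infinite planar slope.

    c        : soil cohesion (kPa)
    phi_deg  : friction angle (degrees)
    slope_deg: slope angle (degrees)
    gamma    : soil unit weight (kN/m^3)
    depth    : depth of the failure plane (m)
    m        : saturated fraction of the failure depth (0 dry, 1 full)
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    # resisting: cohesion + (buoyancy-reduced) normal stress * tan(phi)
    resisting = (c + (gamma - m * gamma_w) * depth
                 * math.cos(beta) ** 2 * math.tan(phi))
    # disturbing: downslope component of the soil column's weight
    disturbing = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / disturbing

dry = infinite_slope_fs(c=10.0, phi_deg=30, slope_deg=35,
                        gamma=18.0, depth=3.0, m=0.0)
wet = infinite_slope_fs(c=10.0, phi_deg=30, slope_deg=35,
                        gamma=18.0, depth=3.0, m=1.0)
print(dry > 1.0 > wet)  # saturation drives this slope past failure
```

In the study's workflow the inputs (friction angle, unit weight) would come from the resistivity-regression relationships rather than being assumed.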

Keywords: 2D ERT, landslide, safety factor, slope stability

Procedia PDF Downloads 320
660 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids

Authors: Ayalew Yimam Ali

Abstract:

The Y-shaped microchannel system is used to mix fluids of low or high viscosity, and under laminar flow with high-viscosity water-glycerol fluids, mixing at the entrance Y-junction region is a challenging issue. Acoustic streaming (AS), a time-averaged, steady, second-order flow phenomenon, can produce a rolling motion in the microchannel when a low-frequency acoustic transducer induces an acoustic wave in the flow field; it is a promising strategy to enhance diffusive mass transfer and mixing performance in laminar flow. In this study, molds for the 3D trapezoidal structure, with 30° sharp-edge tip angles and 0.3 mm spine sharp-edge tip depth, were manufactured from PMMA (polymethyl methacrylate) using advanced CNC cutting tools, and the microchannel was fabricated in PDMS (polydimethylsiloxane), with the structure extending longitudinally along the top surface of the Y-junction mixing region, in order to visualize the 3D rolling steady acoustic streaming and to evaluate mixing performance with high-viscosity miscible fluids. The 3D acoustic streaming flow patterns and mixing enhancement were investigated using the micro-particle image velocimetry (μPIV) technique for different spine depths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The velocity and vorticity flow fields show that a pair of 3D counter-rotating streaming vortices is created around the trapezoidal spine structure, with vorticity maps up to 8 times higher than in the case without acoustic streaming in the Y-junction with the high-viscosity water-glycerol mixture.
The mixing experiments were performed using a fluorescent green dye solution in de-ionized water on one inlet side and de-ionized water-glycerol at different mass-weight percentage ratios on the other inlet side of the Y-channel, and performance was evaluated through the degree of mixing at different amplitudes, flow rates, frequencies, and spine sharp-tip edge angles, using the grayscale pixel intensity in MATLAB. The degree of mixing (M) was found to improve significantly with acoustic streaming, to 96.8% from 67.42% without it, for a flow rate of 0.0986 μl/min, a frequency of 12 kHz and an oscillation amplitude of 40 V at y = 2.26 mm. The results suggest the creation of a new 3D steady streaming rolling motion at high volume flow rates around the entrance junction mixing region, which promotes the mixing of two similar high-viscosity fluids inside the microchannel that laminar flow alone, under these conditions, is unable to mix.
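A common way to turn grayscale pixel intensities into a degree of mixing, which is plausibly close to what the MATLAB analysis does, is M = 1 - σ/σ₀, where σ is the intensity standard deviation across a channel cross-section and σ₀ that of the fully unmixed state. This definition and the synthetic profiles below are assumptions for illustration, not the paper's exact formula or data.

```python
import numpy as np

def degree_of_mixing(gray, unmixed_std):
    """Degree of mixing from grayscale intensities across a channel
    cross-section: 1 when intensity is uniform (perfectly mixed),
    0 when the variation equals the fully unmixed reference."""
    return 1.0 - np.std(gray) / unmixed_std

# synthetic cross-section profiles (0 = clear water, 255 = dyed stream)
unmixed = np.array([255.0] * 50 + [0.0] * 50)     # sharp interface
partially = np.array([200.0] * 50 + [55.0] * 50)  # dye has spread
sigma0 = np.std(unmixed)

print(round(degree_of_mixing(unmixed, sigma0), 2))    # 0.0
print(round(degree_of_mixing(partially, sigma0), 2))  # 0.43
```

Applied to μPIV frames, `gray` would be a row of pixels extracted at a fixed downstream position (such as the y = 2.26 mm station quoted above).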

Keywords: nano fabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 34
659 Compositional Influence in the Photovoltaic Properties of Dual Ion Beam Sputtered Cu₂ZnSn(S,Se)₄ Thin Films

Authors: Brajendra S. Sengar, Vivek Garg, Gaurav Siddharth, Nisheka Anadkat, Amitesh Kumar, Shaibal Mukherjee

Abstract:

The optimal band gap (~1 to 1.5 eV) and high absorption coefficient (~10⁴ cm⁻¹) have made Cu₂ZnSn(S,Se)₄ (CZTSSe) films one of the most promising absorber materials in thin-film photovoltaics. Additionally, CZTSSe consists of elements that are abundant and non-toxic, which makes it even more favourable. The CZTSSe thin films are grown at substrate temperatures (Tsub) of 100 to 500 °C on soda-lime glass (SLG) substrates in an Elettrorava dual ion beam sputtering (DIBS) system, utilizing a target at a working pressure of 2.43x10⁻⁴ mbar with an RF power of 45 W in argon ambient. The chemical composition, depth profile, structural properties and optical properties of these CZTSSe thin films were examined by energy-dispersive X-ray spectroscopy (EDX, Oxford Instruments), a Hiden secondary ion mass spectrometry (SIMS) workstation with an oxygen ion gun of energy up to 5 keV, X-ray diffraction (XRD; Rigaku, Cu Kα radiation, λ = 0.154 nm) and spectroscopic ellipsometry (SE, M-2000D from J. A. Woollam Co., Inc). It is observed that the thin films deposited at Tsub = 200 and 300 °C show Cu-poor and Zn-rich states (i.e., Cu/(Zn+Sn) < 1 and Zn/Sn > 1), which is not the case for films grown at other Tsub. It has been reported that the CZTSSe thin films with the highest efficiency are typically in Cu-poor and Zn-rich states. The values of the band gap in the fundamental absorption region of CZTSSe are found to be in the range of 1.23-1.70 eV, depending on the Cu/(Zn+Sn) ratio. A decline in the optical band gap is also observed with increasing Cu/(Zn+Sn) ratio (evaluated from EDX measurements): Cu-poor films are found to have a higher optical band gap than Cu-rich films. The decrease in the band gap with increasing Cu content in CZTSSe films may be attributed to changes in the extent of p-d hybridization between Cu d-levels and (S,Se) p-levels. CZTSSe thin films with Cu/(Zn+Sn) ratios in the range 0.86-1.5 have been successfully deposited using DIBS.
The optical band gap of the films is found to vary from 1.23 to 1.70 eV based on the Cu/(Zn+Sn) ratio. CZTSSe films with a Cu/(Zn+Sn) ratio of 0.86 are found to have an optical band gap close to the ideal band gap (1.49 eV) for the highest theoretical conversion efficiency. Thus, by tailoring the value of Cu/(Zn+Sn), CZTSSe thin films with the desired band gap can be obtained. Acknowledgment: We are thankful for the DIBS, EDX, and XRD facilities at the Sophisticated Instrument Centre (SIC) at IIT Indore. The authors B. S. S. and A. K. acknowledge CSIR, and V. G. acknowledges UGC, India, for their fellowships. B. S. S. is thankful to DST and IUSSTF for the BASE Internship Award. Prof. Shaibal Mukherjee is thankful to DST and IUSSTF for the BASE Fellowship and the MEITY YFRF award. This work is partially supported by DAE BRNS, DST CERI, and a DST-RFBR Project under the India-Russia Programme of Cooperation in Science and Technology. We are thankful to Mukul Gupta for the SIMS facility at UGC-DAE Indore.
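The band-gap extraction from ellipsometry-derived absorption data is commonly done with a Tauc plot: for a direct-gap absorber, (αhν)² is linear in photon energy above the gap, and extrapolating the linear region to zero gives Eg. The absorption values below are synthetic, generated for an assumed Eg of 1.4 eV inside the reported 1.23-1.70 eV range; this is an illustration of the method, not the paper's data or necessarily its exact analysis.

```python
import numpy as np

# synthetic direct-gap absorption: (alpha*h*nu)^2 grows linearly
# above the assumed band gap Eg_true
Eg_true = 1.4                              # eV, assumed for the sketch
hv = np.linspace(1.45, 1.70, 26)           # photon energies above Eg, eV
alpha_hv_sq = 5.0 * (hv - Eg_true)         # arbitrary linear slope

# Tauc extrapolation: fit the linear region, find its x-intercept
slope, intercept = np.polyfit(hv, alpha_hv_sq, 1)
Eg_fit = -intercept / slope                # where (alpha*h*nu)^2 = 0
print(round(Eg_fit, 2))                    # recovers the assumed 1.40 eV
```

With measured spectra, only the genuinely linear portion just above the absorption edge would be included in the fit; including sub-gap or saturated regions biases the intercept.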

Keywords: CZTSSe, DIBS, EDX, solar cell

Procedia PDF Downloads 250