Search results for: minimal pair
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1239

39 Discriminant Shooting-Related Statistics between Winners and Losers 2023 FIBA U19 Basketball World Cup

Authors: Navid Ebrahmi Madiseh, Sina Esfandiarpour-Broujeni, Rahil Razeghi

Abstract:

Introduction: Quantitative analysis of game-related statistical parameters is widely used to evaluate basketball performance at both individual and team levels. Non-free-throw shooting plays a crucial role as the primary scoring method, holding significant importance in the game's technical aspect. The predictive value of game-related statistics has been explored in relation to various contextual and situational variables, and many similarities and differences have been found between age groups and levels of competition. For instance, in the World Basketball Championships after the 2010 rule change, 2-point field goals distinguished winners from losers in women's games but not in men's games, and the impact of successful 3-point field goals on women's games was minimal. The study aimed to identify and compare discriminant shooting-related statistics between winning and losing teams in the men's and women's FIBA U19 Basketball World Cup 2023 tournaments. Method: Data from 112 observations (2 per game) of 16 teams (for each gender) in the FIBA U19 Basketball World Cup 2023 were selected as samples. The data were obtained from the official FIBA website using Python. Specific information was extracted, organized into a DataFrame, and consisted of twelve variables, including shooting percentages, attempts, and scoring ratios for 3-pointers, mid-range shots, paint shots, and free throws: Made% = scoring-type successful attempts / scoring-type total attempts (1); Free-throw-pts% (free throw score ratio) = (free throw score / total score) × 100 (2); Mid-pts% (mid-range score ratio) = (mid-range score / total score) × 100 (3); Paint-pts% (paint score ratio) = (paint score / total score) × 100 (4); 3p-pts% (three-point score ratio) = (three-point score / total score) × 100 (5). Independent t-tests were used to examine significant differences in shooting-related statistical parameters between winning and losing teams for both genders. Statistical significance was set at p < 0.05. All statistical analyses were completed with SPSS, Version 18. Results: The results showed that 3p-made%, mid-pts%, paint-made%, paint-pts%, mid-attempts, and paint-attempts were significantly different between winners and losers in men (t=-3.465, P<0.05; t=3.681, P<0.05; t=-5.884, P<0.05; t=-3.007, P<0.05; t=2.549, P<0.05; t=-3.921, P<0.05). For women, significant differences between winners and losers were found for 3p-made%, 3p-pts%, paint-made%, and paint-attempts (t=-6.429, P<0.05; t=-1.993, P<0.05; t=-1.993, P<0.05; t=-4.115, P<0.05; t=2.451, P<0.05). Discussion: The research aimed to compare shooting-related statistics between winners and losers in men's and women's teams at the FIBA U19 Basketball World Cup 2023. Results indicated that men's winners excelled in 3p-made%, paint-made%, paint-pts%, paint-attempts, and mid-attempts, consistent with previous studies. This study found that losers in men's teams had a higher mid-pts% than winners, which was inconsistent with previous findings; it has been suggested that winners tend to prioritize statistically efficient shots while forcing the opponent to take mid-range shots. In women's games, significant differences in 3p-made%, 3p-pts%, paint-made%, and paint-attempts were observed, indicating that winners relied on riskier outside scoring strategies. Overall, winners exhibited higher accuracy in paint and 3P shooting than losers, but they also relied more on outside offensive strategies.
Additionally, winners acquired a higher ratio of their points from 3P shots, which demonstrates their confidence in their skills and willingness to take risks at this competitive level.
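
As an illustration of the score-ratio metrics in Eqs. (1)-(5) and the independent t-test used above, the following Python sketch computes one ratio and compares two groups with scipy. It is not the authors' code (the study used Python only for data collection and SPSS for the tests), and the values below are hypothetical placeholders, not tournament data.

```python
# Minimal sketch: score-ratio metrics (Eqs. 1-5) and an independent t-test
# between winners and losers. All numbers are hypothetical placeholders.
import numpy as np
from scipy import stats

def made_pct(made, attempts):
    """Eq. (1): successful attempts / total attempts for a scoring type."""
    return made / attempts

def pts_ratio(type_points, total_points):
    """Eqs. (2)-(5): share of the total score contributed by a scoring type, in %."""
    return type_points / total_points * 100

# Hypothetical per-game paint shooting percentages for two groups.
winners_paint_made_pct = np.array([0.58, 0.61, 0.55, 0.63])
losers_paint_made_pct = np.array([0.49, 0.52, 0.47, 0.50])

# Independent t-test with significance set at p < 0.05, as in the study.
t_stat, p_value = stats.ttest_ind(winners_paint_made_pct, losers_paint_made_pct)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```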

Keywords: gender, losers, shoot-statistic, U19, winners

Procedia PDF Downloads 67
38 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing

Authors: Arjun Kumar Rath, Titus Dhanasingh

Abstract:

Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing at larger volumes. As components become more integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, they are custom built for every product and remain unusable for other variants; a majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all of these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, AutoTest, KernelCI, etc.) designed for testing embedded system devices, each with several unique strengths, but no single tool or framework satisfies all of the testing needs of embedded systems; hence the need for an extensible framework that integrates a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional methods involve developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability (platform-specific libraries cannot be reused), the need to maintain source infrastructure for individual hardware platforms, and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test environment set-up, scalability, and maintenance. A desirable strategy is one focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, during all phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed that can be deployed in embedded-system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing of different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) time-to-market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from platform-specific hardware initialization and configuration make adapting test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; and (4) continuous integration: pre-integration with Jenkins enables continuous testing and automated software updates. Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.
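
The ETF's real API is not published in the abstract, so the sketch below is a hypothetical Python illustration of the reusability idea described above: platform-specific initialization is isolated behind a common interface so that the same test case can run unchanged on any board.

```python
# Illustrative sketch only; the Board/DemoBoard abstraction is hypothetical,
# not the ETF's actual interface.
import abc

class Board(abc.ABC):
    """Common interface; each hardware platform supplies its own subclass."""
    @abc.abstractmethod
    def init(self): ...
    @abc.abstractmethod
    def read_temperature_sensor(self): ...

class DemoBoard(Board):
    """Stand-in for a platform-specific driver layer."""
    def init(self):
        print("configuring clocks and pins for DemoBoard")
    def read_temperature_sensor(self):
        return 25.0  # would talk to the real peripheral on actual hardware

def test_temperature_sensor(board: Board):
    """Platform-agnostic test case, reusable across boards."""
    board.init()
    value = board.read_temperature_sensor()
    assert -40.0 <= value <= 125.0, "temperature out of plausible range"

if __name__ == "__main__":
    test_temperature_sensor(DemoBoard())
    print("temperature sensor test passed")
```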

Keywords: board diagnostics software, embedded system, hardware testing, test frameworks

Procedia PDF Downloads 116
37 Azolla Pinnata as Promising Source for Animal Feed in India: An Experimental Study to Evaluate the Nutrient Enhancement Result of Feed

Authors: Roshni Raha, Karthikeyan S.

Abstract:

The world's largest livestock population resides in India. Existing strategies must be modified to increase the production of livestock and their by-products in order to meet the demands of the growing human population. Even though India leads the world in both milk production and cattle numbers, average per-animal productivity remains low. This may be due to the animals' poor nutrition caused by a chronic under-availability of high-quality fodder and feed. This article explores Azolla pinnata as a promising source for producing high-quality unconventional feed and fodder for effective livestock production and good-quality breeding in India. This article is an exploratory study using a literature survey and experimental analysis. In the realm of agri-biotechnology, Azolla sp. has gained attention for helping farmers achieve sustainability, having minimal land requirements, and serving as a feed element that does not compete with human food sources. It has a high methionine content and is a good source of protein. It is easily digested because its lignin content is low, and it is rich in antioxidants and vitamins such as beta-carotene, vitamin A, and vitamin B12. Using this concept, the paper aims to investigate and develop a model of using azolla plants as a novel, high-potential feed source to combat the problems of low production and poor quality of animals in India. A representative sample of animal feed supplemented with azolla is collected. The sample is ground into a fine powder using a mortar and pestle. PITC (phenylisothiocyanate) is added to derivatize the amino acids. The sample is analyzed using HPLC (High-Performance Liquid Chromatography) to measure the amino acids and monitor the protein content of the sample feed. The HPLC amino acid measurements are converted to milligrams per gram of protein through a set of amino acid profiling calculations. The resulting amino acid profile is used to validate the proximate results of nutrient enhancement from the azolla in the sample. Based on the proximate composition of azolla meal, the enhanced values were higher than the standard values of normal fodder supplements, indicating a much richer and denser nutrient supply. Thus, the azolla-supplemented feed proved to be a promising source of animal fodder. This would in turn lead to higher production and better-quality animals, helping to meet the economic demands of the growing Indian population. Azolla plants have no side effects and can be considered safe and effective for inclusion in animal feed. One area of future research could begin with an upstream scaling strategy for azolla plants in India. This could involve introducing several bioreactor types for its commercial production. Since Azolla sp. has been shown in this paper to be a promising source of high-quality animal feed and fodder, large-scale production of azolla plants will help to make the process quicker, more efficient and easily accessible. Labor expenses will also be reduced by employing bioreactors for large-scale manufacturing.
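
The abstract does not give the exact conversion calculation, so the Python sketch below is an assumed illustration of the unit-conversion step: amino acid concentrations measured per gram of sample are expressed per gram of crude protein. All numbers are placeholders, not experimental data.

```python
# Assumed conversion: mg amino acid per g protein =
#   (mg amino acid per g sample) / (g crude protein per g sample).
# Values are hypothetical placeholders.
amino_acids_mg_per_g_sample = {
    "methionine": 4.2,   # hypothetical HPLC result
    "lysine": 10.5,
}
crude_protein_g_per_g_sample = 0.25  # hypothetical proximate analysis value

mg_per_g_protein = {
    name: conc / crude_protein_g_per_g_sample
    for name, conc in amino_acids_mg_per_g_sample.items()
}
for name, value in mg_per_g_protein.items():
    print(f"{name}: {value:.1f} mg per g protein")
```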

Keywords: azolla, fodder, nutrient, protein

Procedia PDF Downloads 27
36 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production

Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne

Abstract:

One of the biggest challenges the world faces today is global warming, caused by greenhouse gases (GHGs) from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80–95% below 1990 levels by the year 2050. Renewable technologies are vital to diminish energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE will present a number of challenges, especially regarding the need to balance the variable energy demand with the weather-dependent fluctuation of energy supply. Biogas plants would therefore play an important role in this context, since they are easily adaptable. Feedstock availability varies locally and seasonally; however, there is a lack of knowledge about how biogas plants can be operated stably on local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models that adequately describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used with different loading rates. Life cycle assessment (LCA) is a technique for analyzing the environmental aspects of a product from its creation to its disposal and is highly recommended as a decision-making tool. In order to achieve suitable strategies, flexible energy generation by biogas plants, a secure production process and the maximization of environmental benefits can be obtained by combining process modelling and LCA approaches. For this reason, this study focuses on a biogas plant that flexibly generates the required energy from the co-digestion of maize, grass and cattle manure while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. The program was structured in Matlab to simulate any biogas process based on the AMOCO model, combined with the equations necessary to obtain the climate change, acidification and eutrophication potentials of the whole production system based on the ReCiPe midpoint v.1.06 methodology. The developed simulation was optimized based on real data from operating biogas plants and the existing literature. The results show that the AMOCO model can successfully reproduce the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This would allow for a prediction of the process when the feedstock utilization supports the establishment of closed material circles within a smart bio-production grid, under the constraint of minimal drawbacks for the environment and maximal sustainability.
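
The LCA midpoint step described above amounts to multiplying each inventory flow by a characterization factor and summing per impact category. The Python sketch below illustrates that calculation only; it is not the authors' Matlab implementation, and the inventory and factor values are illustrative placeholders rather than ReCiPe v.1.06 figures.

```python
# Minimal sketch of a midpoint impact calculation: impact = sum over flows of
# (inventory amount * characterization factor). All values are illustrative.
inventory = {            # hypothetical life-cycle inventory, kg per functional unit
    "CO2 to air": 120.0,
    "CH4 to air": 1.5,
    "NH3 to air": 0.4,
}

characterization = {     # category -> {flow: factor}, placeholder values only
    "climate change (kg CO2-eq)": {"CO2 to air": 1.0, "CH4 to air": 25.0},
    "acidification (kg SO2-eq)": {"NH3 to air": 1.6},
}

impacts = {
    category: sum(inventory.get(flow, 0.0) * cf for flow, cf in factors.items())
    for category, factors in characterization.items()
}
print(impacts)
```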

Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling

Procedia PDF Downloads 163
35 A Comparative Study of Efficacy and Safety of Salicylic Acid, Trichloroacetic Acid and Glycolic Acid in Various Facial Melanosis

Authors: Shivani Dhande, Sanjiv Choudhary, Adarshlata Singh

Abstract:

Introduction: Chemical peeling is a popular, relatively inexpensive and generally safe day procedure for the treatment of pigmentary skin disorders and for skin rejuvenation. Chemical peels are classified by depth of action into superficial, medium, and deep peels. Various facial pigmentary conditions have a significant impact on quality of life, causing psychological stress and necessitating safe and effective treatment. Aim and Objectives: To compare the efficacy of salicylic acid, trichloroacetic acid and glycolic acid in facial melanosis (melasma, photomelanosis and post-acne pigmentation), and to study the side effects of the above-mentioned peeling agents. Method and Materials: This was a randomized, parallel-control, single-blind study of 36 cases in total, 12 cases each of melasma, photomelanosis and post-acne pigmentation, in the age group 20-50 years with Fitzpatrick skin type IV. Wood's lamp examination was done to confirm the type of melasma. Patients with a keloidal tendency, active herpes infection or a past history of hypersensitivity to salicylic acid, trichloroacetic acid or glycolic acid, as well as patients on systemic isotretinoin, were excluded. Clinical photographs were taken at the beginning of therapy and then serially to assess the clinical response. Prior to application, written informed consent was obtained and a post-auricular test peel was performed. Patients were divided into 3 groups containing 12 patients each of melasma, photomelanosis and post-acne pigmentation. All three peels, SA 20% (once every 2 weeks), GA 50% (once every 3 weeks) and TCA 15% (once every 3 weeks), were used, with a total of six sittings for each patient. Before application of the peel, patients were counseled to wash the face with soap and water; the face was then dried and cleaned with spirit and acetone to remove all cutaneous oils. GA, TCA and SA were applied with cotton buds/gauze using mild strokes. After a contact period of 5-10 minutes, neutralization was done with cold water. Post-peel topical sunscreen application was mandatory. MASI was used pre- and post-treatment to assess melasma. The investigator's global improvement scale for overall hyperpigmentation (4 - significant, 3 - moderate, 2 - mild, 1 - minimal, 0 - no change) and the patient's satisfaction grading scale (>70% - excellent response, 50-70% - good response, <50% - average response) were used to assess improvement in all three types of facial melanosis. Results: Of the 12 patients with melasma, 4 (33.33%) showed excellent results: 3 (25%) with GA and 1 (8.33%) with TCA. A good response was seen in 4 (33.33%) patients: 1 (8.33%) each with GA and SA and 2 (16.66%) with TCA. A poor response was seen in 4 (33.33%) patients: 1 (8.33%) with TCA and 3 (25%) with SA. Of the 12 patients with photomelanosis, excellent results were seen in 3 (25%) patients with TCA. A good response was seen in 4 (33.33%) patients: 1 (8.33%) each with TCA and SA and 2 (16.66%) with GA. A poor response was seen in 5 (41.66%) patients: 3 (25%) with SA and 2 (16.66%) with GA. Of the 12 patients with post-acne pigmentation, an excellent response was seen in 3 (25%) patients: 2 (16.66%) with SA and 1 (8.33%) with TCA. A good response was seen in 5 (41.66%) patients: 2 (16.66%) each with SA and GA and 1 (8.33%) with TCA. A poor response was seen in 4 (33.33%) patients: 2 (16.66%) each with SA and TCA. No major side effects in the form of scarring or persistent pigmentation were seen. Transient blackening of the skin with a burning sensation was seen in cases treated with TCA and SA, and post-procedural itching and redness were noted with the GA peel.
Conclusion: In our study, GA (50%), TCA (15%) and SA (20%) peels showed an excellent response in melasma, photomelanosis and post-acne pigmentation, respectively. All three peeling agents were well tolerated, without any significant side effects at the above-specified concentrations.
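
The MASI score mentioned in the methods is typically computed with the commonly cited formula in which each facial region contributes weight × area score × (darkness + homogeneity), the forehead and each malar region being weighted 0.3 and the chin 0.1. The Python sketch below illustrates that calculation with placeholder scores; the abstract does not report the authors' individual MASI inputs.

```python
# Sketch of the commonly cited MASI formula; scores below are placeholders.
REGION_WEIGHTS = {"forehead": 0.3, "right_malar": 0.3, "left_malar": 0.3, "chin": 0.1}

def masi(scores):
    """scores: region -> (area 0-6, darkness 0-4, homogeneity 0-4)."""
    return sum(
        REGION_WEIGHTS[region] * area * (darkness + homogeneity)
        for region, (area, darkness, homogeneity) in scores.items()
    )

example = {  # hypothetical pre-treatment assessment
    "forehead": (2, 3, 2),
    "right_malar": (3, 3, 3),
    "left_malar": (3, 3, 2),
    "chin": (1, 2, 1),
}
print(f"MASI = {masi(example):.1f}")  # the maximum possible score is 48
```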

Keywords: facial melanosis, glycolic acid, salicylic acid, trichloroacetic acid

Procedia PDF Downloads 222
34 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5" ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch's defensible space maps was combined with Black Swan's patented approach, which uses 39 other risk characteristics, into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
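
The abstract does not state which spectral vegetation index or algorithms FireWatch actually uses, so the Python sketch below only illustrates the general idea of deriving a vegetation index map from multispectral bands. NDVI = (NIR - Red) / (NIR + Red) is one common choice; the band values and the 0.4 threshold are synthetic placeholders.

```python
# Illustrative NDVI computation on synthetic reflectance arrays.
import numpy as np

nir = np.array([[0.52, 0.61], [0.47, 0.58]])  # near-infrared reflectance
red = np.array([[0.10, 0.12], [0.22, 0.09]])  # red reflectance

ndvi = (nir - red) / (nir + red)

# A simple threshold separating dense vegetation (potential fuel) from
# sparse cover; the cut-off is illustrative only.
fuel_mask = ndvi > 0.4
print(ndvi.round(2))
print(fuel_mask)
```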

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 75
33 Ultra-Rapid and Efficient Immunomagnetic Separation of Listeria Monocytogenes from Complex Samples in High-Gradient Magnetic Field Using Disposable Magnetic Microfluidic Device

Authors: L. Malic, X. Zhang, D. Brassard, L. Clime, J. Daoud, C. Luebbert, V. Barrere, A. Boutin, S. Bidawid, N. Corneau, J. Farber, T. Veres

Abstract:

The incidence of infections caused by foodborne pathogens such as Listeria monocytogenes (L. monocytogenes) poses a great potential threat to public health and safety. These issues are further exacerbated by legal repercussions due to the “zero tolerance” food safety standards adopted in developed countries. Unfortunately, a large number of related disease outbreaks are caused by pathogens present in extremely low counts that are currently undetectable by available techniques. The development of highly sensitive and rapid detection of foodborne pathogens is therefore crucial and requires robust and efficient pre-analytical sample preparation. Immunomagnetic separation is a popular approach to sample preparation. Microfluidic chips combined with external magnets have emerged as viable high-throughput methods. However, external magnets alone are not suitable for the capture of nanoparticles, as very strong magnetic fields are required. Devices that incorporate an externally applied magnetic field and microstructures of a soft magnetic material have thus been used for local field amplification. Unfortunately, the very complex and costly fabrication processes used for the integration of soft magnetic materials in the reported proof-of-concept devices would prohibit their use as disposable tools for food and water safety or diagnostic applications. We present a sample preparation magnetic microfluidic device implemented in low-cost thermoplastic polymers using fabrication techniques suitable for mass production. The developed magnetic capture chip (M-chip) was employed for the rapid capture and release of L. monocytogenes conjugated to immunomagnetic nanoparticles (IMNs) in buffer and beef filtrate. The M-chip relies on a dense array of nickel-coated high-aspect-ratio pillars for capture with controlled magnetic field distribution and a microfluidic channel network for sample delivery, waste, wash and recovery. The developed nickel-coating process and passivation allow the generation of switchable local perturbations within the uniform magnetic field generated with a pair of permanent magnets placed at the opposite edges of the chip. This leads to a strong and reversible trapping force, wherein high local magnetic field gradients allow efficient capture of IMNs conjugated to L. monocytogenes flowing through the microfluidic chamber. The experimental optimization of the M-chip was performed using commercially available magnetic microparticles and fabricated silica-coated iron-oxide nanoparticles. The fabricated nanoparticles were optimized to achieve the desired magnetic moment, and their surface functionalization was tailored to allow efficient capture antibody immobilization. The integration, validation and further optimization of the capture and release protocol are demonstrated using both dead and live L. monocytogenes through fluorescence microscopy and the plate-culture method. The capture efficiency of the chip was found to vary as a function of the Listeria-to-nanoparticle concentration ratio. A maximum capture efficiency of 30% was obtained, and the 24-hour plate-culture method allowed the detection of an initial sample concentration of only 16 cfu/ml. The device was also very efficient in concentrating the sample from a 10 ml initial volume. Specifically, 280% concentration efficiency was achieved in only 17 minutes, demonstrating the suitability of the system for food safety applications.
In addition, flexible design and low-cost fabrication process will allow rapid sample preparation for applications beyond food and water safety, including point-of-care diagnosis.
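
The abstract reports capture and concentration efficiencies but not their exact formulas, so the short Python sketch below is based on assumed definitions (captured cells over input cells, and output concentration over input concentration, both in %). The numbers are placeholders chosen only to show the arithmetic.

```python
# Assumed definitions; all inputs are hypothetical placeholders.
def capture_efficiency(cells_captured, cells_input):
    return cells_captured / cells_input * 100.0

def concentration_efficiency(conc_out_cfu_per_ml, conc_in_cfu_per_ml):
    return conc_out_cfu_per_ml / conc_in_cfu_per_ml * 100.0

print(f"capture: {capture_efficiency(3.0e5, 1.0e6):.0f}%")          # e.g. 30%
print(f"concentration: {concentration_efficiency(280.0, 100.0):.0f}%")  # e.g. 280%
```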

Keywords: array of pillars, bacteria isolation, immunomagnetic sample preparation, polymer microfluidic device

Procedia PDF Downloads 250
32 Exploring Participatory Research Approaches in Agricultural Settings: Analyzing Pathways to Enhance Innovation in Production

Authors: Michele Paleologo, Marta Acampora, Serena Barello, Guendalina Graffigna

Abstract:

Introduction: In the face of increasing demands for higher agricultural productivity with minimal environmental impact, participatory research approaches emerge as promising means to promote innovation. However, the complexities and ambiguities surrounding these approaches in both theory and practice present challenges. This Scoping Review seeks to bridge these gaps by mapping participatory approaches in agricultural contexts, analyzing their characteristics, and identifying indicators of success. Methods: Following PRISMA guidelines, we conducted a systematic Scoping Review, searching Scopus and Web of Science databases. Our review encompassed 34 projects from diverse geographical regions and farming contexts. Thematic analysis was employed to explore the types of innovation promoted and the categories of participants involved. Results: The identified innovation types encompass technological advancements, sustainable farming practices, and market integration, forming 5 main themes: climate change, cultivar, irrigation, pest and herbicide, and technical improvement. These themes represent critical areas where participatory research drives innovation to address pressing agricultural challenges. Participants were categorized as citizens, experts, NGOs, private companies, and public bodies. Understanding their roles is vital for designing effective participatory initiatives that embrace diverse stakeholders. The review also highlighted 27 theoretical frameworks underpinning participatory projects. Clearer guidelines and reporting standards are crucial for facilitating the comparison and synthesis of findings across studies, thereby enhancing the robustness of future participatory endeavors. Furthermore, we identified three main categories of barriers and facilitators: pragmatic/behavioral, emotional/relational, and cognitive. These insights underscore the significance of participant engagement and collaborative decision-making for project success beyond theoretical considerations. Regarding participation, projects were classified as contributory (5 cases), where stakeholders contributed insights; collaborative (10 cases), with active co-designing of solutions; and co-created (19 cases), featuring deep stakeholder involvement from ideation to implementation, resulting in joint ownership of outcomes. Such diverse participation modes highlight the adaptability of participatory approaches to varying agricultural contexts. Discussion: In conclusion, this Scoping Review demonstrates the potential of participatory research in driving transformative changes in farmers' practices, fostering sustainability and innovation in agriculture. Understanding the diverse landscape of participatory approaches, theoretical frameworks, and participant engagement strategies is essential for designing effective and context-specific interventions. Collaborative efforts among researchers, practitioners, and stakeholders are pivotal in harnessing the full potential of participatory approaches and driving positive change in agricultural settings worldwide. The identified themes of innovation and participation modes provide valuable insights for future research and targeted interventions in agricultural innovation.

Keywords: participatory research, co-creation, agricultural innovation, stakeholders' engagement

Procedia PDF Downloads 29
31 Impact of Six-Minute Walk or Rest Break during Extended GamePlay on Executive Function in First Person Shooter Esport Players

Authors: Joanne DiFrancisco-Donoghue, Seth E. Jenny, Peter C. Douris, Sophia Ahmad, Kyle Yuen, Hillary Gan, Kenney Abraham, Amber Sousa

Abstract:

Background: Guidelines for the maintenance of health of esports players and the cognitive changes that accompany competitive gaming are understudied. Executive functioning is an important cognitive skill for an esports player. The relationship between executive functions and physical exercise has been well established; however, the effects of prolonged sitting, regardless of physical activity level, have not been established. Prolonged uninterrupted sitting reduces cerebral blood flow, and reduced cerebral blood flow is associated with lower cognitive function and fatigue. This decrease in cerebral blood flow has been shown to be offset by frequent, short walking breaks. These breaks can be as little as 2 minutes at the 30-minute mark and 6 minutes following 60 minutes of prolonged sitting; the rationale is the increase in blood flow and the positive effects this has on metabolic responses. The primary purpose of this study was to evaluate executive function changes following 6-minute bouts of walking and complete rest mid-session, compared to no break, during prolonged gameplay in competitive first-person shooter (FPS) esports players. Methods: This study was conducted virtually due to the Covid-19 pandemic and was approved by the New York Institute of Technology IRB. Twelve competitive FPS participants signed written consent to participate in this randomized pilot study. All participants held a gold ranking or higher. Participants were asked to play for 2 hours on three separate days. Outcome measures to test executive function included the Color Stroop and the Tower of London tests, which were administered online each day prior to gaming and at the completion of gaming. All participants completed the tests before testing for familiarization. One day of testing consisted of a 6-minute walk break after 60-75 minutes of play; the Rate of Perceived Exertion (RPE) was recorded, and the participant then continued to play for another 60-75 minutes and completed the tests again. On another day, the participants repeated the same methods, replacing the 6-minute walk with lying down and resting for 6 minutes. On the last day, the participants played continuously with no break for 2 hours and repeated the outcome tests pre- and post-play. A Latin square was used to randomize the treatment order. Results: Using descriptive statistics, the largest change in mean reaction time on incorrect congruent trials from pre- to post-play was seen following the 6-minute walk (662.0 (609.6) ms pre to 602.8 (539.2) ms post), followed by the 6-minute rest condition (681.7 (618.1) ms pre to 666.3 (607.9) ms post), with minimal change in the continuous condition (594.0 (534.1) ms pre to 589.6 (552.9) ms post). The mean solution time was fastest in the resting condition (7774.6 (6302.8) ms), followed by the walk condition (7929.4 (5992.8) ms), with the continuous condition being slowest (9337.3 (7228.7) ms). Conclusion: Short walking breaks improve blood flow and reduce the risk of venous thromboembolism during prolonged sitting. This pilot study demonstrated that a low-intensity 6-minute walk break, following 60 minutes of play, may also improve executive function in FPS gamers.

Keywords: executive function, FPS, physical activity, prolonged sitting

Procedia PDF Downloads 180
30 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and time to market for business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
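
i2kit's actual manifest format is not given in the abstract, so the Python sketch below is a hypothetical illustration of the workflow described above: a declarative definition of a microservice as a pod of containers, rendered to YAML, which a tool could then turn into a machine image and a load-balanced group of virtual machines. The field names, image names and the environment-variable placeholder are all assumptions.

```python
# Hypothetical manifest sketch, not i2kit's real schema.
import yaml  # requires PyYAML (pip install pyyaml)

service = {
    "name": "orders-api",                  # hypothetical microservice name
    "replicas": 2,                          # number of virtual machines
    "containers": [                         # the pod: strongly related containers
        {"name": "app", "image": "example/orders-api:1.0"},
        {"name": "proxy", "image": "nginx:alpine"},
    ],
    "expose": {"port": 443, "loadBalancer": True},
    # Service discovery as described above: the load balancer endpoint of a
    # dependency is injected through an environment variable.
    "environment": {"PAYMENTS_ENDPOINT": "${loadBalancer:payments-api}"},
}

print(yaml.safe_dump(service, sort_keys=False))
```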

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 150
29 Developing a Sustainable Transit Planning Index Using Analytical Hierarchy Process Method for ZEB Implementation in Canada

Authors: Mona Ghafouri-Azar, Sara Diamond, Jeremy Bowes, Grace Yuan, Aimee Burnett, Michelle Wyndham-West, Sara Wagner, Anand Pariyarath

Abstract:

Transportation is the fastest-growing source of greenhouse gas emissions worldwide. In Canada, it is responsible for 23% of total CO2 emissions from fuel combustion, and emissions from the transportation sector are the second largest source after the oil and gas sector. Currently, most Canadian public transportation systems rely on buses that operate on fossil fuels. Canada is investing billions of dollars to replace diesel buses with electric buses, as this is perceived to have a significant impact on climate mitigation. This paper focuses on the possible impacts of zero emission buses (ZEBs) on sustainable development, considering three dimensions of sustainability: environmental quality, economic growth, and social development. A sustainable transportation system is one that is safe, affordable, accessible, efficient, and resilient, and that contributes minimal emissions of carbon and other pollutants. To enable the implementation of these goals, relevant indicators that measure progress towards a sustainable transportation system were selected and defined, drawn from Canadian and international examples. Studies compare different European cities in terms of development, sustainability, and infrastructure by using transport performance indicators, and a normalized transport sustainability index measures and compares policies in different urban areas and allows fine-tuning of policies. Analysts use a number of methods for sustainability analysis, such as cost-benefit analysis (CBA) to assess economic benefit, life-cycle assessment (LCA) to assess social, economic, and environmental factors and goals, and multi-criteria decision making (MCDM) analysis, which can compare differing stakeholder preferences. A multi-criteria decision making approach is an appropriate methodology to plan and evaluate sustainable transit development and to provide insights and meaningful information for decision makers and transit agencies. It is essential to develop a system that aggregates specific discrete indices to assess the sustainability of transportation systems and that prioritizes indicators appropriate for the different Canadian transit system agencies and their preferences and requirements. This study will develop an integrating index that combines existing discrete indexes to support a reliable comparison between the current transportation system (diesel buses) and the new ZEB system emerging in Canada. As a first step, the indexes for each category are selected and the index matrix is constructed. Second, the selected indicators are normalized to remove any inconsistency between them. Next, the normalized matrix is weighted based on the relative importance of each index to the main domains of sustainability using the analytical hierarchy process (AHP) method. This is accomplished through expert judgement of the relative importance of different attributes with respect to the goals, expressed through a pairwise comparison matrix. The consideration of multiple environmental, economic, and social factors (including equity and health) is integrated into a sustainable transit planning index (STPI), which supports realistic ZEB implementation in Canada and beyond and is useful to different stakeholders, agencies, and ministries.
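
To make the AHP weighting step concrete, the Python sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio. It is not the authors' implementation, and the 3x3 matrix comparing the environmental, economic and social dimensions is a made-up example.

```python
# Minimal AHP sketch: weights from the principal eigenvector plus a
# consistency check. The comparison matrix below is illustrative only.
import numpy as np

A = np.array([
    [1.0, 3.0, 2.0],     # environment vs. (environment, economy, social)
    [1/3, 1.0, 1/2],     # economy
    [1/2, 2.0, 1.0],     # social
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index for n criteria
cr = ci / ri                             # consistency ratio; < 0.10 is acceptable

print("weights:", weights.round(3), "CR:", round(cr, 3))
```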

Keywords: zero emission buses, sustainability, sustainable transit, transportation, analytical hierarchy process, environment, economy, social

Procedia PDF Downloads 94
28 Assessment of Natural Flood Management Potential of Sheffield Lakeland to Flood Risks Using GIS: A Case Study of Selected Farms on the Upper Don Catchment

Authors: Samuel Olajide Babawale, Jonathan Bridge

Abstract:

Natural Flood Management (NFM) is promoted as part of sustainable flood management (SFM) in response to climate change adaptation. Stakeholder engagement is central to this approach, and current trends are progressively moving towards a collaborative learning approach in which stakeholder participation is perceived as one of the indicators of sustainable development. Within this methodology, participation embraces a diversity of knowledge and values underpinned by a philosophy of empowerment, equity, trust, and learning. To identify barriers to NFM uptake, there is a need for a new understanding of how stakeholder participation could be enhanced to benefit individual and community resilience within SFM. This is crucial in light of climate change threats and scientific reliability concerns. In contributing to this new understanding, this research evaluated the proposed interventions on six (6) UK NFM farm sites in a catchment known as the Sheffield Lakeland Partnership Area with reference to the Environment Agency Working with Natural Processes (WWNP) potentials/opportunities. Three of the opportunities, namely Run-off Attenuation Potential of 1%, Run-off Attenuation Potential of 3.3% and Riparian Woodland Potential, were modeled. In all the models, the interventions, though they have been proposed or are already in place, are not in agreement with the data presented by the EA WWNP. Findings show some institutional weaknesses, which are seen to inhibit the development of adequate flood management solutions locally, with damaging implications for vulnerable communities. The gap in communication from practitioners poses a challenge to the implementation of real flood mitigating measures: measures that align with the lead agency's nationally accepted datasets are identified as not feasible by the farm management officers within this context. Findings highlight a dominant top-down approach to management with very minimal indication of local interactions. Current WWNP opportunities have been termed unrealistic by the people directly involved in the daily management of the farms, with less emphasis on prevention and mitigation. The targeted approach suggested by the EA WWNP is set against adaptive flood management and community development. The study explores dimensions of participation using the self-reliance and self-help approach to develop a methodology that facilitates reflection on currently institutionalized practices and the need to reshape spaces of interaction to enable empowered and meaningful participation. Stakeholder engagement and resilience planning underpin this research. The findings of the study suggest that different agencies have different perspectives on “community participation”. They also show that the communities in the case study area appear to be the least influential, denied a real chance of discussing their situations and influencing the decisions. This is against the background that these communities are in the most productive regions, contributing massively to national food supplies. The results are discussed in terms of practical implications for addressing interagency partnerships and conducting grassroots collaborations that empower local communities and seek solutions to sustainable development challenges. This study takes a critical look at the challenges and progress made locally in sustainable flood risk management and adaptation to climate change in the United Kingdom towards achieving the global 2030 agenda for sustainable development.
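
The comparison of proposed interventions against the EA WWNP layers is a spatial overlay exercise; the Python sketch below illustrates one way such a check could be done in GIS. The file names, layer contents and column assumptions are hypothetical, not the datasets used in the study.

```python
# Illustrative GIS overlay sketch; paths and layers are hypothetical.
import geopandas as gpd

interventions = gpd.read_file("farm_interventions.shp")    # hypothetical NFM interventions
opportunities = gpd.read_file("ea_wwnp_runoff_1pct.shp")    # hypothetical WWNP opportunity layer

# Reproject to a common CRS, then keep interventions that intersect an
# opportunity polygon.
interventions = interventions.to_crs(opportunities.crs)
matched = gpd.sjoin(interventions, opportunities, predicate="intersects", how="inner")

share = len(matched) / len(interventions) * 100
print(f"{share:.0f}% of proposed interventions lie within the WWNP opportunity layer")
```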

Keywords: natural flood management, sustainable flood management, sustainable development, working with natural processes, environment agency, run-off attenuation potential, climate change

Procedia PDF Downloads 49
27 A Comprehensive Approach to Create ‘Livable Streets’ in the Mixed Land Use of Urban Neighborhoods: A Case Study of Bangalore Street

Authors: K. C. Tanuja, Mamatha P. Raj

Abstract:

"People have always lived on streets. They have been the places where children first learned about the world, where neighbours met, the social centres of towns and cities, the rallying points for revolts, the scenes of repression. The street has always been the scene of this conflict, between living and access, between resident and traveller, between street life and the threat of death.” Livable Streets by Donald Appleyard. Urbanisation is happening rapidly all over the world. As population increasing in the urban settlements, its required to provide quality of life to all the inhabitants who live in. Urban design is a place making strategic planning. Urban design principles promote visualising any place environmentally, socially and economically viable. Urban design strategies include building mass, transit development, economic viability and sustenance and social aspects. Cities are wonderful inventions of diversity- People, things, activities, ideas and ideologies. Cities should be smarter and adjustable to present technology and intelligent system. Streets represent the community in terms of social and physical aspects. Streets are an urban form that responds to many issues and are central to urban life. Streets are for livability, safety, mobility, place of interest, economic opportunity, balancing the ecology and for mass transit. Urban streets are places where people walk, shop, meet and engage in different types of social and recreational activities which make urban community enjoyable. Streets knit the urban fabric of activities. Urban streets become livable with the introduction of social network enhancing the pedestrian character by providing good design features which in turn should achieve the minimal impact of motor vehicle use on pedestrians. Livable streets are the spatial definition to the public right of way on urban streets. Streets in India have traditionally been the public spaces where social life happened or created from ages. Streets constitute the urban public realm where people congregate, celebrate and interact. Streets are public places that can promote social interaction, active living and community identity. Streets as potential contributors to a better living environment, knitting together the urban fabric of people and places that make up a community. Livable streets or complete streets are making our streets as social places, roadways and sidewalks accessible, safe, efficient and useable for all people. The purpose of this paper is to understand the concept of livable street and parameters of livability on urban streets. Streets to be designed as the pedestrians are the main users and create spaces and furniture for social interaction which serves for the needs of the people of all ages and abilities. The problems of streets like congestion due to width of the street, traffic movement and adjacent land use and type of movement need to be redesigned and improve conditions defining the clear movement path for vehicles and pedestrians. Well-designed spatial qualities of street enhances the street environment, livability and then achieves quality of life to the pedestrians. A methodology been derived to arrive at the typologies in street design after analysis of existing situation and comparing with livable standards. It was Donald Appleyard‟s Livable Streets laid out the social effects on streets creating the social network to achieve Livable Streets.

Keywords: livable streets, social interaction, pedestrian use, urban design

Procedia PDF Downloads 119
26 Unpacking the Rise of Social Entrepreneurship over Sustainable Entrepreneurship among Sri Lankan Exporters in SMEs Sector: A Case Study in Sri Lanka

Authors: Amarasinghe Shashikala, Pramudika Hansini, Fernando Tajan, Rathnayake Piyumi

Abstract:

This study investigates the prominence of the social entrepreneurship (SE) model over the sustainable entrepreneurship model among Sri Lankan exporters in the small and medium enterprise (SME) sector. The primary objective of this study is to explore how the unique socio-economic contextual nuances of the country influence this behavior. The study employs a multiple-case study approach, collecting data from thirteen SEs in the SME sector. The findings reveal a significant alignment between SE and the lifestyle of the people in Sri Lanka, attributed largely to its deep-rooted religious setting and cultural norms. A crucial factor driving the prominence of SE is the predominantly labor-intensive nature of production processes within the exporters of the SME sector. These processes inherently lend themselves to SE, providing employment opportunities and fostering community engagement. Further, SE initiatives substantially resonate with community-centric practices, making them more appealing and accessible to the local populace. In contrast, the findings highlight a dilemma between cost-effectiveness and sustainable entrepreneurship. Transitioning to sustainable export products and production processes is demanded by foreign buyers and acknowledged as essential for environmental stewardship, which often requires capital-intensive makeovers. This investment inevitably raises the overall cost of the export product, making it less competitive in the global market. Interestingly, the study notes a disparity between international demand for sustainable products and the willingness of buyers to pay a premium for them. Despite the growing global preference for eco-friendly options, the findings suggest that the additional costs associated with sustainable entrepreneurship are not adequately reflected in the purchasing behavior of international buyers. The abundance of natural resources coupled with a minimal occurrence of natural catastrophes renders exporters less environmentally sensitive. The absence of robust policy support for environmental preservation exacerbates this inclination. Consequently, exporters exhibit a diminished motivation to incorporate environmental sustainability into their business decisions. Instead, attention is redirected towards factors such as the local population's minimum standards of living, prevalent social issues, governmental corruption and inefficiency, and rural poverty. These elements impel exporters to prioritize social well-being when making business decisions. Notably, the emphasis on social impact, rather than environmental impact, appears to be a generational trend, perpetuating a focus on societal aspects in the realm of business. In conclusion, the manifestation of entrepreneurial behavior within developing nations is notably contingent upon contextual nuances. This investigation contributes to a deeper understanding of the dynamics shaping the prevalence of SE over sustainable entrepreneurship among Sri Lankan exporters in the SME sector. The insights generated have implications for policymakers, industry stakeholders, and academics seeking to navigate the delicate balance between socio-cultural values, economic feasibility, and environmental sustainability in the pursuit of responsible business practices within the export sector.

Keywords: small and medium enterprises, social entrepreneurship, Sri Lanka, sustainable entrepreneurship

Procedia PDF Downloads 29
25 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration

Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading, and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed, including how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle-laden returned drilling fluid be used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representativity introduced by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to sample biasing of geochemical signatures due to particle size distributions are presented and show that, depending on the solids control and dewatering techniques used, dewatering can have an unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled-tubing-based greenfields mineral exploration.
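
A power-law relationship in a particle size distribution appears as a straight line in log-log space, so its exponent can be estimated with a simple linear fit. The Python sketch below shows that calculation on synthetic sieve data; it is an illustration of the analysis described above, not the study's measured cuttings data.

```python
# Minimal power-law fit sketch; sieve sizes and fractions are synthetic.
import numpy as np

size_um = np.array([38, 75, 150, 300, 600, 1180])               # sieve size, microns
cum_passing = np.array([0.02, 0.06, 0.14, 0.33, 0.70, 0.98])     # mass fraction finer

# Fit log(cum_passing) = k * log(size) + c over the central region of the curve.
mask = (cum_passing > 0.05) & (cum_passing < 0.95)
k, c = np.polyfit(np.log(size_um[mask]), np.log(cum_passing[mask]), 1)
print(f"estimated power-law exponent ~ {k:.2f}")
```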

Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control

Procedia PDF Downloads 219
24 Residential Building Facade Retrofit

Authors: Galit Shiff, Yael Gilad

Abstract:

The need to retrofit old buildings lies in the fact that buildings are responsible for the bulk of energy use and CO₂ emissions. Existing old structures are more dominant in their effect than new energy-efficient buildings. Nevertheless, not every case of urban renewal that aims to replace old buildings with new neighbourhoods necessarily has a financial or sustainability justification. Façade design plays a vital role in a building's energy performance and each unit's comfort conditions. A residential façade retrofit methodology and feasibility study has been carried out over the past four years, with two projects already fully renovated. The intention of this study is to serve as a case study for limited-budget façade retrofit in Mediterranean-climate urban areas. The two case study buildings are in Israel but are set in different local climatic conditions: one is in 'Sderot', in the south of the country, and one is in 'Migdal Hahemek', in the north of the country. The building typology is similar. The budget of the projects is around $14,000 per unit and includes interventions to the buildings' envelopes while tenants are living in them. Extensive research and analysis of the existing conditions have been done. The buildings' components, materials and envelope sections were mapped, examined and compared to relevant updated standards. Solar radiation simulations for the buildings in their surroundings during winter and summer days were performed. The energy rating of each unit, as well as the building as a whole, was calculated according to the Israeli Energy Code. The buildings' facades were documented with a thermal camera at different hours of the day. This information was superimposed with data about electricity use and thermal comfort collected from the residential units. Later in the process, similar tools were used to compare the effectiveness of different design options and to evaluate the chosen solutions. Both projects showed that the most problematic units were the ones below the roof and the ones on top of the elevated entrance floor (pilotis); old buildings tend to have poor insulation on those two horizontal surfaces, which require treatment. Different radiation levels and wall sections in the two projects influenced the design strategies. In the southern project, there was an extreme difference in solar radiation levels between the main façade and the back elevation, so it was eventually decided to invest in insulating the main south-west façade and the side façades, leaving the back north-east façade almost untouched. Lower levels of radiation in the northern project led to a different tactic: a combination of basic insulation on all façades, together with intensive treatment of areas with problematic thermal behavior. While poor execution of construction details and bad installation of windows in the northern project required replacing them all, in the southern project it was found to be more essential to shade the windows than to replace them. Although the buildings and the construction typology chosen for this study are similar, the research shows that there are large differences due to the location in different climatic zones and variation in local conditions. Therefore, in order to reach a systematic and cost-effective method of work, a more extensive catalogue database is needed.
Such a catalogue will enable public housing companies in the Mediterranean climate to promote massive projects of renovating existing old buildings, drawing on minimal analysis and planning processes.
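As an illustration of the kind of envelope calculation such a catalogue could standardise, the sketch below compares steady-state transmission losses of a roof slab before and after adding insulation. All U-values, layer thicknesses, areas and temperature differences are assumed placeholders, not figures from the two projects.

```python
# Illustrative sketch only: compares steady-state transmission losses of an
# envelope element before and after adding insulation. All numbers are
# assumed placeholders, not values taken from the study.

def u_value(layer_resistances, rsi=0.13, rse=0.04):
    """Overall U-value (W/m2K) from layer R-values plus surface resistances."""
    return 1.0 / (rsi + sum(layer_resistances) + rse)

def transmission_loss(u, area_m2, delta_t):
    """Steady-state heat flow (W) through an element of given area and delta-T."""
    return u * area_m2 * delta_t

# Hypothetical un-insulated roof slab vs. the same slab with 8 cm insulation
roof_existing = u_value([0.20 / 2.1])                # 20 cm concrete, lambda ~ 2.1 W/mK
roof_retrofit = u_value([0.20 / 2.1, 0.08 / 0.04])   # + 8 cm insulation, lambda ~ 0.04 W/mK

for label, u in [("existing", roof_existing), ("retrofit", roof_retrofit)]:
    q = transmission_loss(u, area_m2=60.0, delta_t=15.0)
    print(f"{label}: U = {u:.2f} W/m2K, loss = {q:.0f} W at dT = 15 K")
```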

Keywords: facade, low budget, residential, retrofit

Procedia PDF Downloads 174
23 The Perspective of British Politicians on English Identity: Qualitative Study of Parliamentary Debates, Blogs, and Interviews

Authors: Victoria Crynes

Abstract:

The question of England's role in Britain is increasingly relevant due to the ongoing rise in citizens identifying as English. Furthermore, the Brexit Referendum was predominantly supported by constituents identifying as English. Few politicians appear to comprehend how Englishness is politically manifested. Politics and the media have depicted English identity as a negative and extremist problem, an inaccurate representation that ignores the breadth of English-identifying citizens. This environment prompts the question, 'How are British Politicians Addressing the Modern English Identity Question?' Parliamentary debates, political blogs, and interviews are synthesized to establish a more coherent understanding of current political attitudes towards English identity, the perceived nature of English identity, and the political manifestation of English representation and governance. The parliamentary debates analyzed addressed the democratic structure of English governance through topics such as English votes for English laws, devolution, and the union. The blogs examined include party-based, multi-author blogs and independently authored blogs by politicians, which provide a dynamic and up-to-date representation of party and politician viewpoints. Lastly, fourteen semi-structured interviews with British politicians provide a nuanced perspective on how politicians conceptualize Englishness. Interviewee selection was based on three criteria: (i) Members of Parliament (MPs) known for discussing English identity politics, (ii) MPs of strongly English-identifying constituencies, and (iii) MPs with minimal English identity affiliation. Analysis of the parliamentary debates reveals that the discussion of English representation has gained little momentum. Many politicians fail to comprehend who the English are and why they desire greater representation, and believe that increased recognition of the English would disrupt the unity of the UK. These debates highlight the disconnect of parliament from the disenfranchised English towns. A failure to recognize the legitimacy of English identity politics prevents solution-focused debates from occurring. Political blogs demonstrate cross-party recognition of growing English disenfranchisement. The dissatisfaction with British politics derives from multiple factors, including economic decline, shifting community structures, and the delay of Brexit. The left-behind communities have seen little response from Westminster, which is often contrasted with the devolved and louder voices of the other UK nations. Many blogs recognize the need for a political response to the English and lament the lack of party-level initiatives. In comparison, the interviews depict an array of local-level initiatives reconnecting MPs with community members. Local efforts include town trips to Westminster, multi-cultural cooking classes, and English language courses. These efforts begin to rebuild positive local narratives, promote engagement across community sectors, and acknowledge English voices. These interviewees called for large-scale political action. Meanwhile, several interviewees denied the saliency of English identity; for them, the term held only extremist narratives. The multi-level analysis reveals continued uncertainty about Englishness within British politics, contrasted with increased recognition of its saliency by politicians. 
It is paramount that politicians increase discussions on English identity politics to avoid increased alienation of English citizens and to rebuild trust in the abilities of Westminster.

Keywords: British politics, contemporary identity politics and its impacts, English identity, English nationalism, identity politics

Procedia PDF Downloads 88
22 Holistic Urban Development: Incorporating Both Global and Local Optimization

Authors: Christoph Opperer

Abstract:

The rapid urbanization of modern societies and the need for sustainable urban development demand innovative solutions that meet both individual and collective needs while addressing environmental concerns. To address these challenges, this paper presents a study that explores the potential of spatial and energetic/ecological optimization to enhance the performance of urban settlements at both the architectural and the urban scale. The study focuses on the application of biological principles and self-organization processes in urban planning and design, aiming to achieve a balance between ecological performance, architectural quality, and individual living conditions. The research adopts a case study approach, focusing on a 10-hectare brownfield site in the south of Vienna. The site is surrounded by a small-scale built environment, which makes it an appropriate starting point for the research and design process. However, the selected urban form is not a prerequisite for the proposed design methodology, as the findings can be applied to various urban forms and densities. The methodology used in this research involves dividing the overall building mass and program into individual small housing units. A computational model has been developed to optimize the distribution of these units, considering factors such as solar exposure/radiation, views, privacy, proximity to sources of disturbance (such as noise), and minimal internal circulation areas. The model also ensures that existing vegetation and buildings on the site are preserved and incorporated into the optimization and design process. The model allows for simultaneous optimization at two scales, architectural and urban design, which have traditionally been addressed sequentially. This holistic design approach leads to individual and collective benefits, resulting in urban environments that foster a balance between ecology and architectural quality. The results of the optimization process demonstrate a seemingly random distribution of housing units that is, in fact, a densified hybrid between traditional garden settlements and allotment settlements. This urban typology was selected due to its compatibility with the surrounding urban context, although the presented methodology can be extended to other forms of urban development and density levels. The benefits of this approach are threefold. First, it allows for the determination of an ideal housing distribution that optimizes solar radiation for each building density level, essentially extending the concept of sustainable building to the urban scale. Second, the method enhances living quality by considering the orientation and positioning of individual functions within each housing unit, achieving optimal views and privacy. Third, the algorithm's flexibility and robustness facilitate the efficient implementation of urban development with various stakeholders, architects, and construction companies without compromising its performance. The core of the research is the application of global and local optimization strategies to create efficient design solutions. By considering both the performance of individual units and the collective performance of the urban aggregation, we ensure an optimal balance between private and communal benefits. By promoting a holistic understanding of urban ecology and integrating advanced optimization strategies, our methodology offers a sustainable and efficient solution to the challenges of modern urbanization.
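A minimal sketch of the kind of unit-distribution step described above is given below. It greedily places units on a site grid using assumed proxy scores for solar exposure, distance from a noise source and mutual privacy; the grid size, weights and noise-source location are invented for illustration and do not reflect the authors' computational model.

```python
# Minimal illustrative sketch (not the authors' model): greedy placement of
# housing units on a grid, scoring candidate cells by assumed proxies for
# solar exposure, distance from a noise source, and mutual privacy.
import math

GRID = 20                      # 20 x 20 site grid (assumed)
NOISE_SRC = (0, 10)            # assumed location of a disturbance source
N_UNITS = 30

def cell_score(cell, placed, w_sun=1.0, w_noise=0.5, w_priv=0.8):
    x, y = cell
    sun = y / GRID                                  # proxy: southern rows get more sun
    noise = math.dist(cell, NOISE_SRC) / GRID       # farther from the noise source is better
    priv = min((math.dist(cell, p) for p in placed), default=GRID) / GRID
    return w_sun * sun + w_noise * noise + w_priv * priv

placed = []
free = {(x, y) for x in range(GRID) for y in range(GRID)}
for _ in range(N_UNITS):
    best = max(free, key=lambda c: cell_score(c, placed))   # locally best free cell
    placed.append(best)
    free.remove(best)

print(f"placed {len(placed)} units, first five: {placed[:5]}")
```

In a full model this greedy local step would sit inside a global search loop (e.g. an evolutionary or gradient-free optimizer) so that unit-level and settlement-level objectives are balanced simultaneously, as the abstract describes.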

Keywords: sustainable development, self-organization, ecological performance, solar radiation and exposure, daylight, visibility, accessibility, spatial distribution, local and global optimization

Procedia PDF Downloads 35
21 Clinico-pathological Study of Xeroderma Pigmentosa: A Case Series of Eight Cases

Authors: Kakali Roy, Sahana P. Raju, Subhra Dhar, Sandipan Dhar

Abstract:

Introduction: Xeroderma pigmentosa (XP) is a rare inherited (autosomal recessive) disease resulting from impairment of DNA repair, specifically of the recognition and repair of ultraviolet radiation (UVR)-induced DNA damage in the nucleotide excision repair pathway. This results in increased photosensitivity, UVR-induced damage to the skin and eyes, increased susceptibility to skin and ocular cancer, and progressive neurodegeneration in some patients. XP is present worldwide, with a higher incidence in areas with frequent consanguinity. Because the disease is extremely rare, there is limited literature on XP and its associated complications. Here, the clinico-pathological experience (spectrum of clinical presentation, histopathological findings of malignant skin lesions, and progression) of managing 8 cases of XP is presented. Methodology: A retrospective study was conducted in a pediatric tertiary care hospital in eastern India during a ten-year period from 2013 to 2022. A clinical diagnosis was made based on severe sunburn or premature photo-aging and/or the onset of cutaneous malignancies at an early age (first decade) against a background of consanguinity and an autosomal recessive inheritance pattern in the family. Results: The mean age at presentation was 1.2 years (range 7 months to 3 years), and three children presented during infancy. The male to female ratio was 5:3, and all were born of consanguineous marriage. They presented with dermatological manifestations (100%), followed by ophthalmic (75%) and/or neurological symptoms (25%). Patients had normal skin at birth but soon developed extreme sensitivity to UVR in the form of exaggerated sun tanning, burning, and blistering on minimal sun exposure, followed by abnormal skin pigmentation such as freckles and lentiginosis. Subsequently, over time, there was progressive xerosis, atrophy, wrinkling, and poikiloderma. Six patients had varying degrees of ocular involvement, and three of them had severe manifestations, including madarosis, tylosis, ectropion, lagophthalmos, phthisis bulbi, clouding and scarring of the cornea with complete or partial loss of vision, and ophthalmic malignancies. 50% of cases (n=4) had skin and ocular pre-malignant (actinic keratosis) and malignant lesions, including melanoma and non-melanoma skin cancer (NMSC) such as squamous cell carcinoma (SCC) and basal cell carcinoma (BCC), in early childhood. One patient had the simultaneous occurrence of multiple malignancies (SCC, BCC, and melanoma). Subnormal intelligence was noted as a neurological feature, and none had sensorineural hearing loss, microcephaly, neuroregression, or neurological deficit. All patients were managed by a multidisciplinary team of pediatricians, dermatologists, ophthalmologists, neurologists and psychiatrists. Conclusion: Although to date there is no complete cure for XP and the disease is ultimately fatal, increased awareness, early diagnosis followed by persistent rigorous protection from UVR, and regular screening for early detection of malignancies, along with psychological support, can drastically improve patients' quality of life and life expectancy. Further research is required on formulating the optimal management of XP, specifically the role and possibilities of gene therapy in XP.

Keywords: childhood malignancies, dermato-pathological findings, eastern India, Xeroderma pigmentosa

Procedia PDF Downloads 48
20 Xen45 Gel Implant in Open Angle Glaucoma: Efficacy, Safety and Predictors of Outcome

Authors: Fossarello Maurizio, Mattana Giorgio, Tatti Filippo

Abstract:

The most widely performed surgical procedure in Open-Angle Glaucoma (OAG) is trabeculectomy. Although this filtering procedure is extremely effective, surgical failure and postoperative complications are reported. Due to its invasive nature and possible complications, trabeculectomy is usually reserved, in practice, for patients who are refractory to medical and laser therapy. Recently, a number of micro-invasive surgical techniques (MIGS: Micro-Invasive Glaucoma Surgery) have been introduced into clinical practice. They meet the criteria of a micro-incisional approach, minimal tissue damage, short surgical time, reliable IOP reduction, an extremely high safety profile and rapid post-operative recovery. The Xen45 Gel Implant (Allergan, Dublin, Ireland) is one of the MIGS alternatives and consists of a porcine gelatin tube designed to create an aqueous flow from the anterior chamber to the subconjunctival space, bypassing the resistance of the trabecular meshwork. In this study, we report the results of this technique as a favorable option in the treatment of OAG in terms of efficacy and safety, either alone or in combination with cataract surgery. This is a retrospective, single-center study conducted in consecutive OAG patients who underwent Xen45 Gel Stent implantation alone or in combination with phacoemulsification from October 2018 to June 2019. The primary endpoint of the study was to evaluate the reduction of both IOP and the number of antiglaucoma medications at 12 months. The secondary endpoint was to correlate filtering bleb morphology, evaluated by means of anterior segment OCT, with efficacy in IOP lowering and any requirement for further procedures. Data were recorded in Microsoft Excel, and the study analysis was performed using Microsoft Excel and SPSS (IBM). Mean values with standard deviations were calculated for IOP and the number of antiglaucoma medications at all time points. The Kolmogorov-Smirnov test showed that IOP followed a normal distribution at all time points; therefore, the paired Student's t-test was used to compare baseline and postoperative mean IOP. The correlation between postoperative Day 1 IOP and Month 12 IOP was evaluated using the Pearson coefficient. Thirty-six eyes of 36 patients were evaluated. Compared to baseline, mean IOP and the mean number of antiglaucoma medications significantly decreased from 27.33 ± 7.67 mmHg to 16.3 ± 2.89 mmHg (a 38.8% reduction) and from 2.64 ± 1.39 to 0.42 ± 0.8 (an 84% reduction), respectively, at 12 months after surgery (both p < 0.001). According to bleb morphology, eyes were divided into a uniform group (n=8, 22.2%), a subconjunctival separation group (n=5, 13.9%), a microcystic multiform group (n=9, 25%) and a multiple internal layer group (n=14, 38.9%). Compared to baseline, there was no significant difference in IOP between the 4 groups at the month 12 follow-up visit. Adverse events included bleb function decrease (n=14, 38.9%), hypotony (n=8, 22.2%) and choroidal detachment (n=2, 5.6%). All eyes presenting bleb flattening underwent needling and MMC injection. The highest percentage of patients requiring secondary needling was in the uniform group (75%), with a significant difference between the groups (p=0.03). The Xen45 gel stent, either alone or in combination with phacoemulsification, provided a significant lowering of both IOP and medical antiglaucoma treatment and an elevated safety profile.
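The statistical comparisons described above can be reproduced in outline with standard tools. The sketch below runs a normality check, a paired t-test on baseline versus 12-month IOP, and a Pearson correlation on synthetic values; the numbers are generated for illustration only and are not the study data, and SciPy is used here simply as a stand-in for the SPSS/Excel workflow.

```python
# Illustrative sketch of the reported statistical workflow (normality check,
# paired t-test, Pearson correlation) on made-up IOP values, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline_iop = rng.normal(27.3, 7.7, 36)   # assumed mmHg values for 36 eyes
month12_iop = rng.normal(16.3, 2.9, 36)
day1_iop = month12_iop + rng.normal(0, 2, 36)

# Normality of the baseline distribution (Kolmogorov-Smirnov vs. a fitted normal)
print(stats.kstest(baseline_iop, "norm",
                   args=(baseline_iop.mean(), baseline_iop.std())))

# Paired Student's t-test: baseline vs. 12-month IOP
t, p = stats.ttest_rel(baseline_iop, month12_iop)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Pearson correlation between Day 1 and Month 12 IOP
r, p_r = stats.pearsonr(day1_iop, month12_iop)
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")
```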

Keywords: anterior segment OCT, bleb morphology, micro-invasive glaucoma surgery, open angle glaucoma, Xen45 gel implant

Procedia PDF Downloads 101
19 Preliminary Results on a Study of Antimicrobial Susceptibility Testing of Bacillus anthracis Strains Isolated during Anthrax Outbreaks in Italy from 2001 to 2017

Authors: Viviana Manzulli, Luigina Serrecchia, Adelia Donatiello, Valeria Rondinone, Sabine Zange, Alina Tscherne, Antonio Parisi, Antonio Fasanella

Abstract:

Anthrax is a zoonotic disease that affects a wide range of animal species (primarily ruminant herbivores) and can be transmitted to humans through the consumption or handling of contaminated animal products. The etiological agent B. anthracis is able to survive unfavorable environmental conditions by forming endospores, which remain viable in the soil for many decades. Furthermore, B. anthracis is considered one of the agents most feared for potential misuse as a biological weapon, and the importance of the disease and its treatment in humans has been underscored since the bioterrorism events in the United States in 2001. Due to the often fatal outcome of human cases, antimicrobial susceptibility testing plays an especially important role in the management of anthrax infections. In Italy, animal anthrax is endemic (predominantly found in the southern regions and on the islands) and is characterized by sporadic outbreaks occurring mainly during summer. Between 2012 and 2017, single human cases of cutaneous anthrax occurred. In this study, 90 diverse strains of B. anthracis, isolated in Italy from 2001 to 2017, were screened for their susceptibility to sixteen clinically relevant antimicrobial agents using the broth microdilution method. The B. anthracis strains selected for this study belong to the strain collection stored at the Anthrax Reference Institute of Italy, located inside the Istituto Zooprofilattico Sperimentale of Puglia and Basilicata. The strains were isolated at different time points and places from various matrices (human, animal and environmental), and all strains are representative of over fifty distinct MLVA-31 genotypes. The following antibiotics were used for testing: gentamicin, ceftriaxone, streptomycin, penicillin G, clindamycin, chloramphenicol, vancomycin, linezolid, cefotaxime, tetracycline, erythromycin, rifampin, amoxicillin, ciprofloxacin, doxycycline and trimethoprim. A standard concentration of each antibiotic was prepared in a specific diluent and then twofold serially diluted. Each well therefore contained: a bacterial suspension of 1–5×10⁴ CFU/mL in Mueller-Hinton Broth (MHB), the antibiotic to be tested at a known concentration, and resazurin, an indicator of cell growth. After overnight incubation at 37°C, the wells were screened for color changes caused by the resazurin: a change from purple to pink/colorless indicated cell growth. The lowest concentration of antibiotic that prevented growth represented the minimal inhibitory concentration (MIC). This study suggests that B. anthracis remains susceptible in vitro to many antibiotics in addition to doxycycline (MICs ≤ 0.03 µg/ml), ciprofloxacin (MICs ≤ 0.03 µg/ml) and penicillin G (MICs ≤ 0.06 µg/ml), which are recommended by the CDC for the treatment of human cases and for prophylactic use after exposure to the spores. In fact, the good activity of gentamicin (MICs ≤ 0.25 µg/ml), streptomycin (MICs ≤ 1 µg/ml), clindamycin (MICs ≤ 0.125 µg/ml), chloramphenicol (MICs ≤ 4 µg/ml), vancomycin (MICs ≤ 2 µg/ml), linezolid (MICs ≤ 2 µg/ml), tetracycline (MICs ≤ 0.125 µg/ml), erythromycin (MICs ≤ 0.25 µg/ml), rifampin (MICs ≤ 0.25 µg/ml) and amoxicillin (MICs ≤ 0.06 µg/ml) towards all tested B. anthracis strains demonstrates appropriate alternative choices for prophylaxis and/or treatment. All tested B. anthracis strains showed intermediate susceptibility to the cephalosporins (MICs ≥ 16 µg/ml) and resistance to trimethoprim (MICs ≥ 128 µg/ml).
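As a simple illustration of how an MIC is read off a twofold dilution series, the sketch below takes a descending concentration series and per-well growth flags and returns the lowest concentration that prevented growth. The concentrations and growth pattern are invented examples, not results from the tested strains.

```python
# Minimal sketch of reading a broth-microdilution series: the MIC is taken as
# the lowest antibiotic concentration whose well shows no growth (purple
# resazurin). Concentrations and growth flags below are invented examples.

def mic_from_series(concentrations_ug_ml, growth_flags):
    """Return the MIC from paired lists of concentrations and growth flags (True = growth)."""
    for conc, grew in sorted(zip(concentrations_ug_ml, growth_flags)):
        if not grew:
            return conc          # lowest concentration with no visible growth
    return None                  # growth in every well: MIC above the tested range

# Twofold serial dilution from 16 µg/ml downwards, with growth only in the 3 lowest wells
concs = [16 / (2 ** i) for i in range(10)]
growth = [False] * 7 + [True] * 3
print(f"MIC = {mic_from_series(concs, growth)} µg/ml")
```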

Keywords: Bacillus anthracis, antibiotic susceptibility, treatment, minimum inhibitory concentration

Procedia PDF Downloads 186
18 A Systemic Review and Comparison of Non-Isolated Bi-Directional Converters

Authors: Rahil Bahrami, Kaveh Ashenayi

Abstract:

This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters (BDCs). The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes: Inverting, Non-Inverting, and Interleaved. Each category is characterized by its unique operational characteristics and benefits. Furthermore, a practical comparison is conducted by evaluating the simulation results for each bi-directional converter. BDCs can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective but have limitations such as limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly but are essential for automotive designs where safety is crucial. This paper focuses on non-isolated systems and discusses their classification based on several criteria. Common factors used for classification include topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary depending on contextual, application, or system-specific requirements. The paper presents a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved. In the inverting category, converters produce an output voltage with reversed polarity compared to the input voltage, achieved through specific circuit configurations and control strategies; this is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters that maintain the same voltage polarity, which is useful in scenarios such as battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances the comprehension and analysis of non-isolated bi-directional DC-DC converters. The findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy conversion requirements. The proposed classification framework and simulation-based assessment collectively enhance the comprehension of non-isolated bi-directional DC-DC converters, fostering advancements in efficient power management and utilization. The simulation process involves the use of PSIM to model and simulate non-isolated bi-directional converters from both the inverting and non-inverting categories. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insights into the dynamic response, energy efficiency, output stability, and overall precision of the converters. 
The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals.
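The performance indicators listed above can be extracted from a simulated output waveform with a few lines of post-processing. The sketch below computes rise time, ripple factor and maximum steady-state error for a synthetic converter output voltage; the waveform, reference voltage and settling window are assumptions for illustration and do not come from the PSIM models.

```python
# Illustrative post-processing sketch (not the PSIM models themselves):
# computes rise time, output ripple factor and maximum steady-state error
# from a synthetic converter output-voltage waveform.
import numpy as np

t = np.linspace(0, 0.02, 20000)                       # 20 ms window, assumed
v_ref = 48.0                                          # assumed output reference (V)
v_out = v_ref * (1 - np.exp(-t / 0.002)) + 0.3 * np.sin(2 * np.pi * 50e3 * t)

def rise_time(t, v, target):
    """Time taken to go from 10% to 90% of the target value."""
    t10 = t[np.argmax(v >= 0.1 * target)]
    t90 = t[np.argmax(v >= 0.9 * target)]
    return t90 - t10

steady = v_out[t > 0.015]                             # assumed settled region
ripple_factor = (steady.max() - steady.min()) / steady.mean()
max_error = np.abs(steady - v_ref).max()

print(f"rise time = {rise_time(t, v_out, v_ref) * 1e3:.2f} ms")
print(f"ripple factor = {ripple_factor:.3f}, max error = {max_error:.2f} V")
```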

Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion

Procedia PDF Downloads 46
17 Development of a Mixed-Reality Hands-Free Teleoperated Robotic Arm for Construction Applications

Authors: Damith Tennakoon, Mojgan Jadidi, Seyedreza Razavialavi

Abstract:

With recent advancements in robotic automation, from self-driving cars to autonomous quadruped robots, one industry that has remained stagnant is construction. The methodologies used on a modern-day construction site consist of arduous physical labor and the use of heavy machinery, and they have not changed over the past few decades. The dangers of a modern-day construction site affect the health and safety of workers, who perform tasks such as lifting and moving heavy objects and must maintain unhealthy postures to complete repetitive tasks such as painting, installing drywall, and laying bricks. Further, training for heavy machinery is costly and time-consuming because of the machines' complex control inputs. The main focus of this research is using immersive wearable technology and robotic arms to perform the complex and intricate skills of modern-day construction workers while alleviating the physical labor required for their day-to-day tasks. The methodology consists of mounting a stereo vision camera, the ZED Mini by Stereolabs, onto the end effector of an industrial-grade robotic arm and streaming the video feed into the Meta Quest 2 (Quest 2) virtual reality (VR) head-mounted display (HMD). Due to the nature of stereo vision and the similar fields of view of the stereo camera and the Quest 2, human vision can be replicated on the HMD. The main advantage this type of camera provides over a traditional monocular camera is that it gives the user wearing the HMD a sense of the depth of the camera scene, specifically a first-person view of the robotic arm's end effector. Using the built-in cameras of the Quest 2 HMD, open-source hand-tracking libraries from OpenXR can be used to track the user's hands in real time. A mixed-reality (XR) Unity application can be developed to map the operator's physical hand motions to the end effector of the robotic arm. Implementing gesture controls enables the user to move the robotic arm and control its end effector from a distant location by moving their arm and providing gesture inputs. Given that the end effector of the robotic arm is a gripper tool, closing and opening the operator's hand translates to the gripper grabbing or releasing an object. This human-robot interaction approach provides many benefits within the construction industry. First, the operator's safety is increased substantially, as they can be away from the site location while still being able to perform complex tasks such as moving heavy objects from place to place or repetitive tasks such as painting walls and laying bricks. The immersive interface enables precise robotic arm control and requires minimal training and knowledge of robotic arm manipulation, which lowers the cost of operator training. This human-robot interface can be extended to many applications, such as nuclear accident and waste cleanup, underwater repairs, deep space missions, and manufacturing and fabrication within factories. Further, the robotic arm can be mounted onto existing mobile robots to provide access to hazardous environments, including power plants, burning buildings, and high-altitude repair sites.
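A conceptual sketch of the gesture-mapping step is shown below: hand closure drives a gripper open/close command, and the change in tracked wrist position drives a Cartesian end-effector velocity. The data structures, thresholds and gains are hypothetical illustrations and are not the OpenXR or Unity interfaces used in the project.

```python
# Conceptual sketch of the gesture-mapping step only (hypothetical data
# structures, not the OpenXR or Unity APIs): hand closure drives the gripper,
# and the change in wrist position drives the end-effector velocity.
from dataclasses import dataclass

@dataclass
class HandSample:
    wrist_xyz: tuple          # tracked wrist position in metres (assumed frame)
    closure: float            # 0.0 = open hand, 1.0 = fist

def gripper_command(closure, threshold=0.6):
    """Close the gripper once the hand closes past the threshold."""
    return "CLOSE" if closure >= threshold else "OPEN"

def end_effector_velocity(prev: HandSample, curr: HandSample, gain=1.0, dt=1 / 60):
    """Scaled wrist displacement per frame mapped to a commanded Cartesian velocity."""
    return tuple(gain * (c - p) / dt for p, c in zip(prev.wrist_xyz, curr.wrist_xyz))

prev = HandSample((0.10, 0.00, 0.30), 0.2)
curr = HandSample((0.12, 0.01, 0.30), 0.8)
print(gripper_command(curr.closure), end_effector_velocity(prev, curr))
```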

Keywords: construction automation, human-robot interaction, hand-tracking, mixed reality

Procedia PDF Downloads 38
16 Framework to Organize Community-Led Project-Based Learning at a Massive Scale of 900 Indian Villages

Authors: Ayesha Selwyn, Annapoorni Chandrashekar, Kumar Ashwarya, Nishant Baghel

Abstract:

Project-based learning (PBL) activities are typically implemented in technology-enabled schools by highly trained teachers. In rural India, students have limited access to technology and quality education. Implementing typical PBL activities is challenging. This study details how Pratham Education Foundation’s Hybrid Learning model was used to implement two PBL activities related to music in 900 remote Indian villages with 46,000 students aged 10-14. The activities were completed by 69% of groups that submitted a total of 15,000 videos (completed projects). Pratham’s H-Learning model reaches 100,000 students aged 3-14 in 900 Indian villages. The community-driven model engages students in 20,000 self-organized groups outside of school. The students are guided by 6,000 youth volunteers and 100 facilitators. The students partake in learning activities across subjects with the support of community stakeholders and offline digital content on shared Android tablets. A training and implementation toolkit for PBL activities is designed by subject experts. This toolkit is essential in ensuring efficient implementation of activities as facilitators aren’t highly skilled and have limited access to training resources. The toolkit details the activity at three levels of student engagement - enrollment, participation, and completion. The subject experts train project leaders and facilitators who train youth volunteers. Volunteers need to be trained on how to execute the activity and guide students. The training is focused on building the volunteers’ capacity to enable students to solve problems, rather than developing the volunteers’ subject-related knowledge. This structure ensures that continuous intervention of subject matter experts isn’t required, and the onus of judging creativity skills is put on community members. 46,000 students in the H-Learning program were engaged in two PBL activities related to Music from April-June 2019. For one activity, students had to conduct a “musical survey” in their village by designing a survey and shooting and editing a video. This activity aimed to develop students’ information retrieval, data gathering, teamwork, communication, project management, and creativity skills. It also aimed to identify talent and document local folk music. The second activity, “Pratham Idol”, was a singing competition. Students participated in performing, producing, and editing videos. This activity aimed to develop students’ teamwork and creative skills and give students a creative outlet. Students showcased their completed projects at village fairs wherein a panel of community members evaluated the videos. The shortlisted videos from all villages were further evaluated by experts who identified students and adults to participate in advanced music workshops. The H-Learning framework enables students in low resource settings to engage in PBL and develop relevant skills by leveraging community support and using video creation as a tool. In rural India, students do not have access to high-quality education or infrastructure. Therefore designing activities that can be implemented by community members after limited training is essential. The subject experts have minimal intervention once the activity is initiated, which significantly reduces the cost of implementation and allows the activity to be implemented at a massive scale.

Keywords: community supported learning, project-based learning, self-organized learning, education technology

Procedia PDF Downloads 149
15 Experiences of Discrimination and Coping Strategies of Second Generation Academics during the Career-Entry Phase in Austria

Authors: R. Verwiebe, L. Seewann, M. Wolf

Abstract:

This presentation addresses marginalization and discrimination as experienced by young academics with a migrant background in the Austrian labor market. Focusing on second generation academics of Central Eastern European and Turkish descent, we explore two major issues. First, we ask whether their career entry and everyday professional life entail origin-specific barriers. Having completed their education in Austria, they possess the very competences whose absence is typically drawn upon to explain discrimination: excellent linguistic skills, accredited high-level training, and networks. Second, we concentrate on how this group reacts to discrimination and overcomes experiences of marginalization. To answer these questions, we utilize recent sociological and social psychological theories that focus on the diversity of individual experiences. This distinguishes us from a long tradition of research that has dealt with the motives that inform discrimination but has less often considered the effects on those concerned. Similarly, applied coping strategies have less often been investigated, though they may provide unique insights into current problematic issues. Building upon the present literature, we follow recent discrimination research incorporating the concepts of 'multiple discrimination', 'subtle discrimination', and 'visual social markers'. Twenty-one problem-centered interviews form the empirical foundation of this study. The interviewees completed their entire educational career in Austria, graduated from different universities and disciplines, and are working in their first post-graduate jobs (career-entry phase). In our analysis, we combined thematic charting with a coding method. The results emanating from our empirical material indicated a variety of discrimination experiences, ranging from barely perceptible disadvantages to directly articulated and overt marginalization. The spectrum of experiences covered stereotypical suppositions at job interviews, the disavowal of competencies, symbolic or social exclusion by new colleagues, restricted professional participation (e.g. customer contact) and non-recruitment due to religious or ethnic markers (e.g. headscarves). In these experiences, the role of the academics' education level, networks, or competences seemed to be minimal, as negative prejudice on the basis of visible 'social markers' operated ex ante. The coping strategies identified in overcoming such barriers are: an increased emphasis on effort, avoidance of potentially marginalizing situations, direct resistance (mostly in the form of verbal opposition) and dismissal of negative experiences by ignoring or ironizing the situation. In some cases, the academics drew on their specific competences, such as an intellectual approach of studying specialist literature, a focus on their intercultural competences, or planning to migrate back to their parents' country of origin. Our analysis further suggests a distinction between reactive coping strategies (acting on and responding to experienced discrimination) and preventative ones (applied to obviate discrimination). In light of our results, we would like to stress that the tension between educational and professional success experienced by academics with a migrant background, together with the barriers and marginalization they continue to face, is an essential issue to be introduced into socio-political discourse. 
It seems imperative to publicly accentuate the growing social, political and economic significance of this group, their educational aspirations, as well as their experiences of achievement and difficulties.

Keywords: coping strategies, discrimination, labor market, second generation university graduates

Procedia PDF Downloads 194
14 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil

Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes

Abstract:

Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they become disasters because of the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). The geophysical surveys analyze the soil properties with minimal impact on its initial structure. Statistical analyses and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models combine slope stability models with hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a given study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS was written in the Fortran programming language and couples a hydrological model, based on the Richards equation, with a stability model based on the limit equilibrium principle. Therefore, the aim of this work is to model the slope stability of Campos do Jordão with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of Sao Paulo State, in the Mantiqueira Mountains, and has a historical landslide record. During the fieldwork, soil samples were collected and the VES method was applied. These procedures provided the soil properties, which were used as input data in the TRIGRS model. The hydrological data (infiltration rate and initial water table height) and the rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the slope declivity. The analyzed period is from March 6th to March 8th of 2017. As a result, the TRIGRS model calculates the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall event, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, demonstrating its efficiency in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which an accumulated rainfall higher than 80 mm/m² in 72 hours might trigger landslides on urban and natural slopes. The geotechnical and geophysical methods are shown to be very useful for identifying the soil properties and providing the geological characteristics of the area. Therefore, combining geotechnical and geophysical methods for soil characterization with the modeling of landslide-susceptible areas using TRIGRS is useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model with weather forecasts to prevent disasters on urban slopes.
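The limit-equilibrium core of a TRIGRS-style analysis reduces, for an infinite slope, to a closed-form Factor of Safety in which rising pressure head lowers stability. The sketch below evaluates that expression for assumed soil parameters; the values are illustrative and are not the Campos do Jordão field data.

```python
# Minimal sketch of the infinite-slope limit-equilibrium expression used in
# TRIGRS-style analyses: FS = tan(phi)/tan(delta) + [c - psi*gamma_w*tan(phi)]
#                              / (gamma_s * Z * sin(delta) * cos(delta)).
# The soil parameters below are assumed examples, not the study's field data.
import math

def factor_of_safety(slope_deg, depth_m, cohesion_kpa, phi_deg,
                     unit_weight_kn_m3, pressure_head_m, gamma_w=9.81):
    d = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(d)
    cohesive = (cohesion_kpa - pressure_head_m * gamma_w * math.tan(phi)) / (
        unit_weight_kn_m3 * depth_m * math.sin(d) * math.cos(d))
    return frictional + cohesive

# Dry vs. wetted slope: rising pore-pressure head lowers the Factor of Safety
for psi in (0.0, 0.5, 1.0):   # pressure head in metres
    fs = factor_of_safety(35, 2.0, 8.0, 30, 18.0, psi)
    print(f"pressure head {psi:.1f} m -> FS = {fs:.2f}")
```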

Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey

Procedia PDF Downloads 142
13 A Novel Paradigm in the Management of Pancreatic Trauma

Authors: E. Tan, O. McKay, T. Clarnette, D. Croagh

Abstract:

Background: Historically in pancreatic trauma, complete disruption of the main pancreatic duct (MPD), classified as Grade IV-V by the American Association for the Surgery of Trauma (AAST), necessitated a damage-control laparotomy. This was to avoid mortality and shorten the timeframe for diet upgrade, and hence the length of stay. However, acute pancreatic resection entailed complications of pancreatic fistulas and leaks. With the advance of imaging-guided interventions, non-operative management options such as percutaneous and transpapillary drainage of traumatic peripancreatic collections have been trialled favourably. The aim of this case series is to evaluate the efficacy of endoscopic ultrasound-guided (EUS) transmural drainage in managing traumatic peripancreatic collections as a less invasive alternative to traditional approaches. This study also highlights the importance of anatomical knowledge regarding the common location of peripancreatic collections in the lesser sac, the relationship of the pancreas to adjacent organs, and the formation of the main pancreatic duct with regard to the feasibility of therapeutic internal drainage. Methodology: A retrospective case series was conducted at a single tertiary endoscopy unit, analysing patient data over a 5-year period. The inclusion criteria were age 5 to 80 years, traumatic pancreatic injury of at least Grade IV, and haemodynamic stability. The exclusion criteria were previous episodes of pancreatitis or abdominal trauma. Patient demographics and clinicopathological characteristics were retrospectively collected. Results: The study identified 7 patients with traumatic pancreatic injuries managed from 2018-2022, aged 5 to 34 years, the majority female (n=5). The most common mechanism of trauma was a handlebar injury (n=4). Diagnosis was confirmed by an elevated lipase and computed tomography (CT) confirmation of proximal pancreatic transection with MPD disruption. All patients sustained an isolated single-organ Grade IV pancreatic injury, except cases 4 and 5, who had other Grade I intra-abdominal visceral injuries. Six patients underwent early ERCP-guided transpapillary drainage; in one, pancreatic duct stent insertion was unsuccessful (case 1), and one had a complication of stent migration (case 2). Surveillance imaging post ERCP showed that the stents were unable to bridge the disrupted duct, and symptomatic collections developed with an average size of 9.9 cm. Hence, all patients proceeded to EUS-guided transmural drainage, with 2/7 patients requiring repeat drainage (cases 6 and 7). The majority (n=6) had a cystogastrostomy, whilst one (case 6) had a cystoenterostomy because the peripancreatic collection lay adjacent to the duodenum rather than the stomach. However, case 6 subsequently required repeat EUS-guided drainage with cystogastrostomy for ongoing collections. All patients therefore avoided an initial laparotomy, with an average index length of stay of 11.7 days. Successful transmural drainage was demonstrated, with no long-term complications of pancreatic insufficiency, except for one patient who required a distal pancreatectomy at 2-year follow-up due to chronic pain. Conclusion: The early results of this series support EUS-guided transmural drainage as a viable management option for traumatic peripancreatic collections, showcasing successful outcomes, minimal complications, and long-term efficacy in avoiding surgical interventions. 
More studies are required before this procedure can be adopted as a less invasive, lower-complication management approach for traumatic peripancreatic collections.

Keywords: endoscopic ultrasound, cystogastrostomy, pancreatic trauma, traumatic peripancreatic collection, transmural drainage

Procedia PDF Downloads 15
12 Working at the Interface of Health and Criminal Justice: An Interpretative Phenomenological Analysis Exploration of the Experiences of Liaison and Diversion Nurses – Emerging Findings

Authors: Sithandazile Masuku

Abstract:

Introduction: Public health approaches to offender mental health are driven by international policies and frameworks in response to the disproportionately large representation of people with mental health problems within the offender pathway compared to the general population. Public health service innovations include mental health courts in the US, restorative models in Singapore, and liaison and diversion services in Australia, the UK, and some other European countries. Mental health nurses are at the forefront of offender health service innovations. In the UK context, police custody has been identified as an early point within the offender pathway where nurses can improve outcomes by offering assessments and sharing information with criminal justice partners. This scope of nursing practice has introduced challenges related to the skills and support required for nurses working at the interface of health and the criminal justice system. Parallel literature exploring the experiences of nurses working in forensic settings suggests the presence of compassion fatigue, burnout and vicarious trauma, which may pose a risk of harm to the nurses in these settings. Published research explores mainly service-level outcomes, including the monitoring of figures indicative of a reduction in offending behavior. There is minimal research exploring the experiences of liaison and diversion nurses, who are situated away from a supportive clinical environment and engaged in complex autonomous decision-making. Aim: This paper will share qualitative findings (in progress) from a PhD study that aims to explore the experiences of liaison and diversion nurses in one service in the UK. Methodology: This is a qualitative interview study conducted using an Interpretative Phenomenological Analysis to gain an in-depth analysis of lived experiences. Methods: A purposive sampling technique was used to recruit n=8 mental health nurses, registered with the UK professional body, the Nursing and Midwifery Council, from one UK liaison and diversion service. All participants were interviewed online via video call using a semi-structured interview topic guide. Data were recorded, transcribed verbatim, and analysed using the seven steps of the Interpretative Phenomenological Analysis data analysis method. Emerging Findings: Analysis to date has identified pertinent themes: • Difficulties of meaning-making for nurses because of the complexity of their boundary-spanning role. • Emotional burden experienced in a highly emotive and fast-changing environment. • Stress and difficulties with role identity impacting on individual nurses' ability to be resilient. • Challenges to wellbeing related to a sense of isolation when making complex decisions. Conclusion: Emerging findings have highlighted the lived experiences of nurses working in liaison and diversion as challenging. The nature of the custody environment has an impact on role identity and decision-making. Nurses left feeling isolated and unsupported are less resilient and may go on to experience compassion fatigue. The findings from this study thus far point to a need to connect nurses working in these boundary-spanning roles with a supportive infrastructure in which the complexity of their role is acknowledged and they can be connected with a health agenda. In doing this, the nurses would be protected from harm, and the likelihood of sustained positive outcomes for service users would be optimised.

Keywords: liaison and diversion, nurse experiences, offender health, staff wellbeing

Procedia PDF Downloads 103
11 Delivering Safer Clinical Trials; Using Electronic Healthcare Records (EHR) to Monitor, Detect and Report Adverse Events in Clinical Trials

Authors: Claire Williams

Abstract:

Randomised controlled trials (RCTs) of efficacy are still perceived as the gold standard for the generation of evidence, and whilst advances in data collection methods are well developed, this progress has not been matched in the reporting of adverse events (AEs). Assessment and reporting of AEs in clinical trials are fraught with human error and inefficiency and are extremely time- and resource-intensive. Recent research into the quality of AE reporting during clinical trials concluded that it is substandard and inconsistent. Investigators commonly send sponsors reports that are incorrectly categorised and lack critical information, which can complicate the detection of valid safety signals. In our presentation, we will describe an electronic data capture system designed to support clinical trial processes by reducing the resource burden on investigators, improving overall trial efficiencies, and making trials safer for patients. This proprietary technology was developed using expertise proven in the delivery of the world's first prospective, phase 3b real-world trial, 'The Salford Lung Study', which enabled robust safety monitoring and reporting processes to be accomplished by remote monitoring of patients' EHRs. This technology enables safety alerts that are pre-defined by the protocol to be detected from the data extracted directly from the patient's EHR. Based on study-specific criteria, which are created from the standard definition of a serious adverse event (SAE) and the safety profile of the medicinal product, the system alerts the investigator or study team to the safety alert. Each safety alert requires a clinical review by the investigator or delegate; examples of the types of alerts include hospital admission, death, hepatotoxicity, neutropenia, and acute renal failure. This is achieved in near real-time; safety alerts can be reviewed along with any additional information available to determine whether they meet the protocol-defined criteria for reporting or withdrawal. This active surveillance technology helps reduce the resource burden of the more traditional methods of AE detection for investigators and study teams and can help eliminate reporting bias. Integration of multiple healthcare data sources enables much more complete and accurate safety data to be collected as part of a trial and can also provide an opportunity to evaluate a drug's safety profile long-term, in post-trial follow-up. By utilising this robust and proven method for safety monitoring and reporting, much higher-risk patient cohorts can be enrolled into trials, thus promoting inclusivity and diversity. Broadening eligibility criteria and adopting more inclusive recruitment practices in the later stages of drug development will increase the ability to understand a medicinal product's risk-benefit profile across the patient population that is likely to use the product in clinical practice. Furthermore, this ground-breaking approach to AE detection not only provides sponsors with better-quality safety data for their products but also reduces the resource burden on investigator and study teams. With the data taken directly from the source, trial costs are reduced, minimal data validation is required, and near real-time reporting enables safety concerns and signals to be detected more quickly than in a traditional RCT.
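A minimal sketch of the rule-based detection step described above is shown below: each EHR record is scanned against protocol-defined criteria and matching events are queued for clinical review. The field names, thresholds and example records are assumptions for illustration, not the actual system's data model.

```python
# Illustrative sketch only: a rule-based scan of hypothetical EHR records
# against protocol-defined safety criteria, flagging events for clinical review.
# Field names, thresholds and codes are assumptions, not the actual system.

SAFETY_RULES = {
    "hospital admission":  lambda r: r.get("admission") is True,
    "death":               lambda r: r.get("deceased") is True,
    "neutropenia":         lambda r: r.get("neutrophils_10e9_l", 10) < 1.0,
    "acute renal failure": lambda r: r.get("creatinine_umol_l", 0) > 354,
}

def detect_safety_alerts(ehr_records):
    """Return (patient_id, rule_name) pairs that need investigator review."""
    alerts = []
    for record in ehr_records:
        for name, rule in SAFETY_RULES.items():
            if rule(record):
                alerts.append((record["patient_id"], name))
    return alerts

records = [
    {"patient_id": "P001", "neutrophils_10e9_l": 0.7},
    {"patient_id": "P002", "admission": True, "creatinine_umol_l": 400},
]
print(detect_safety_alerts(records))
```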

Keywords: more comprehensive and accurate safety data, near real-time safety alerts, reduced resource burden, safer trials

Procedia PDF Downloads 50
10 Cellular Mechanisms Involved in the Radiosensitization of Breast- and Lung Cancer Cells by Agents Targeting Microtubule Dynamics

Authors: Elsie M. Nolte, Annie M. Joubert, Roy Lakier, Maryke Etsebeth, Jolene M. Helena, Marcel Verwey, Laurence Lafanechere, Anne E. Theron

Abstract:

Treatment regimens for breast- and lung cancers may include both radiation- and chemotherapy. Ideally, a pharmaceutical agent which selectively sensitizes cancer cells to gamma (γ)-radiation would allow administration of lower doses of each modality, yielding synergistic anti-cancer benefits and lower metastasis occurrence, in addition to decreasing the side-effect profiles. A range of 2-methoxyestradiol (2-ME) analogues, namely 2-ethyl-3-O-sulphamoyl-estra-1,3,5 (10) 15-tetraene-3-ol-17one (ESE-15-one), 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10),15-tetraen-17-ol (ESE-15-ol) and 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10)16-tetraene (ESE-16) were in silico-designed by our laboratory, with the aim of improving the parent compound’s bioavailability in vivo. The main effect of these compounds is the disruption of microtubule dynamics with a resultant mitotic accumulation and induction of programmed cell death in various cancer cell lines. This in vitro study aimed to determine the cellular responses involved in the radiation sensitization effects of these analogues at low doses in breast- and lung cancer cell lines. The oestrogen receptor positive MCF-7-, oestrogen receptor negative MDA-MB-231- and triple negative BT-20 breast cancer cell lines as well as the A549 lung cancer cell line were used. The minimal compound- and radiation doses able to induce apoptosis were determined using annexin-V and cell cycle progression markers. These doses (cell line dependent) were used to pre-sensitize the cancer cells 24 hours prior to 6 gray (Gy) radiation. Experiments were conducted on samples exposed to the individual- as well as the combination treatment conditions in order to determine whether the combination treatment yielded an additive cell death response. Morphological studies included light-, fluorescence- and transmission electron microscopy. Apoptosis induction was determined by flow cytometry employing annexin V, cell cycle analysis, B-cell lymphoma 2 (Bcl-2) signalling, as well as reactive oxygen species (ROS) production. Clonogenic studies were performed by allowing colony formation for 10 days post radiation. Deoxyribonucleic acid (DNA) damage was quantified via γ-H2AX foci and micronuclei quantification. Amplification of the p53 signalling pathway was determined by western blot. Results indicated that exposing breast- and lung cancer cells to nanomolar concentrations of these analogues 24 hours prior to γ-radiation induced more cell death than the compound- and radiation treatments alone. Hypercondensed chromatin, decreased cell density, a damaged cytoskeleton and an increase in apoptotic body formation were observed in cells exposed to the combination treatment condition. An increased number of cells present in the sub-G1 phase as well as increased annexin-V staining, elevation of ROS formation and decreased Bcl-2 signalling confirmed the additive effect of the combination treatment. In addition, colony formation decreased significantly. p53 signalling pathways were significantly amplified in cells exposed to the analogues 24 hours prior to radiation, as was the amount of DNA damage. In conclusion, our results indicated that pre-treatment of breast- and lung cancer cells with low doses of 2-ME analogues sensitized breast- and lung cancer cells to γ-radiation and induced apoptosis more so than the individual treatments alone. Future studies will focus on the effect of the combination treatment on non-malignant cellular counterparts.

Keywords: cancer, microtubule dynamics, radiation therapy, radiosensitization

Procedia PDF Downloads 179