Search results for: instant transfers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 275

35 Bioaccessible Phenolics, Phenolic Bioaccessibility and Antioxidant Activity of Pumpkin Flour

Authors: Emine Aydin, Duygu Gocmen

Abstract:

Pumpkin flour (PF) has a long shelf life and can be used as a nutritive, functional (antioxidant properties, phenolic contents, etc.) and coloring agent in many food items, especially in bakery products, sausages, instant noodles, pasta and flour mixes. Pre-treatment before drying is one of the most important factors affecting the quality of a final powdered product. Pre-treatment, such as soaking in a bisulfite solution, ensures that the total carotenoids in raw materials rich in carotenoids, especially pumpkins, are retained in the dried product. This is due to the beneficial effect of antioxidant additives in protecting carotenoids in dehydrated plant foods. The oxygen present in the medium is removed by the SO₂ radical, and thus the carotene degradation caused by molecular oxygen is inhibited by the presence of SO₂. In this study, pumpkin flours (PFs) were produced by two different applications (with or without metabisulfite pre-treatment) and then dried in a freeze dryer. The phenolic contents and antioxidant activities of the pumpkin flours were determined. In addition, the bioaccessible phenolic compounds of PF were investigated using in vitro methods. Research in recent years has established that not all nutrients ingested with foodstuffs are bioavailable; bioavailability varies with the physical properties and chemical composition of foods and with individual digestive capacity. Therefore, bioaccessible phenolics and phenolic bioaccessibility were also determined in this study. The phenolics of the samples with metabisulfite application were higher than those of the samples without metabisulfite pre-treatment; soaking in metabisulfite solution might have a protective effect on phenolic compounds. Phenolic bioaccessibility of the pumpkin flours was investigated in order to assess pumpkin flour as a source of accessible phenolics. The higher bioaccessible phenolics (384.19 mg GAE 100 g⁻¹ DW) and phenolic bioaccessibility values (33.65 mL 100 mL⁻¹) were observed in the pumpkin flour with metabisulfite pre-treatment; metabisulfite application caused an increase in the bioaccessible phenolics of pumpkin flour. According to all assay (ABTS, CUPRAC, DPPH, and FRAP) results, both free and bound phenolics of the pumpkin flour with metabisulfite (MS) pre-treatment had higher antioxidant activity than those of the sample without MS pre-treatment, possibly due to the higher phenolic contents of the MS-treated samples. As a result, metabisulfite application increased the phenolic contents, bioaccessible phenolics, phenolic bioaccessibility and antioxidant activities of pumpkin flour. Pumpkin flour can thus be used as an alternative functional and nutritional ingredient in bakery products, dairy products (yoghurt, ice-cream), soups, sauces, infant formulae, confectionery, etc.

Keywords: pumpkin flour, bioaccessible phenolics, phenolic bioaccessibility, antioxidant activity

Procedia PDF Downloads 320
34 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers

Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya

Abstract:

In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos which differ only in pregnancy outcome, i.e., embryos from a single clinic which are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see if the rich spatiotemporal information embedded in this data would allow the prediction of the pregnancy outcome regardless of such critical parameters. Methodology—We did a retrospective analysis of time-lapse data from our IVF clinic, which utilizes the Embryoscope 100% of the time for embryo culture to the blastocyst stage, with known clinical outcomes, including live birth vs. nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined to make a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major Findings—The performance of the model was evaluated using 100 random subsampling cross-validations (80% train / 20% test). The prediction accuracy, averaged across 100 permutations, exceeded 80%. We also did a random grouping analysis, in which labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy. Conclusion—The high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern which is associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data which contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample size, with complementary analyses on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
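
A minimal sketch of this pipeline is shown below, assuming placeholder data: per-embryo time-lapse stacks are combined into a 4th-order tensor, a CP decomposition (via tensorly) supplies per-embryo features, and a classifier is scored over 100 random 80/20 splits. The abstract does not name the decomposition variant or the classifier, so parafac and logistic regression here are illustrative stand-ins.

```python
# Hypothetical sketch of the pipeline above: embryos x time x height x width
# tensor -> CP decomposition for features -> classifier with repeated 80/20 splits.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((200, 50, 32, 32))   # placeholder time-lapse stacks (not real data)
y = np.repeat([0, 1], 100)          # 0 = nonpregnant, 1 = live birth

# CP decomposition of the 4th-order tensor; the embryo-mode factor matrix
# serves as a low-dimensional feature vector per embryo.
weights, factors = parafac(tl.tensor(X), rank=8)
features = tl.to_numpy(factors[0])            # shape (200, 8)

accs = []
for seed in range(100):  # 100 random subsampling cross-validation runs
    Xtr, Xte, ytr, yte = train_test_split(features, y, test_size=0.2,
                                          stratify=y, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    accs.append(accuracy_score(yte, clf.predict(Xte)))
print(f"mean accuracy over 100 splits: {np.mean(accs):.3f}")
```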

Keywords: IVF, embryo, machine learning, time-lapse imaging data

Procedia PDF Downloads 87
33 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and the material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the effect of the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will also influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process in which the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually defined as discrete systems, and the total removed material is calculated by summing the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough that the behaviour can be similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply controlling the dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
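
The linear baseline the abstract starts from (dwell time proportional to the required depth at each pixel) can be written as a linear system, etched depth = footprint matrix × dwell times, and inverted by least squares. The sketch below illustrates this under assumed values: a 1-D raster path and a Gaussian beam footprint, neither taken from the paper.

```python
# Minimal sketch (assumed shapes/values) of the linear inverse model: etched
# depth is the beam footprint applied over dwell times, so dwell times follow
# from a linear least-squares solve. The adjoint method in the paper replaces
# this when the linear (shallow-etch) assumption breaks down.
import numpy as np

n = 100                                   # pixels along a 1-D raster path
x = np.linspace(-1.0, 1.0, n)
target_depth = 0.5 * np.exp(-x**2 / 0.1)  # prescribed freeform profile (toy)

# Linear process model: A[i, j] = material removed at pixel i per unit dwell
# time at pixel j, here an assumed Gaussian beam energy footprint.
sigma = 0.08
A = np.exp(-((x[:, None] - x[None, :])**2) / (2 * sigma**2))

dwell, *_ = np.linalg.lstsq(A, target_depth, rcond=None)
dwell = np.clip(dwell, 0.0, None)         # dwell times cannot be negative
print("max dwell:", dwell.max())
```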

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 271
32 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites

Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan

Abstract:

All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution and causes a change in momentum, which can manifest in spinning and non-spinning satellites in different manners. This problem can cause orbital decay in satellites which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating of the satellite, and the way in which it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel: when a panel is opened, that particular side acts as a radiator for dissipating heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside, radiator fins inside). Each insulated side panel can be opened and closed using actuators depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees. For example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that particular panel. Another example is when one of the corners of the CubeSat faces the Sun, or more than one side has a considerable amount of sunlight incident on it; the code will then determine the optimum opening angle for each panel and adjust accordingly. Another means of cooling is passive cooling. It is the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design; it also has advantages in terms of reliability and cost. One passive approach is to make the whole chassis act as a heat sink. For this, we can make the entire chassis out of heat pipes and connect the heat source to the chassis with a thermal strap that transfers the heat to it.
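
The panel-control logic described above can be sketched as follows; the face set, threshold, and fully-open angle are illustrative assumptions rather than flight code, and the sun vector is assumed to be already resolved in the body frame from the attitude sensors.

```python
# Hypothetical sketch of the panel-control logic: from a body-frame sun
# vector, estimate how directly each face is illuminated and choose which
# hinged panels to open and by how many degrees (values are illustrative).
import numpy as np

FACE_NORMALS = {            # CubeSat body axes
    "+X": np.array([1, 0, 0]), "-X": np.array([-1, 0, 0]),
    "+Y": np.array([0, 1, 0]), "-Y": np.array([0, -1, 0]),
}

def panel_commands(sun_vec, open_threshold=0.2):
    """Return an opening angle (deg) per panel from the body-frame sun vector."""
    sun = sun_vec / np.linalg.norm(sun_vec)
    cmds = {}
    for face, normal in FACE_NORMALS.items():
        illum = float(np.dot(normal, sun))       # cosine of incidence angle
        # Open a sunlit panel in proportion to illumination (180 deg = cells
        # fully facing the Sun); keep shaded faces closed so the aerogel
        # layer insulates them.
        cmds[face] = 180.0 * illum if illum > open_threshold else 0.0
    return cmds

print(panel_commands(np.array([0.7, 0.7, 0.1])))  # Sun near the +X/+Y corner
```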

Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite

Procedia PDF Downloads 86
31 Chatbots and the Future of Globalization: Implications for Businesses and Consumers

Authors: Shoury Gupta

Abstract:

Chatbots are a rapidly growing technological trend that has revolutionized the way businesses interact with their customers. With the advancements in artificial intelligence, chatbots can now mimic human-like conversations and provide instant and efficient responses to customer inquiries. In this research paper, we aim to explore the implications of chatbots on the future of globalization for both businesses and consumers. The paper begins by providing an overview of the current state of chatbots in the global market and their growth potential in the future. The focus is on how chatbots have become a valuable tool for businesses looking to expand their global reach, especially in areas with high population density and language barriers. With chatbots, businesses can engage with customers in different languages and provide 24/7 customer service support, creating a more accessible and convenient customer experience. The paper then examines the impact of chatbots on cross-cultural communication and how they can help bridge communication gaps between businesses and consumers from different cultural backgrounds. Chatbots can potentially facilitate cross-cultural communication by offering real-time translations, voice recognition, and other innovative features that can help users communicate effectively across different languages and cultures. By providing more accessible and inclusive communication channels, chatbots can help businesses reach new markets and expand their customer base, making them more competitive in the global market. However, the paper also acknowledges that there are potential drawbacks associated with chatbots. For instance, chatbots may not be able to address complex customer inquiries that require human input. Additionally, chatbots may perpetuate biases if they are programmed with certain stereotypes or assumptions about different cultures. These drawbacks may have significant implications for businesses and consumers alike. To explore the implications of chatbots on the future of globalization in greater detail, the paper provides a thorough review of existing literature and case studies. The review covers topics such as the benefits of chatbots for businesses and consumers, the potential drawbacks of chatbots, and how businesses can mitigate any risks associated with chatbot use. The paper also discusses the ethical considerations associated with chatbot use, such as privacy concerns and the need to ensure that chatbots do not discriminate against certain groups of people. The ethical implications of chatbots are particularly important given the potential for chatbots to be used in sensitive areas such as healthcare and financial services. Overall, this research paper provides a comprehensive analysis of chatbots and their implications for the future of globalization. By exploring both the potential benefits and drawbacks of chatbot use, the paper aims to provide insights into how businesses and consumers can leverage this technology to achieve greater global reach and improve cross-cultural communication. Ultimately, the paper concludes that chatbots have the potential to be a powerful tool for businesses looking to expand their global footprint and improve their customer experience, but that care must be taken to mitigate any risks associated with their use.

Keywords: chatbots, conversational AI, globalization, businesses

Procedia PDF Downloads 83
30 Personalized Climate Change Advertising: The Role of Augmented Reality (A.R.) Technology in Encouraging Users for Climate Change Action

Authors: Mokhlisur Rahman

Abstract:

The growing consensus among scientists and world leaders indicates that immediate action should be taken regarding the climate change phenomenon. However, climate change is no longer only a global issue but a personal one; thus, individual participation is necessary to address such a significant issue. Studies show that individuals who perceive climate change as a personal issue are more likely to act on it. This abstract presents augmented reality (A.R.) technology in Facebook video advertising. The idea involves creating a video advertisement that enables users to interact with the video by navigating its features and experiencing the result in a unique and engaging way. The advertisement uses A.R. to let people make changes in real-life scenarios with simple clicks on the video and hear an instant rewarding fact about their choices. The video shows three options: room, lawn, and driveway. Users select one option and interact with it while holding the camera toward their personal spaces. Suppose users select the first option, room, and hold their camera toward spots such as windows, the balcony, corners, and even walls; in that case, the A.R. offers users different plants appropriate for those unoccupied spaces in the room. Users can change the plant options and see which space in their house deserves a plant that makes it more natural. When a user adds a natural element to the video, the video content explains beneficial information about how the user is contributing to a more livable world and why it is necessary. With the help of A.R., if users select the second option, lawn, and hold their camera toward their lawn, the options are various small trees to make the lawn more environmentally friendly and decorative; the video plays a beneficial explanation here too. Suppose users select the third option, driveway, and hold their camera toward their driveway; in that case, the A.R. video offers unique recycle bin designs using A.I. measurement of spaces, and the video plays audio information on the anthropogenic contribution to greenhouse gas emissions. IoT embeds tracking code in the video ad on Facebook, which stores the exact number of views in the cloud for data analysis. An online survey at the end collects short qualitative answers. This study helps understand the number of users involved and willing to change their behavior, and it makes advertising in social media personalized. Considering the current state of climate change, the urgency for action is increasing. This ad increases the chance of making direct connections with individuals and gives them a sense of personal responsibility to act on climate change.

Keywords: motivations, climate, IoT, personalized advertising, action

Procedia PDF Downloads 66
29 Transient Performance Evaluation and Control Measures for Oum Azza Pumping Station Case Study

Authors: Itissam Abuiziah

Abstract:

This work presents a case study of water-hammer analysis and control for the Oum Azza pumping station project, in the coastal area from Rabat to Casablanca, fed from the Sidi Mohamed Ben Abdellah (SMBA) dam. This is a typical pumping system with a long penstock and is currently at the design and execution stages. Since there is no ideal location for the construction of protection devices, the protection devices were provisionally designed to protect the whole conveying pipeline. The simulation results for the transient conditions caused by a sudden pump stoppage, without any protection devices, show that negative pressure occurs from approximately station 1300 m to station 5725 m, near the arrival reservoir; protection devices are therefore needed to protect the conveying pipeline. To achieve the goal of the transient flow analysis, which is to protect the conveying pipeline system, four scenarios were investigated in this case study with two types of protection devices (a pressure relief valve and a desurging tank with automatic air control). The four scenarios are: a pressure relief valve alone; a pressure relief valve with one desurging tank with automatic air control; a pressure relief valve with two desurging tanks; and a pressure relief valve with three desurging tanks. The simulation results for the first scenario show that the overpressure corresponding to an instant pump stoppage is reduced from 263 m to 240 m, but the minimum hydraulic grade line, for the length from approximately station 1300 m to station 5725 m, is still below the pipeline profile, which means the pipe must be equipped with additional protective devices to smooth depressions. The simulation results for the second scenario show that the minimum and maximum pressure envelopes decrease, especially in the depression phase, but do not effectively protect the conduit in this case study; the minimum pressure increased from -77.7 m in the previous scenario to -65.9 m. The pipeline therefore still requires additional protective devices, and another desurging tank with automatic air control is installed at station 2575.84 m. The simulation results for the third scenario show that the minimum and maximum pressure envelopes decrease further but still do not effectively protect the conduit, since the depression still exists and varies from -0.6 m to -12 m. Another desurging tank with automatic air control is therefore installed at station 5670.32 m. Examining the envelope curves of the minimum pressures for the fourth scenario, we noticed that the piezometric pressure along the pipe remains positive over the entire length of the pipe. We can therefore conclude that this scenario provides effective protection for the pipeline.
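
As a first-cut sanity check on surge magnitudes of this order (not the paper's full transient simulation), the Joukowsky relation Δh = a·Δv/g bounds the instantaneous head change at a sudden pump stop; the wave speed and velocity below are assumed values, not Oum Azza design data.

```python
# Joukowsky first-cut estimate: head change delta_h = a * delta_v / g bounds
# the surge the protection devices must absorb. Values are illustrative.
g = 9.81        # gravitational acceleration, m/s^2
a = 1000.0      # pressure-wave speed in the penstock, m/s (assumed)
delta_v = 2.0   # flow velocity lost at pump trip, m/s (assumed)

delta_h = a * delta_v / g
print(f"Joukowsky head change: +/-{delta_h:.0f} m")   # ~204 m for these values
```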

Keywords: analysis methods, protection devices, transient flow, water hammer

Procedia PDF Downloads 174
28 The Effect of Social Media Influencer on Boycott Participation through Attitude toward the Offending Country in a Situational Animosity Context

Authors: Hsing-Hua Stella Chang, Mong-Ching Lin, Cher-Min Fong

Abstract:

Using surrogate boycotts as a coercive tactic to force the offending party into changing its approach has become increasingly significant over the last several decades and is expected to increase in the future. Research shows that surrogate boycotts are often triggered by controversial international events, with particular foreign countries serving as the offending party in the international marketplace. In other words, multinational corporations are likely to become surrogate boycott targets in overseas markets because of the animosity between their home and host countries. Focusing on a surrogate boycott triggered by severe situational animosity, this research aims to examine how social media influencers (SMIs), serving as electronic key opinion leaders (EKOLs) in an international crisis, facilitate and organize a boycott and persuade consumers to participate in it. This research suggests that SMIs can be a particularly important information source in a surrogate boycott sparked by situational animosity: under such a context, SMIs become a critical information source for individuals to enhance and update their understanding of the event because, unlike traditional media, social media serve as a platform for instant, 24-hour, non-stop information access and dissemination. The Xinjiang cotton event, viewed as an ongoing inter-country conflict reflecting a crisis that provokes animosity against the West, was adopted as the research context. Through online panel services, both studies recruited Mainland Chinese nationals as survey respondents. The findings show that: 1. Social media influencer messages are positively related to a negative attitude toward the offending country. 2. Attitude toward the offending country is positively related to boycott participation. To address the unexplored question of the effect of social media influencers on consumer participation in boycotts, this research presents a finer-grained examination of boycott motivation, with a special focus on a situational animosity context. This research is split into two interrelated parts. In the first part, it shows that attitudes toward the offending country can be socially constructed by the influence of social media influencers in a situational animosity context; the study results show that consumers perceive different strengths of social pressure related to various levels of influencer messages and thus exhibit different levels of attitude toward the offending country. In the second part, it further investigates the effect of attitude toward the offending country on boycott participation; the findings show that such attitudes exacerbate the effect of social media influencer messages on boycott participation in a situation of animosity.

Keywords: animosity, social media marketing, boycott, attitude toward the offending country

Procedia PDF Downloads 93
27 MXene Mediated Layered 2D-3D-2D g-C₃N₄@WO₃@Ti₃C₂ Multijunctional Heterostructure with Enhanced Photoelectrochemical and Photocatalytic Properties

Authors: Lekgowa Collen Makola, Cecil Naphtaly Moro Ouma, Sharon Moeno, Langelihle Dlamini

Abstract:

In recent years, advancement in the field of nanotechnology has evolved new strategies to address energy and environmental issues. Among the developing technologies, visible-light-driven photocatalysis is regarded as a sustainable approach to energy production and environmental detoxification, where transition metal oxides (TMOs) and metal-free carbon-based semiconductors such as graphitic carbon nitride (CN) have shown notable potential. Herein, a g-C₃N₄@WO₃@Ti₃C₂Tₓ three-component multijunction photocatalyst was fabricated via facile ultrasonic-assisted self-assembly, followed by calcination to facilitate extensive integration of the materials. A series of different Ti₃C₂ wt% loadings in the g-C₃N₄@WO₃@Ti₃C₂Tₓ were prepared and denoted 1-CWT, 3-CWT, 5-CWT, and 7-CWT, corresponding to 1, 3, 5, and 7 wt%, respectively. Systematic characterization using spectroscopic and microscopic techniques was employed to validate the successful preparation of the photocatalysts. Enhanced optoelectronic and photoelectrochemical properties were observed for the g-C₃N₄@WO₃@Ti₃C₂ heterostructure with respect to the individual materials. Photoluminescence spectra and Nyquist plots show restrained recombination rates and improved photocarrier conductivities, respectively, credited to the synergistic coupling effect and the presence of highly conductive Ti₃C₂ MXene. The strong interfacial contact surfaces formed in the composite were confirmed using XPS. Multiple charge transfer mechanisms were proposed for the g-C₃N₄@WO₃@Ti₃C₂, which couples a Z-scheme and a Schottky junction mediated by the Ti₃C₂ MXene. Bode phase plots show improved charge carrier lifetimes upon the formation of the multijunction photocatalyst. Moreover, the transient photocurrent density of 7-CWT is 40 and 7 times higher than that of g-C₃N₄ and WO₃, respectively. Unlike in the traditional Z-scheme, the ternary heterostructure possesses interfaces through the metallic 2D Ti₃C₂ MXene, which provide charge transfer channels for efficient photocarrier transfer, with a carrier concentration (N_D) of 17.49×10²¹ cm⁻³ and 4.86% photo-to-chemical conversion efficiency. The as-prepared ternary g-C₃N₄@WO₃@Ti₃C₂Tₓ exhibited excellent photoelectrochemical properties, with preserved redox band potentials that facilitate efficient photo-oxidation and -reduction reactions. The fabricated multijunction photocatalyst shows potential for use in an extensive range of photocatalytic processes, viz., production of valuable hydrocarbons from CO₂, production of H₂, and degradation of a plethora of pollutants from wastewater.
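
The abstract does not state how the donor density N_D was extracted; in photoelectrochemical work it is commonly obtained from the slope of a Mott–Schottky plot of the space-charge capacitance, so the standard relation is reproduced below for reference (an assumption about method, with C the space-charge capacitance per unit area, V_fb the flat-band potential, and ε the relative permittivity):

```latex
\frac{1}{C^{2}} = \frac{2}{e\,\varepsilon\,\varepsilon_{0}\,N_{D}}
  \left(V - V_{fb} - \frac{k_{B}T}{e}\right),
\qquad
N_{D} = \frac{2}{e\,\varepsilon\,\varepsilon_{0}}
  \left[\frac{\mathrm{d}(1/C^{2})}{\mathrm{d}V}\right]^{-1}
```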

Keywords: photocatalysis, Z-scheme, multijunction heterostructure, Ti₃C₂ MXene, g-C₃N₄

Procedia PDF Downloads 109
26 AI-Based Information System for Hygiene and Safety Management of Shared Kitchens

Authors: Jongtae Rhee, Sangkwon Han, Seungbin Ji, Junhyeong Park, Byeonghun Kim, Taekyung Kim, Byeonghyeon Jeon, Jiwoo Yang

Abstract:

The shared kitchen is a concept that transfers the value of the sharing economy to the kitchen. It is a type of kitchen equipped with cooking facilities that allows multiple companies or chefs to share time and space and use it jointly. Shared kitchens provide economic benefits and convenience, such as reduced investment costs and rent, but also increase safety management risks, such as cross-contamination of food ingredients. Therefore, to manage the safety of food ingredients and finished products in a shared kitchen where several entities jointly use the kitchen and handle various types of food ingredients, it is critical to manage the following: the freshness of food ingredients, user hygiene and safety, and cross-contamination of cooking equipment and facilities. In this study, we propose a machine learning-based system for hygiene safety and cross-contamination management, which are highly difficult to manage. User clothing management and user access management, which are most relevant to the hygiene and safety of shared kitchens, are solved through machine learning-based methodologies, and cutting board usage management, which is most relevant to cross-contamination management, is implemented as an integrated safety management system based on artificial intelligence. First, to prevent cross-contamination of food ingredients, we use images collected through a real-time camera to determine whether the food ingredients match a given cutting board, based on a real-time object detection model, YOLO v7. To manage the hygiene of user clothing, we use a camera-based facial recognition model to recognize the user and a real-time object detection model to determine whether a sanitary hat and mask are worn. In addition, to manage access for users qualified to enter the shared kitchen, we utilize a machine learning-based signature recognition module: by comparing the pairwise distance between the contract signature and the signature given at the time of entrance to the shared kitchen, access permission is determined through a pre-trained signature verification model. These machine learning-based safety management tasks are integrated into a single information system, and each result is managed in an integrated database. Through this, users are warned of safety dangers through the tablet PC installed in the shared kitchen, and managers can track the causes of sanitary and safety accidents. As a result of the system integration analysis, real-time safety management services can be continuously provided by artificial intelligence, and machine learning-based methodologies are used for integrated safety management of shared kitchens that allow dynamic contracts among various users. By solving this problem, we were able to secure the feasibility and safety of the shared kitchen business.
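
The signature-based access check can be sketched as below; embed_signature stands in for the pre-trained verification model, and the threshold value is an illustrative assumption, not the authors' configuration.

```python
# Illustrative sketch (not the authors' code) of the signature-based access
# check: embed the contract signature and the entrance signature, then grant
# access if their pairwise distance falls under a threshold.
import numpy as np

def embed_signature(image: np.ndarray) -> np.ndarray:
    """Placeholder for the pre-trained signature-embedding network."""
    return image.astype(float).ravel() / 255.0

def access_granted(contract_img, entrance_img, threshold=0.5) -> bool:
    a, b = embed_signature(contract_img), embed_signature(entrance_img)
    distance = np.linalg.norm(a - b)       # pairwise (Euclidean) distance
    return distance < threshold

rng = np.random.default_rng(1)
sig = rng.integers(0, 256, (64, 64))       # stand-in signature image
print(access_granted(sig, sig))            # same signature -> True
```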

Keywords: artificial intelligence, food safety, information system, safety management, shared kitchen

Procedia PDF Downloads 55
25 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction

Authors: Bruce Wrightsman

Abstract:

Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: a building envelope (skin) separate from the structure. From a material performance position, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, but its enormous effectiveness within wood-framed construction has seldom led to serious questioning of, and challenges to, what it means to build. There are several downsides to this method that are less widely discussed. The first, and perhaps biggest, is waste. Second, its reliance on wood assemblies forming walls, floors and roofs, conventionally nailed together through simple plate surfaces, is structurally inefficient and requires additional material (plates, blocking, nailers, etc.) for stability that only adds to the material waste. In contrast, when we look back at the history of wood construction in the airplane and boat manufacturing industries, we see a significant transformation in the relationship of structure with skin. Boat construction transformed from the indigenous wood practices of birch bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the evolution of the merged assemblies that drive the industry today. In 1911, Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane called the Cigare. The wing and tail assemblies consisted of thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material. The monocoque, which translates to 'mono or single shell,' is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture; however, this uniting of divergent systems has been demonstrated to be lighter, utilizing less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing.

Keywords: wood building systems, material histories, monocoque systems, construction waste

Procedia PDF Downloads 71
24 Assessing Mycotoxin Exposure from Processed Cereal-Based Foods for Children

Authors: Soraia V. M. de Sá, Miguel A. Faria, José O. Fernandes, Sara C. Cunha

Abstract:

Cereals play a vital role in fulfilling the nutritional needs of children, supplying essential nutrients crucial for their growth and development. However, concerns arise from children's heightened vulnerability, owing to their unique physiology, specific dietary requirements, and relatively higher intake in relation to their body weight. This vulnerability exposes them to harmful food contaminants, particularly mycotoxins, which are prevalent in cereals. Because of the thermal stability of mycotoxins, conventional industrial food processing often falls short of eliminating them. Children, especially those aged 4 months to 12 years, frequently encounter mycotoxins through the consumption of specialized food products, such as instant foods, breakfast cereals, bars, cookie snacks, fruit puree, and various dairy items. Close monitoring of this demographic group's exposure to mycotoxins is essential, as toxin ingestion may weaken children's immune systems, reduce their resistance to infectious diseases, and potentially lead to cognitive impairments. The severe toxicity of mycotoxins, some of which are classified as carcinogenic, has spurred the establishment and ongoing revision of legislative limits on mycotoxin levels in food and feed globally. While EU Commission Regulation 1881/2006 addresses well-known mycotoxins in processed cereal-based foods and infant foods, the absence of regulations specifically addressing emerging mycotoxins underscores a glaring gap in the regulatory framework, necessitating immediate attention. Emerging mycotoxins have come under mounting scrutiny in recent years due to their pervasive presence in various foodstuffs, notably cereals and cereal-based products. Alarmingly, exposure to multiple mycotoxins is hypothesized to exhibit higher toxicity than their isolated effects, raising particular concerns for products primarily aimed at children. This study scrutinizes the presence of 22 mycotoxins from a diverse range of chemical classes in 148 processed cereal-based foods, including 39 breakfast cereals, 25 infant formulas, 27 snacks, 25 cereal bars, and 32 cookies commercially available in Portugal. The analytical approach employed a modified QuEChERS procedure followed by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) analysis. Given the paucity of information on children's risk from multiple mycotoxins in cereals and cereal-based products, this study pioneers the evaluation of this critical aspect for children in Portugal. Overall, aflatoxin B1 (AFB1) and aflatoxin G2 (AFG2) emerged as the most prevalent regulated mycotoxins, while enniatin B (ENNB) and sterigmatocystin (STG) were the most frequently detected emerging mycotoxins.

Keywords: cereal-based products, children's nutrition, food safety, UPLC-MS/MS analysis

Procedia PDF Downloads 56
23 Modeling and Design of a Solar Thermal Open Volumetric Air Receiver

Authors: Piyush Sharma, Laltu Chandra, P. S. Ghoshdastidar, Rajiv Shekhar

Abstract:

Metals processing operations such as melting and heat treatment of metals are energy-intensive, requiring temperatures greater than 500°C. The desired temperature in these industrial furnaces is attained by circulating electrically-heated air, and in most of these furnaces, electricity produced from captive coal-based thermal power plants is used. Solar thermal energy could be a viable heat source in these furnaces. A retrofitted solar convective furnace (SCF) concept, which uses solar-thermally generated hot air, has been proposed. Critical to the success of an SCF is the design of an open volumetric air receiver (OVAR), which can heat air in excess of 800°C. The OVAR is placed on top of a tower and receives concentrated solar radiation from a heliostat field. Absorbers, the mixer assembly, and the return air flow chamber (RAFC) are the major components of an OVAR. The absorber is a porous structure that transfers heat from concentrated solar radiation to ambient air, referred to as primary air. The mixer ensures uniform air temperature at the receiver exit. Flow of the relatively cooler return air in the RAFC ensures that the absorbers do not fail by overheating. In an earlier publication, the detailed design basis, fabrication, and characterization of a 2 kWth OVAR-based laboratory solar air tower simulator were presented. Development of an experimentally validated, CFD-based mathematical model, which can ultimately be used for the design and scale-up of an OVAR, has been the major objective of this investigation. In contrast to the published literature, where flow and heat transfer have been modeled primarily in a single absorber module, the present study has modeled the entire receiver assembly, including the RAFC. Flow and heat transfer calculations have been carried out in ANSYS using the local thermal non-equilibrium (LTNE) model. The complex return air flow pattern in the RAFC requires complicated meshes and is computationally and time intensive. Hence a simple, realistic 1-D mathematical model, which circumvents the need for carrying out detailed flow and heat transfer calculations, has also been proposed. Several important results have emerged from this investigation. Circumferential electrical heating of absorbers can mimic frontal heating by concentrated solar radiation reasonably well in testing and characterizing the performance of an OVAR; circumferential heating, therefore, obviates the need for expensive high-solar-concentration simulators. Predictions suggest that the ratio of power on aperture (POA) to mass flow rate of air (MFR) is a normalizing parameter for characterizing the thermal performance of an OVAR: increasing POA/MFR increases the maximum temperature of the air but decreases the thermal efficiency of the OVAR. Predictions of the 1-D mathematical model are within 5% of the ANSYS predictions, and computation time is reduced from ~5 hours to a few seconds.
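
A receiver-level energy balance shows why POA/MFR acts as a normalizing parameter. The relation below is a minimal sketch with assumed notation (not reproduced from the paper): \dot{m} is the air mass flow rate (MFR), P_{OA} the power on aperture, \eta_{th} the receiver thermal efficiency, and c_p the specific heat of air.

```latex
\dot{m}\,c_{p}\left(T_{\text{out}} - T_{\text{in}}\right) = \eta_{th}\,P_{OA}
\quad\Longrightarrow\quad
T_{\text{out}} - T_{\text{in}} = \frac{\eta_{th}}{c_{p}}\cdot\frac{P_{OA}}{\dot{m}}
```

For a fixed efficiency, the air temperature rise therefore depends on POA and MFR only through their ratio, consistent with the reported trend that raising POA/MFR raises the maximum air temperature.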

Keywords: absorbers, mixer assembly, open volumetric air receiver, return air flow chamber, solar thermal energy

Procedia PDF Downloads 186
22 Midterm Clinical and Functional Outcomes After Treatment with Ponseti Method for Idiopathic Clubfeet: A Prospective Cohort Study

Authors: Neeraj Vij, Amber Brennan, Jenni Winters, Hadi Salehi, Hamy Temkit, Emily Andrisevic, Mohan V. Belthur

Abstract:

Idiopathic clubfoot is a common lower extremity deformity with an incidence of 1:500. The Ponseti method is well known as the gold standard of treatment; however, there is limited functional data demonstrating correction of the clubfoot after treatment with the Ponseti method. The purpose of this study was to evaluate the clinical and functional outcomes after the Ponseti method using the Clubfoot Disease-Specific Instrument (CDS) and pedobarography. This IRB-approved prospective study included patients aged 3-18 who were treated for idiopathic clubfoot with the Ponseti method between January 2008 and December 2018. Age-matched controls were identified through siblings of clubfoot patients and other community members. Treatment details were collected through a chart review of the included patients. Laboratory assessment included a physical exam, gait analysis, and pedobarography. The Pediatric Outcomes Data Collection Instrument (PODCI) and the Clubfoot Disease-Specific Instrument were also obtained for clubfoot (CF) patients. The Wilcoxon rank-sum test was used to study differences between the CF patients and the typically developing (TD) patients; statistical significance was set at p < 0.05. A total of 37 patients were enrolled in our study: 21 previously treated for CF and 16 TD. 94% of the CF patients had bilateral involvement. The age at the start of treatment was 29 days, the average total number of casts was seven to eight, and the average total number of casts after Achilles tenotomy was one. The recurrence rate was 25%, tenotomy was required in 94% of patients, and more than one tenotomy was required in 25% of patients. There were no significant differences in step length, step width, stride length, force-time integral, maximum peak pressure, foot progression angles, stance phase time, single-limb support time, double-limb support time, or gait cycle time between children treated with the Ponseti method and typically developing children. The average post-treatment Pirani and Dimeglio scores were 5.50±0.58 and 15.29±1.58, respectively. The average post-treatment PODCI subscores were: Upper Extremity: 90.28; Transfers: 94.6; Sports: 86.81; Pain: 86.20; Happiness: 89.52; Global: 88.6. The average post-treatment Clubfoot Disease-Specific Instrument subscores were: Satisfaction: 73.93; Function: 80.32; Overall: 78.41. The Ponseti method has a very high success rate and remains the gold standard in the treatment of idiopathic clubfoot. Timely management leads to good outcomes and a low need for repeated Achilles tenotomy. Children treated with the Ponseti method demonstrate good functional outcomes as measured through pedobarography, and pedobarography may have clinical utility in studying congenital foot deformities. Objective measures of hours of brace wear could represent an improvement in clubfoot care.

Keywords: functional outcomes, pediatric deformity, patient-reported outcomes, talipes equinovarus

Procedia PDF Downloads 67
21 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology for least latency in tackling the voltage control problem in a distributed manner. This paper proposes a multi-agent-based distributed voltage control. In this method, a flat architecture of agents is used, and the agents involved in the whole control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to nominal. The total loss cost is the sum of the network losses cost, the DG curtailment costs, and the voltage damage cost (which is based on a penalty function implementation). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and losses cost against a varying voltage limit band; the method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, each downstream from its controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At first, a token is generated by the OLTCA at each time step, and it transfers from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is considered as a last resort. For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration, the loss reduction is not very significant. Another observation is that the new, stricter limits calculated by cost optimization move towards the statutory limits of ±10% of nominal with increasing DG penetration: for 25, 45 and 65% penetration, the calculated limits are ±5%, ±6.25% and ±8.75%, respectively. The observed results show that the novel voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly, but with higher losses; in contrast, case 2 reduces the network losses over time through the proposed iterative loss cost optimization by the OLTCA.
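
The total loss cost the OLTCA minimizes can be sketched as below: losses cost plus DG curtailment cost plus a penalty-function voltage damage term, evaluated over progressively stricter limit bands (here the ±5%, ±6.25% and ±8.75% bands reported above). The prices and the quadratic penalty form are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of the cost structure described above: network losses cost
# + DG curtailment cost + penalty-function voltage damage cost. In the real
# method, losses and curtailment also vary with the band; fixed here for brevity.
def total_loss_cost(loss_kwh, curtailed_kwh, voltages,
                    v_low=0.95, v_high=1.05,
                    loss_price=0.10, curtail_price=0.08, penalty_weight=100.0):
    cost = loss_kwh * loss_price + curtailed_kwh * curtail_price
    for v in voltages:                      # per-node voltage damage cost
        violation = max(v_low - v, 0.0) + max(v - v_high, 0.0)
        cost += penalty_weight * violation ** 2
    return cost

# Evaluate progressively stricter bands and keep the one with lowest total cost.
bands = [(0.95, 1.05), (0.9375, 1.0625), (0.9125, 1.0875)]  # +/-5, 6.25, 8.75%
v_profile = [1.0, 0.97, 1.06, 0.94]        # toy per-node voltages (p.u.)
best = min(bands, key=lambda b: total_loss_cost(10.0, 2.0, v_profile,
                                                v_low=b[0], v_high=b[1]))
print("band with minimum total loss cost:", best)
```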

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 299
20 Delegation or Assignment: Registered Nurses’ Ambiguity in Interpreting Their Scope of Practice in Long Term Care Settings

Authors: D. Mulligan, D. Casey

Abstract:

Introductory Statement: Delegation is when a registered nurse (RN) transfers a task or activity that is normally within their scope of practice to another person (the delegatee). RN delegation is common practice with unregistered staff, e.g., student nurses and health care assistants (HCAs). As the role of the HCA becomes increasingly embedded as a direct care and support role, especially in long-term residential care for older adults, there is RN uncertainty about their role as a delegator. Assignment is when the transferred task is within the role specification of the delegatee. RNs in long-term care (LTC) for older people increasingly work in teams where fewer RNs and more HCAs provide direct care to the residents. The RN is responsible and accountable for their decision to delegate and assign tasks to HCAs. In an interpretive multiple case study exploring how delegation of tasks by RNs to HCAs occurred in long-term care settings in Ireland, the importance of the RN understanding their scope of practice emerged. Methodology: Focus group interviews and individual interviews were undertaken as part of a multiple case study. Both cases, anonymized as Case A and Case B, were within the public health service in Ireland. The case study sites were long-term care settings for older adults located in different social care divisions and in different geographical areas. Four focus group interviews with staff nurses and three individual interviews with CNMs were undertaken. The interactive data analysis approach was the analytical framework used, with within-case and cross-case analysis. The theoretical lens of organizational role theory, applying the role episode model (REM), was used to understand, interpret, and explain the findings. Study Findings: RNs and CNMs understood the role of the nurse regulator and the scope of practice. RNs understood that the RN was accountable for the care and support provided to residents. However, RNs and CNM2s could not describe delegation in the context of their scope of practice. In both cases, the RNs did not have a standardized process for assessing HCA competence to undertake nursing tasks or interventions, and RNs did not routinely supervise HCAs. Tasks were assigned, not delegated. There were differences between the cases in relation to understanding which nursing tasks required delegation: HCAs in Case A undertook clinical vital sign assessments and documentation, whereas HCAs in Case B did not routinely undertake these activities. Delegation and assignment were influenced by organizational factors, e.g., the model of care, the absence of delegation policies, inadequate RN education on delegation, and a lack of RN and HCA role clarity. Concluding Statement: Nurse staffing levels and skill mix in long-term care settings continue to change, with more HCAs providing more direct care and support. With decreasing RN staffing levels, RNs will be required to delegate and assign more direct care to HCAs. There is a requirement to distinguish between RN assignment and delegation at the policy, regulation, and organizational levels.

Keywords: assignment, delegation, registered nurse, scope of practice

Procedia PDF Downloads 146
19 The Impact of the COVID-19 Pandemic on the Armenian Higher Education System: Challenges and Perspectives

Authors: Armine Vahanyan

Abstract:

Humanity is still coping with the COVID-19 pandemic. Healthcare providers, economists, psychologists, and other specialists speak about the impact of the virus on different spheres of our lives, and among such discussions, the impact of the pandemic on global education is of utmost importance. Ideally, providing quality education services should be crucial, and the ways education programs are being adapted will determine the success or failure of the service providers. The paper aims to summarize research on the current situation of higher education in Armenia. The research includes data from official reports, surveys among education leaders, faculty, and students, as well as personal observations and considerations. Through descriptive analysis, the findings of the research are presented from various aspects. Interim results of the research unveiled two major issues in the sector of higher education in Armenia. On the one hand, the wholesale compulsory digitization of instruction, assessment, and grading has exposed serious gaps related to the lack of technical competencies; there is an urgent need for professional development programs that will address most of the concerns arising from the shift to the online instruction mode. On the other hand, online teaching and learning require revision and adaptation of the existing curricula. Given that the content of certain programs may not be compromised, the teaching methods, assignments, and evaluation require profound transformation while remaining in line with course learning outcomes and student learning outcomes. The given paper focuses on the ways the mentioned issues are being addressed in Armenia. The extent of commitment to change and adaptability to the new situation varies between government-funded and private universities. In particular, the paper compares and contrasts activities and measures taken at the Armenian State Pedagogical University and the American University of Armenia. The Pedagogical University focused on the use of Google Classroom as the only means for teaching and learning and adopted a compulsory synchronous instruction mode. The American University, on the contrary, kept practicing academic freedom, enabling both synchronous and asynchronous instruction modes and ensuring alignment of course learning outcomes and student learning outcomes. The State University utilized assignments and assessments that would work for the on-campus instruction mode, while the American University employed a variety of assignments applicable to the online teaching mode; the latter has suggested the utilization of multiple apps, internet sources, and online library access for better online instruction. Discussions with faculty through online forums and/or professional development workshops also facilitate the restructuring and adaptation of courses. Finally, the paper synthesizes the results of the undertaken research and outlines the e-learning perspectives and opportunities prompted by this devastating healthcare issue.

Keywords: assessment, compulsory digitization of education services, online teaching, instruction mode, program restructuring

Procedia PDF Downloads 115
18 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data we used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (retail, groceries, online services, etc.), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods have varying risk levels associated with fraud. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
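
A minimal sketch of the isolation forest side of this setup is shown below, assuming synthetic stand-in data in place of the anonymized transactions; the real pipeline also includes the autoencoder and the engineered categorical features.

```python
# Minimal sketch: isolation forest flagging of transaction anomalies, using
# synthetic (amount, hour-of-day) data as a stand-in for the real features.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 12], scale=[20, 4], size=(5000, 2))   # typical amounts, daytime
fraud = rng.normal(loc=[900, 3], scale=[300, 2], size=(50, 2))     # large, late-night
X = StandardScaler().fit_transform(np.vstack([normal, fraud]))

# contamination encodes the expected fraud share; fit_predict returns -1 = anomaly
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = model.fit_predict(X)
print(f"flagged {np.sum(labels == -1)} of {len(X)} transactions as anomalous")
```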

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

Procedia PDF Downloads 43
17 Golden Dawn's Rhetoric on Social Networks: Populism, Xenophobia and Antisemitism

Authors: Georgios Samaras

Abstract:

New media such as Facebook, YouTube and Twitter introduced the world to a new era of instant communication, an era in which online interactions could replace many offline actions. Technology can create a mediated environment in which participants can communicate (one-to-one, one-to-many, and many-to-many) both synchronously and asynchronously and participate in reciprocal message exchanges. Currently, social networks attract academic attention similar to that the internet received after its mainstream adoption into public life. Websites and platforms are seen as being at the forefront of new political change, and while a significant backdrop of methodologies has previously been employed to research the effects of social networks, new approaches are being developed to adapt to their growth and the invention of new platforms. Golden Dawn was the first openly neo-Nazi party after World War II to win seats in the parliament of a European country. Its racist rhetoric and violent tactics on social networks were rewarded by its supporters, who saw in Golden Dawn's leaders a 'new dawn' in Greek politics. Mainstream media banned the party's leaders and members indefinitely after Ilias Kasidiaris attacked Liana Kanelli, a member of the Greek Communist Party, on live television. This media ban was seen as a treasonous move by a significant percentage of voters, who believed that the system was desperately trying to censor Golden Dawn to favor mainstream parties. The shocking attack on live television received international coverage, and while European countries were condemning this newly emerged neo-Nazi rhetoric, almost 7 percent of the Greek population rewarded Golden Dawn with 18 seats in the Greek parliament. Many seem to think that Golden Dawn mobilised its voters online and that this approach played a significant role in spreading its message and appealing to wider audiences. No strict online censorship existed back in 2012, and although Golden Dawn openly used neo-Nazi symbolism, it was allowed to use social networks without serious restrictions until 2017. This paper used qualitative methods to investigate Golden Dawn's rise on social networks from 2012 to 2019. The focus of the content analysis was set on three social networking platforms: Facebook, Twitter and YouTube, while the existence of Golden Dawn's website, which was used as a news-sharing hub, was also taken into account. The content analysis included text and visual analyses that sampled content from the party's social networking pages to interpret its political messaging through an ideological lens focused on extreme-right populism. The absence of hate speech regulations on social network platforms in 2012 allowed the free expression of the heavily ultranationalist and populist views employed by Golden Dawn in the Greek political scene. On YouTube, Facebook and Twitter, the influence of its rhetoric was particularly strong. Official channels and MPs' profiles were investigated to explore the messaging in depth and understand its ideological elements.

Keywords: populism, far-right, social media, Greece, golden dawn

Procedia PDF Downloads 137
16 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification

Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas

Abstract:

Biosensors play a crucial role in the detection of molecules nowadays due to their advantages of user-friendliness, high selectivity, real-time analysis and in-situ applications. Among them, Lateral Flow Immunoassays (LFIAs) stand out among technologies for point-of-care bioassays with outstanding characteristics such as affordability, portability and low cost. They have been widely used for the detection of a vast range of biomarkers, which include not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantifying capability to the method, based on the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors for the detection of magnetic labels. Bringing together the required characteristics mentioned before, our research group has developed a biosensor to detect biomolecules in which superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. SPNPs are detected through their interaction with a high-frequency current flowing on a printed micro track. By means of the instant and proportional variation of the impedance of this track provoked by the presence of the SPNPs, a quantitative and rapid measurement of the number of particles can be obtained. This mode of detection requires no external magnetic field application, which reduces the device complexity. On the other hand, the major limitations of LFIAs are that they are only qualitative or semi-quantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the necessity of always-constant ambient conditions to obtain reproducible results, the detection of nanoparticles exclusively on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs. The approach followed was to coat the SPNPs, via chemical bonds, with a specific monoclonal antibody that targets the protein under consideration. Then, a sandwich-type immunoassay was prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPNP-labeled proteins are immobilized at the test line, which provides a magnetic signal as described before. Preliminary results using this practical combination for the detection and quantification of the Prostate-Specific Antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, an LOD of 0.25 ng/mL was calculated with a confidence factor of 3, according to the IUPAC Gold Book definition. The versatility of the platform has also been proved with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
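
A worked sketch of the k = 3 limit-of-detection estimate mentioned above, following the usual IUPAC form LOD = k·σ_blank / m, where σ_blank is the standard deviation of blank measurements and m the slope of the linear calibration. The blank and calibration numbers are invented for illustration, not the study's data.

```python
import numpy as np

# Invented blank replicates (signal units) and calibration points.
blank_signals = np.array([0.101, 0.098, 0.105, 0.097, 0.102])
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])         # ng/mL
signal = np.array([0.35, 0.61, 1.18, 2.29, 4.52])  # sensor response

slope, intercept = np.polyfit(conc, signal, 1)     # linear calibration curve
sigma_blank = blank_signals.std(ddof=1)            # blank standard deviation

k = 3  # IUPAC-recommended numerical factor
lod = k * sigma_blank / slope
print(f"LOD = {lod:.3f} ng/mL")
```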

Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles

Procedia PDF Downloads 223
15 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging

Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie

Abstract:

To increase the very short shelf life of fresh fruits and vegetables, Modified Atmosphere Packaging (MAP) allows an optimal atmosphere composition to be maintained around the product and thus prevents its decay. This technology relies on the modification of the internal packaging atmosphere due to the equilibrium between production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life increase has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefit, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for a better design and dimensioning of the food packaging system. But up to now, existing models have not permitted the estimation of food quality or of the shelf life gain reached by using MAP. However, shelf life prediction is an indispensable prerequisite for quantifying the effect of MAP on food losses reduction. The objective of this work is to propose an innovative approach to predict the shelf life of a MAP food product and then to link it to a reduction of food losses and wastes. For this purpose, a 'Virtual MAP modeling tool' was developed by coupling a new predictive deterioration model (based on visual surface prediction of deterioration encompassing colour, texture and spoilage development) with models from the literature for respiration and permeation. A major input of this modelling tool is the maximal percentage of deterioration (MAD), which was assessed from dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 ml CO₂/h/kg produced at 20°C), making them a good representative of challenging post-harvest storage. A value of 13% was determined as the consumers' limit of acceptability, permitting the definition of the product's shelf life. The 'Virtual MAP modeling tool' was validated in isothermal conditions (5, 10 and 20°C) and in dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as a function of time, confirming the goodness of the model fit. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP condition). A shelf life gain of more than 1 day could be obtained for optimized post-harvest conditions, as numerically investigated. Such a shelf life gain permitted the anticipation of a significant reduction of food losses at the distribution and consumer steps. This reduction of food losses as a function of shelf life gain has been quantified using a dedicated mathematical equation developed for this purpose.
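
A minimal sketch of the kind of mass-transfer balance such a tool couples: Michaelis-Menten respiration of the product consuming headspace O₂, against Fickian permeation of O₂ through the film. All parameter values below are illustrative assumptions for a small strawberry pack, not the paper's calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters at constant temperature.
V = 0.001        # headspace volume (m^3)
A = 0.05         # film area (m^2)
L = 30e-6        # film thickness (m)
P_O2 = 1.0e-15   # O2 permeability (mol.m / (m^2.s.Pa))
W = 0.25         # product mass (kg)
Vm = 2.0e-7      # maximum respiration rate (mol O2 / (kg.s))
Km = 2.0e3       # Michaelis constant (Pa)
p_out = 21.2e3   # atmospheric O2 partial pressure (Pa)

def d_pO2(t, p):
    respiration = Vm * p[0] / (Km + p[0]) * W    # O2 consumed by the product (mol/s)
    permeation = P_O2 * A / L * (p_out - p[0])   # O2 entering through the film (mol/s)
    R, T = 8.314, 293.15
    return [(permeation - respiration) * R * T / V]  # ideal-gas conversion to Pa/s

sol = solve_ivp(d_pO2, (0, 3 * 86400), [p_out])
print(f"headspace O2 after 3 days: {sol.y[0, -1] / 1e3:.1f} kPa")
```

The headspace O₂ relaxes toward the level where permeation balances respiration, which is the modified atmosphere that slows product decay; a deterioration model can then be driven by this predicted gas history.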

Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction

Procedia PDF Downloads 174
14 Behavioral Analysis of Anomalies in Intertemporal Choices Through the Concept of Impatience and Customized Strategies for Four Behavioral Investor Profiles With an Application of the Analytic Hierarchy Process: A Case Study

Authors: Roberta Martino, Viviana Ventre

Abstract:

The Discounted Utility Model is the essential reference for calculating the utility of intertemporal prospects. According to this model, the value assigned to an outcome decreases as the distance grows between the moment at which the choice is made and the instant at which the outcome is perceived. This diminution determines the intertemporal preferences of the individual, the psychological significance of which is encapsulated in the discount rate. The classic model provides a discount rate of linear or exponential nature, necessary for temporally consistent preferences. Empirical evidence, however, has proven that individuals apply discount rates of a hyperbolic nature, generating the phenomenon of intertemporal inconsistency. What this means is that individuals have difficulty managing their money and future. Behavioral finance, which analyzes the investor's attitude through cognitive psychology, has made it possible to understand that beyond individual financial competence, there are factors that condition choices because they alter the decision-making process: behavioral biases. Since such cognitive biases are inevitable, to improve the quality of choices, research has focused on a personalized approach to strategies that combines behavioral finance with personality theory. From these considerations emerges the need for a procedure to construct personalized strategies that consider the personal characteristics of the client, such as age or gender, and his or her personality. The work is developed in three parts. The first part discusses and investigates the weight of the degree of impatience and of the decrease in impatience in the anomalies of the discounted utility model. Specifically, the degree of decrease in impatience quantifies the impact that emotional factors generated by haste and financial market agitation have on decision making. The second part considers the relationship between decision making and personality theory. Specifically, four behavioral categories associated with four categories of behavioral investors are considered. This association allows us to interpret intertemporal choice as a combination of bias and temperament. The third part of the paper presents a method for constructing personalized strategies using the Analytic Hierarchy Process. Briefly: the first level of the analytic hierarchy process considers the goal of the strategic plan; the second level considers the four temperaments; the third level compares the temperaments with the anomalies of the discounted utility model; and the fourth level contains the different possible alternatives to be selected. The weights of the hierarchy between level 2 and level 3 are constructed considering the degrees of decrease in impatience derived for each temperament in an experimental phase. The results obtained confirm the relationship between temperaments and anomalies through the degree of decrease in impatience and highlight the actual impact of emotions on decision making. Moreover, the work proposes an original and useful way to improve financial advice. The inclusion of additional levels in the Analytic Hierarchy Process can further improve strategic personalization.
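
A minimal sketch of the core AHP step used between hierarchy levels: priority weights are taken from the principal eigenvector of a pairwise-comparison matrix, and a consistency ratio checks the coherence of the judgments. The example matrix comparing four temperaments is invented for illustration, not the paper's experimentally derived weights.

```python
import numpy as np

# Invented pairwise comparison of four behavioral temperaments (Saaty's 1-9 scale).
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1  ],
])

# Priority vector: normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()

# Consistency check: CR = CI / RI should stay below 0.10.
n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)            # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
cr = ci / ri
print("weights:", np.round(weights, 3), " CR:", round(cr, 3))
```

Repeating this computation for each comparison matrix in the hierarchy and chaining the weights level by level yields the final ranking of the alternatives.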

Keywords: analytic hierarchy process, behavioral finance anomalies, intertemporal choice, personalized strategies

Procedia PDF Downloads 84
13 Productivity of Grain Sorghum-Cowpea Intercropping System: Climate-Smart Approach

Authors: Mogale T. E., Ayisi K. K., Munjonji L., Kifle Y. G.

Abstract:

Grain sorghum and cowpea are important staple crops in many areas of South Africa, particularly the Limpopo Province. The two crops are produced under a wide range of unsustainable conventional methods, which reduces productivity in the long run. Climate-smart traditional methods such as intercropping can be adopted to ensure sustainable production of these two important crops in the province. A no-tillage field experiment was laid out in a randomised complete block design (RCBD) with four replications over two seasons in two distinct agro-ecological zones of the province, Syferkuil and Ofcolaco, to assess the productivity of sorghum-cowpea intercropped under two cowpea densities. An LCi Ultra compact photosynthesis machine was used to collect photosynthetic rate data biweekly between 11h00 and 13h00 until physiological maturity. Biomass and grain yield of the component crops in binary and sole cultures were determined at harvest maturity from middle rows of a 2.7 m² area. The biomass was oven-dried in the laboratory at 65°C till constant weight. To obtain grain yield, harvested sorghum heads and cowpea pods were threshed, cleaned, and weighed. The harvest index (HI) and land equivalent ratio (LER) of the two crops were calculated to assess intercrop productivity relative to sole cultures. Data were analysed using the statistical analysis software system (SAS) version 9.4, followed by mean separation using the least significant difference method. The photosynthetic rate of the sorghum-cowpea intercrop was influenced by cowpea density and sorghum cultivar. The photosynthetic rate under low density was higher compared to high density, but this was dependent on the growing conditions. Dry biomass accumulation, grain yield, and harvest index differed among the sorghum cultivars and cowpea in both binary and sole cultures at the two test locations during the 2018/19 and 2020/21 growing seasons. Cowpea grain and dry biomass yields were in excess of 60% greater under high density compared to low density in both binary and sole cultures. The results revealed that grain yield accumulation of sorghum cultivars was influenced by the density of the companion cowpea crop as well as the production season. For instance, at Syferkuil, Enforcer and Ns5511 accumulated high yield under low density, whereas at Ofcolaco, the higher yield was recorded under high density. Generally, under low cowpea density, cultivar Enforcer produced a relatively higher grain yield, whereas under higher density, Titan yield was superior. The partial and total LER varied with growing season and the treatments studied. The total LERs exceeded 1.0 at the two locations across seasons, ranging from 1.3 to 1.8. From the results, it can be concluded that resources were used more efficiently in the sorghum-cowpea intercrop at both Syferkuil and Ofcolaco. Furthermore, the intercropping system improved the photosynthetic rate, grain yield, and dry matter accumulation of sorghum and cowpea depending on growing conditions and cowpea density. Hence, the sorghum-cowpea intercropping system can be adopted as a climate-smart practice for sustainable production in the Limpopo Province.
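
A small sketch of the land equivalent ratio computation behind the 1.3-1.8 totals reported above: each partial LER is the intercrop yield of a crop divided by its sole-culture yield, and the total LER is their sum. The yield figures are invented for illustration, not the trial data.

```python
def land_equivalent_ratio(inter_a, sole_a, inter_b, sole_b):
    """Total LER of a two-crop intercrop; a value above 1 means the
    mixture uses land more efficiently than the two sole crops."""
    partial_a = inter_a / sole_a
    partial_b = inter_b / sole_b
    return partial_a, partial_b, partial_a + partial_b

# Invented yields (t/ha): sorghum and cowpea, intercrop vs sole culture.
p_sorghum, p_cowpea, total = land_equivalent_ratio(2.4, 3.0, 0.9, 1.5)
print(f"partial LERs: sorghum {p_sorghum:.2f}, cowpea {p_cowpea:.2f}; total {total:.2f}")
```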

Keywords: cowpea, climate-smart, grain sorghum, intercropping

Procedia PDF Downloads 204
12 Investigating the Thermal Comfort Properties of Mohair Fabrics

Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman

Abstract:

Mohair, obtained from the Angora goat, is a luxury fiber recognized as one of the best-quality natural fibers. Expansion of the use of mohair into technical and functional textile products necessitates a better understanding of how the use of mohair in fabrics will impact its thermo-physiological comfort-related properties. Despite its popularity, very little information is available on the quantification of the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fibrous matter composition and fabric structural parameters on conductive and convective heat transfer to attain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m²) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics occurs in environmental conditions where there is wind flow or where the subject is moving (e.g., running or walking). The thermal comfort properties of mohair fibers were objectively evaluated firstly in comparison with other textile fibers and secondly in a variety of fabric structures. Two sample sets were developed for this purpose, with fibre content, yarn structure and fabric design as the main variables. SEM and microscopic images were obtained to closely examine the physical structures of the fibers and fabrics. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument, and clothing insulation (clo) was calculated from the above. The thermal properties of fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2 instrument). The effects of the different variables on fabric thermal comfort properties were analyzed statistically using TIBCO Statistica software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. It was found that, regarding thermal resistance related to conductive heat flow, the effect of fiber type was not always statistically significant, probably as a result of the amount of trapped air within the fabric structure. The very low thermal conductivity of air, compared to that of the fibers, had a significant influence on the total conductivity and thermal resistance of the samples. This was confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to allow dry heat to move through the structure was again fabric thickness. However, it would be wrong to totally disregard the effect of fiber composition on the thermal resistance of textile fabrics. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others even though weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibers, which render them exceptionally effective at trapping air among fibers (in a yarn) as well as among yarns (inside a fabric structure). The thicker structures trap more air to provide higher thermal insulation, but also prevent the free flow of air that would allow thermal convection.
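
A short sketch of the conduction quantities discussed above: thermal resistance R = thickness / conductivity, converted to clothing insulation using the standard definition 1 clo = 0.155 m²·K/W. The fabric values are invented for illustration, not the measured Alambeta data.

```python
def thermal_resistance(thickness_m, conductivity_w_mk):
    """Conductive thermal resistance of a fabric layer (m^2.K/W)."""
    return thickness_m / conductivity_w_mk

def to_clo(resistance_m2kw):
    return resistance_m2kw / 0.155  # 1 clo = 0.155 m^2.K/W

# Invented example: a 1.8 mm mohair fabric with an effective conductivity of
# 0.045 W/(m.K), dominated by the air trapped in the structure.
r = thermal_resistance(1.8e-3, 0.045)
print(f"R = {r:.3f} m^2.K/W = {to_clo(r):.2f} clo")
```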

Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance

Procedia PDF Downloads 135
11 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling, yet reactive auto-scaling has been the subject of few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing theory parameters to relate the transitions between models. It associates MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, re-evaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the rapidness of the burst is high enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load at less business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
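
A skeletal sketch of the kind of reactive MAPE-K control loop being modelled: Monitor samples the incoming request rate, Analyze estimates the required capacity from per-instance throughput, Plan enforces a cooldown stored in the shared Knowledge, and Execute adjusts the instance count. The sampling period, cooldown, capacity, and load trace are illustrative assumptions, not the paper's methodology or parameter values.

```python
import math
from dataclasses import dataclass

SAMPLING_S = 5.0    # assumed sampling period (s)
COOLDOWN_S = 30.0   # assumed cooldown between scaling actions (s)
CAPACITY = 100.0    # assumed requests/s one instance can handle
HEADROOM = 1.2      # over-provision to keep saturation (and response time) low

@dataclass
class State:                          # the shared Knowledge of the loop
    instances: int = 2
    last_scale: float = -COOLDOWN_S

def mape_k_step(state, observed_rate, now):
    # Monitor: observed_rate is sampled from the metrics source.
    # Analyze: instances needed for the current load, with headroom.
    desired = max(1, math.ceil(observed_rate * HEADROOM / CAPACITY))
    # Plan: respect the cooldown period recorded in Knowledge.
    if now - state.last_scale < COOLDOWN_S or desired == state.instances:
        return
    # Execute: apply the scaling decision.
    print(f"t={now:>5.0f}s scaling {state.instances} -> {desired}")
    state.instances, state.last_scale = desired, now

state = State()
for step, rate in enumerate([150, 150, 800, 800, 300, 300, 300]):  # invented load trace
    mape_k_step(state, rate, now=step * SAMPLING_S)
```

Running the trace scales the service up at the burst and then holds the new size while the cooldown is active, which is exactly the interaction between sampling frequency and cooldown period that the model formalizes.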

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 86
10 Governance of Climate Adaptation Through Artificial Glacier Technology: Lessons Learnt from Leh (Ladakh, India) In North-West Himalaya

Authors: Ishita Singh

Abstract:

The social dimension of climate change is no longer peripheral to Science, Technology and Innovation (STI). Indeed, STI is being mobilized to address small farmers' vulnerability and adaptation to climate change. The experiences from the cold desert of Leh (Ladakh) in the North-West Himalaya illustrate the potential of STI to address the challenges of climate change and the needs of small farmers through the use of artificial glacier techniques. Small farmers have a unique water-harvesting technique to augment irrigation, called "artificial glaciers" - an intricate network of water channels and dams along the upper slope of a valley, located closer to villages and at lower altitudes than natural glaciers. These structures start to melt much earlier and supply supplementary irrigation to small farmers, improving their livelihoods. The issues of vulnerability, adaptive capacity and adaptation strategy therefore need to be analyzed in a local context, in the communities and regions where people live. Leh (Ladakh) in the North-West Himalaya provides a case study for exploring the ways in which adaptation to climate change is taking place at a community scale using artificial glacier technology. With the above backdrop, an attempt has been made to analyze rural poor households' vulnerability and adaptation practices to climate change using this technology, thereby drawing lessons on vulnerability-livelihood interactions in the cold desert of Leh (Ladakh) in the North-West Himalaya, India. The study is based on primary data and information collected from 675 households in 27 villages of Leh (Ladakh). It reveals that 61.18% of the population derives livelihoods from agriculture and allied activities. With the increased irrigation potential due to the use of artificial glaciers, food security has been assured for 77.56% of households, and health vulnerability has been reduced in 31% of households. Seasonal migration as a livelihood diversification mechanism has declined in nearly two-thirds of households, thereby improving livelihood strategies. The use of tactical adaptations by small farmers in response to persistent droughts, such as selling livestock, expanding agricultural lands, and the use of relief cash and food, has declined to 20.44%, 24.74% and 63% of households, respectively. However, these measures are unsustainable on a long-term basis. The role of policymakers and societal stakeholders becomes important in this context. To address livelihood challenges, the role of technology is critical in a multidisciplinary approach involving multilateral collaboration among different stakeholders. The presence of social entrepreneurs and new actors on the adaptation scene is necessary to bring forth adaptation measures. Better linkage between science and technology policies and other policies should be encouraged. Better health care, access to safe drinking water, better sanitary conditions, and improved standards of education and infrastructure are effective measures to enhance a community's adaptive capacity. However, social transfers for supporting climate adaptive capacity require significant amounts of additional investment. Developing institutional mechanisms for specific adaptation interventions can be one of the most effective ways of implementing a plan to enhance adaptation and build resilience.

Keywords: climate change, adaptation, livelihood, stakeholders

Procedia PDF Downloads 55
9 A Peg Board with Photo-Reflectors to Detect Peg Insertion and Pull-Out Moments

Authors: Hiroshi Kinoshita, Yasuto Nakanishi, Ryuhei Okuno, Toshio Higashi

Abstract:

Various kinds of pegboards have been developed and are used widely in rehabilitation research and clinics for the evaluation and training of patients' hand function. A common measure on these pegboards is the total time of performance execution, assessed with a tester's stopwatch. The introduction of electrical and automatic measurement technology to the apparatus, on the other hand, has been delayed. The present work introduces the development of a pegboard with electric sensors to detect the moments of each peg's insertion and removal. The work also gives fundamental data obtained from a group of healthy young individuals who performed peg transfer tasks using the pegboard developed. Through trials and errors in pilot tests, two 10-hole pegboard boxes, each hole fitted at the bottom with a small photo-reflector and a DC amplifier, were designed and built by the present authors. The amplified electric analogue signals from the 20 reflectors were automatically digitized at 500 Hz per channel and stored in a PC. The boxes were set on a test table at different distances (25, 50, 75, and 125 mm) in parallel to examine the effect of hole-to-hole distance. Fifty healthy young volunteers (25 of each gender) performed 80 successive fast peg transfers at each distance using their dominant and non-dominant hands. The data gathered showed clear-cut light interruption/continuation moments caused by the pegs, allowing the pull-out and insertion times of each peg to be determined accurately (no tester's error involved) and precisely (on the order of milliseconds). This further permitted computation of individual peg movement duration (PMD: from peg lift-off to insertion) apart from hand reaching duration (HRD: from peg insertion to lift-off). An accidental drop of a peg led to an exceptionally long (> mean + 3 SD) PMD, which was readily detected from an examination of the data distribution. The PMD data were commonly right-skewed, suggesting that the median can be a better estimate of individual PMD than the mean. Repeated-measures ANOVA using the median values revealed significant hole-to-hole distance and hand dominance effects, suggesting that these need to be fixed for an accurate evaluation of PMD. The gender effect was non-significant. Performance consistency was also evaluated by the use of quartile variation coefficient values, which revealed no gender, hole-to-hole distance, or hand dominance effects. The measurement reliability was further examined using intraclass correlation obtained from 14 subjects who performed the 25 and 125 mm hole distance tasks at two test sessions 7-10 days apart. Intraclass correlation values between the two tests showed fair reliability for PMD (0.65-0.75) and for HRD (0.77-0.94). We concluded that the sensor pegboard developed in the present study can provide accurate (excluding tester's errors) and precise (at a millisecond rate) time information on peg movement, separated from that for hand movement. It can also easily detect and automatically exclude erroneous execution data from a subject's standard data. These features should lead to a better evaluation of hand dexterity function compared to the widely used conventional pegboards.
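
A small sketch of how insertion and pull-out moments can be recovered from one digitized photo-reflector channel: threshold the signal, find the crossing indices, and convert sample indices to milliseconds at the 500 Hz rate described above. The threshold level, signal polarity, and synthetic trace are illustrative assumptions, not the authors' processing code.

```python
import numpy as np

FS = 500.0  # sampling rate (Hz)

def crossings(signal, threshold):
    """Return (insert_times_ms, pullout_times_ms) from threshold crossings.
    Assumes the reflector output is high while a peg occupies the hole."""
    occupied = signal > threshold
    edges = np.diff(occupied.astype(int))
    inserts = np.where(edges == 1)[0] + 1    # low -> high: peg inserted
    pullouts = np.where(edges == -1)[0] + 1  # high -> low: peg pulled out
    return inserts / FS * 1000, pullouts / FS * 1000

# Synthetic trace: hole empty, peg in for 0.8 s, empty again, peg reinserted.
t = np.arange(0, 3, 1 / FS)
signal = np.where(((t > 0.5) & (t < 1.3)) | (t > 2.1), 4.2, 0.3)
ins, outs = crossings(signal, threshold=2.0)
# Peg movement duration (PMD): from a pull-out to the next insertion.
pmd = ins[1] - outs[0]
print(f"insertions at {ins} ms, pull-outs at {outs} ms, PMD = {pmd:.0f} ms")
```

With 20 such channels, pairing each pull-out on one board with the next insertion on the other yields PMD, and the reverse pairing yields HRD, directly from the recorded crossing times.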

Keywords: hand, dexterity test, peg movement time, performance consistency

Procedia PDF Downloads 129
8 Complex Decision Rules in Quality Assurance Processes for Quick Service Restaurant Industry: Human Factors Determining Acceptability

Authors: Brandon Takahashi, Marielle Hanley, Gerry Hanley

Abstract:

The large-scale quick-service restaurant industry is a complex business to manage optimally. With over 40 suppliers providing different ingredients for food preparation and thousands of restaurants serving over 50 unique food offerings across a wide range of regions, the company must implement a quality assurance process. Businesses want to deliver, efficiently, reliably, and successfully, quality food at a low cost that the public wants to buy. They also want to make sure that their food offerings are never unsafe to eat or of poor quality. A good reputation (and profitable business) developed over the years can be gone in an instant if customers fall ill eating the company's food. Poor quality also results in food waste, and the cost of corrective actions is compounded by the reduction in revenue. Product compliance evaluation assesses whether the supplier's ingredients are within compliance with the specifications for several attributes (physical, chemical, organoleptic) that a company will test to ensure that quality, safe-to-eat food is given to the consumer and delivers the same eating experience in all parts of the country. The technical component of the evaluation includes the chemical and physical tests that produce numerical results relating to shelf life, food safety, and organoleptic qualities. The psychological component of the evaluation includes organoleptic assessment, i.e., evaluation acting on or involving the use of the sense organs. The rubric for product compliance evaluation has four levels: (1) Ideal: meeting or exceeding all technical (physical and chemical), organoleptic, and psychological specifications. (2) Deviation from ideal but no impact on quality: not meeting some technical and organoleptic/psychological specifications without impact on consumer quality, while meeting all food safety requirements. (3) Acceptable: not meeting some technical and organoleptic/psychological specifications, resulting in a reduction of consumer quality but not enough to lessen demand, while meeting all food safety requirements. (4) Unacceptable: not meeting food safety requirements, independent of meeting technical and organoleptic specifications; or meeting all food safety requirements but with product quality resulting in consumer rejection of the food offering. Sampling of products and consumer tastings within the distribution network is a second critical element of the quality assurance process and provides the data sources for the statistical analyses. Each finding is not independently assessed with the rubric; for example, the chemical data will be used to back up or support any inferences about the sensory profiles of the ingredients. Certain flavor profiles may not be as apparent when mixed with other ingredients, which leads to weighing specifications differentially in the acceptability decision. Quality assurance processes are essential to achieve that balance of quality and profitability by making sure the food is safe and tastes good while identifying and remediating product quality issues before they hit the stores. Comprehensive quality assurance procedures implement human factors methodologies, and this report provides recommendations for the systemic application of quality assurance processes for quick-service restaurant services. This case study reviews the complex decision rubric and evaluates processes to ensure the right balance of cost, quality, and safety is achieved.
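
A compact sketch encoding the four-level rubric described above as a decision function. The field names and boolean inputs are illustrative assumptions about how the evaluations might be represented, not the company's actual system.

```python
from enum import Enum

class Compliance(Enum):
    IDEAL = 1
    DEVIATION_NO_IMPACT = 2
    ACCEPTABLE = 3
    UNACCEPTABLE = 4

def evaluate(meets_food_safety, meets_all_specs, quality_reduced, consumer_rejects):
    """Four-level product compliance rubric (illustrative encoding)."""
    # Food safety and consumer rejection dominate every other consideration.
    if not meets_food_safety or consumer_rejects:
        return Compliance.UNACCEPTABLE
    if meets_all_specs:
        return Compliance.IDEAL
    if not quality_reduced:
        return Compliance.DEVIATION_NO_IMPACT
    return Compliance.ACCEPTABLE  # quality reduced, but demand and safety hold

# An ingredient missing a minor organoleptic spec with no quality impact:
print(evaluate(meets_food_safety=True, meets_all_specs=False,
               quality_reduced=False, consumer_rejects=False))
```

In practice the boolean inputs would themselves be derived from weighted technical and sensory findings rather than assessed independently, reflecting the differential weighing of specifications described above.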

Keywords: decision making, food safety, organoleptics, product compliance, quality assurance

Procedia PDF Downloads 180
7 Awareness Creation of Benefits of Antitrypsin-Free Nutraceutical Biopowder for Increasing Human Serum Albumin Synthesis as Possible Adjunct for Management of MDRTB or MDRTB-HIV Patients

Authors: Vincent Oghenekevbe Olughor, Olusoji Mayowa Ige

Abstract:

Except for preexisting liver disease and malnutrition, there are no predilections for low serum albumin (SA) levels in humans. At normal reference levels (4.0-6.0 g/dl), SA is a universal marker for mortality and morbidity risk assessments, where depletion by 1.0 g/dl increases mortality risk by 137% and morbidity by 89%. It has 40 known functions contributing significantly to the sustenance of human life. In most clinical settings worldwide, a depletion of SA to <2.2 g/dl leads to loss of the oncotic pressure of blood, causing clinical manifestations of bipedal oedema, during which patients remain conscious. SA also contributes significantly to buffering blood to a life-sustaining pH of 7.35-7.45. A drop in blood pH to <6.9 leads to instant coma and death, which can occur if SA continues to deplete after the manifestation of bipedal oedema. In an intervention study conducted in 2014, following the discovery that SA is depleted during malaria fever, a nutraceutical was formulated for use as a treatment adjunct to prevent SA depletion during malaria to <2.4 g/dl, and efficacy testing found it satisfactory. There are five known types of malaria caused by the apicomplexan parasites Plasmodium, the most lethal being that caused by Plasmodium falciparum, which causes malignant tertian malaria, in which the fever occurring every 48 hours coincides with the dumping of malaria toxins (hemozoin) into the blood, causing contamination: blood must remain sterile. Other apicomplexan parasites, Toxoplasma and Cryptosporidium, are opportunistic infections in HIV. Separate studies showed SA depletion in MDRTB (multidrug-resistant TB) and MDRTB-HIV patients by the same mechanism discovered with malaria, and such depletion will be further complicated whenever apicomplexan parasitic infections co-exist. Both apicomplexan parasites and the TB parasite belong to the obligate group of parasites, which replicate only inside their host, and most of them have the capacity to over-consume host nutrients during parasitaemia. In MDRTB patients, the body repeatedly attempts to prevent depletion of SA to critical levels in the presence of adequate nutrients, but it succeeds only for a while in MDRTB-HIV patients. These groups of patients will, therefore, benefit from the nutraceutical already tested in malaria patients. The nutraceutical bio-powder was formulated (to BP 1988 specification) from twelve nature-based food-grade nutrients containing all the dedicated nutrients for ensuring improved synthesis of albumin by the liver. The nutraceutical was administered daily for 38±2 days in 23 children, in a prospective phase-2 clinical trial, and its impact on body weight and core blood parameters was documented at the start and end of the efficacy testing period. Sixteen children who did not experience malaria-induced depletion of SA had a significant SA increase; seven children who experienced malaria-induced depletion of SA had an insignificant SA decrease. The packed cell volume percentage (PCV%), a measure of the oxygen-carrying capacity of blood and of the amount of nutrients the body can absorb, increased in both groups. The total serum proteins (SA + globulins) increased or decreased within the continuum of normal. In conclusion, MDRTB and MDRTB-HIV patients will benefit from a variant of this nutraceutical when used as a treatment adjunct.

Keywords: antitrypsin-free Nutraceutical, apicomplexan parasites, no predilections for low serum albumin, toxoplasmosis

Procedia PDF Downloads 281
6 Navigating Rapids And Collecting Medical Insights: A Data Collection Of Athletes Presenting To The Medical Team At The International Canoe Federation Canoe Slalom World Championships 2023

Authors: Grace Scaplehorn, Muhammad Adeel Akhtar, Jane Gibson

Abstract:

Background: Canoe Slalom entails the skilful navigation of a carbon composite canoe or kayak through a series of 18-25 hanging gates, strategically positioned along the course, either upstream or downstream, amidst currents of whitewater rapids in natural and man-made river settings. Athletes compete individually in timed trials, competing for the fastest course time, typically around 80 to 120 seconds. In the new discipline of Kayak Cross, descents of the course are initiated by groups of four athletes free-falling simultaneously from a starting platform situated 3 m above the river. Kayak Cross athletes, in contrast to Canoe Slalom, can make physical contact with suspended gates without incurring time penalties and are required to perform a kayak roll halfway down the course. The Canoe Slalom World Championships were held at Lee Valley Whitewater Centre, London, from 19th to 24th September 2023. The event comprised 299 international athletes competing for 10 World Championship titles in Canoe/Kayak Slalom events (Olympic debut Munich 1972) and the new Kayak Cross discipline (Olympic debut Paris 2024). The inaugural appearance of Kayak Cross at the World Championships occurred in 2017, in Pau, France. There is limited literature surrounding Kayak Cross and the incidence of athlete injuries compared to traditional Canoe Slalom, hence it was felt important to undertake this review to address the perception that the event is dangerous. Aim: The study aimed to quantify and collate data collected from athletes presenting to the event medical centre. Methods: Athletes' details were collected at initial assessments from the start of the practice period (16th-18th September) and throughout the event. Demographics such as age, sex and nationality were recorded along with presenting complaints, treatment, medication administered and outcome. Injuries were sub-classified into body regions. The data do not include athletes who sought medical attention from their own governing body's medical team. Results: During the 8-day period, there were 11 individual presentations to the medical centre, 3.7% of the athlete population (n=299). The mean age was 23.9 years (recorded for 7 athletes), and 6 of the 10 athletes with recorded sex were male. The most common presentation was minor injury (n=9), of which 6 were musculoskeletal and 3 comprised skin damage, followed by insect sting/allergy (n=1) and pain relief requests (n=1). Five presentations were event-related, all being musculoskeletal injuries: 2 shoulder/arm, 1 head/neck, 1 hand/wrist and 1 other (data not recorded). Of these injuries, the only intervention was 400 mg ibuprofen, given in the 2 shoulder/arm cases. Four of the 11 presentations were pre-existing injuries that had been exacerbated by the increased intensity of practice. Two patients were advised to return for review, with 100% compliance. There were no unplanned re-presentations and no emergency transfers to secondary care. The Kayak Cross and Canoe Slalom competitions each resulted in 1 new event-related athlete presentation. Conclusion: The event resulted in a negligible incidence of presentations at the medical centre for both Kayak Cross and Canoe Slalom. These data hold significance in informing risk assessments and medical protocols necessary for the organisation of canoe slalom events.

Keywords: canoe slalom, kayak cross, athlete injuries, event injuries

Procedia PDF Downloads 47