337 Catalytic Dehydrogenation of Formic Acid into H2/CO2 Gas: A Novel Approach
Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy
Abstract:
Finding a sustainable alternative energy to fossil fuel is an urgent need as various environmental challenges in the world arise. Therefore, formic acid (FA) decomposition has been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a new energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which qualifies it as a prime energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H2 production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in a poorly designed setup using simple glassware under magnetic stirring, thus demanding further energy investment to recover the used catalyst. This work suggests an approach that integrates the design of a novel catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of engrafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H2 gas from FA. The use of an ordinary magnet to collect the spent catalyst renders core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a novel approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H2 + CO2) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The novelty of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, and continuous gas measurement, along with collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided details of catalyst preparation and opened new avenues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups has led to appreciable improvements in terms of dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%) on gas yield was assessed by a Taguchi design-of-experiments-based model. Experimental results showed that operating at the lower temperature range (35-50°C) yielded more gas, while catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.
Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles
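As a rough illustration of how a water-displacement reading can be converted into gas yield and FA conversion, the sketch below applies the ideal gas law to a hypothetical displaced volume; the displaced volume, temperature, and FA charge are illustrative assumptions, not data from the study.

```python
# Minimal sketch: estimate moles of (H2 + CO2) and FA conversion from a
# water-displacement reading, assuming ideal gas behaviour and the
# stoichiometry HCOOH -> H2 + CO2 (2 mol gas per mol FA). All numbers are
# illustrative placeholders.
R = 8.314            # J/(mol*K)
P = 101_325          # Pa, ambient pressure assumed at the gas burette
T = 298.15           # K, gas assumed at room temperature

displaced_water_L = 1.2      # hypothetical displaced volume (litres)
fa_charged_mol = 0.05        # hypothetical amount of formic acid fed

gas_mol = P * (displaced_water_L / 1000) / (R * T)   # total H2 + CO2
fa_converted_mol = gas_mol / 2                        # 2 mol gas per mol FA
conversion = 100 * fa_converted_mol / fa_charged_mol

print(f"gas evolved: {gas_mol*1000:.1f} mmol, FA conversion: {conversion:.1f}%")
```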
Procedia PDF Downloads 50
336 Analysis of Superconducting and Optical Properties in Atomic Layer Deposition and Sputtered Thin Films for Next-Generation Single-Photon Detectors
Authors: Nidhi Choudhary, Silke A. Peeters, Ciaran T. Lennon, Dmytro Besprozvannyy, Harm C. M. Knoops, Robert H. Hadfield
Abstract:
Superconducting Nanowire Single Photon Detectors (SNSPDs) have become leading devices in quantum optics and photonics, known for their exceptional efficiency in detecting single photons from ultraviolet to mid-infrared wavelengths with minimal dark counts, low noise, and reduced timing jitter. Recent advancements in materials science have focused attention on refractory metal thin films such as NbN and NbTiN to enhance the optical properties and superconducting performance of SNSPDs, opening the way for next-generation detectors. These films have been deposited by several different techniques, such as atomic layer deposition (ALD), plasma pro-advanced plasma processing (ASP), and magnetron sputtering. The fabrication flexibility of these films enables precise control over morphology, crystallinity, stoichiometry, and optical properties, which is crucial for optimising SNSPD performance. Hence, it is imperative to study the optical and superconducting properties of these materials across a wide range of wavelengths. This study provides a comprehensive analysis of the optical and superconducting properties of some important materials in this category (NbN, NbTiN) prepared by different deposition methods. Using variable-angle spectroscopic ellipsometry (VASE), we measured the refractive index, extinction coefficient, and absorption coefficient across a wide wavelength range (200-1700 nm) to enhance light confinement for optical communication devices. The critical temperature and sheet resistance were measured using a four-probe method in a custom-built, cryogen-free cooling system with a Sumitomo RDK-101D cold head and CNA-11C compressor. Our results indicate that ALD-deposited NbN shows a higher refractive index and extinction coefficient in the near-infrared region (~1500 nm) than sputtered NbN of the same thickness. Further, the optical properties of plasma pro-ASP deposited NbTiN were analysed at different substrate bias voltages and different thicknesses. The substrate bias analysis indicates that the maximum values of the refractive index and extinction coefficient are observed for substrate biasing of 50-80 V across a bias range of 0-150 V. The optical properties of sputtered NbN films are also investigated in terms of different substrate temperatures during deposition (100°C-500°C). We find that the higher the substrate temperature during deposition, the higher the refractive index and extinction coefficient. Of all our superconducting thin films, ALD-deposited NbN possesses the highest critical temperature (~12 K), compared to sputtered (~8 K) and plasma pro-ASP (~5 K) films.
Keywords: optical communication, thin films, superconductivity, atomic layer deposition (ALD), niobium nitride (NbN), niobium titanium nitride (NbTiN), SNSPD, superconducting detector, photon-counting
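Since ellipsometry yields the refractive index n and extinction coefficient k, the absorption coefficient follows from the standard relation alpha = 4*pi*k/lambda. The sketch below applies that relation to a few hypothetical (wavelength, k) pairs; none of the values are measurements from this work.

```python
# Minimal sketch: derive the absorption coefficient from the extinction
# coefficient k reported by ellipsometry, using alpha = 4*pi*k/lambda.
# The (wavelength, k) pairs below are placeholders, not measured values.
import math

samples = {          # wavelength in nm -> extinction coefficient k (assumed)
    400: 2.1,
    1000: 2.8,
    1550: 3.2,
}

for wavelength_nm, k in samples.items():
    wavelength_cm = wavelength_nm * 1e-7
    alpha = 4 * math.pi * k / wavelength_cm      # absorption coefficient, 1/cm
    print(f"{wavelength_nm} nm: alpha = {alpha:.3e} cm^-1")
```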
Procedia PDF Downloads 28
335 Measuring Digital Literacy in the Chilean Workforce
Authors: Carolina Busco, Daniela Osses
Abstract:
The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. There are no methodological instruments available in Chile to measure the workforce’s digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. Dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To achieve the research objective of developing a digital literacy model of indicators and a research instrument for this purpose, along with an exploratory analysis of data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied with six experts, whose observations were incorporated into the final instrument. The survey contained 49 items that were divided into three sets of questions: i) sociodemographic data; ii) a Likert scale of four values ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to 12 items with the Likert scale. KMO showed a value of 0.626, indicating a medium level of correlation, whereas Bartlett’s test yielded a significance value of less than 0.05, and Cronbach’s Alpha was 0.618. Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). Quantitative results were discussed within six focus groups using heterogeneous selection criteria related to the most relevant variables identified in the statistical analysis: upper-class school students; middle-class university students; Ph.D. professors; low-income working women; elderly individuals; and a group of rural workers. The digital divide and its social and economic correlations are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately puts the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills is radically different when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
Keywords: digital literacy, digital society, workforce digitalization, digital skills
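A minimal sketch of the kind of exploratory factor analysis workflow described above (KMO, Bartlett's test, Cronbach's alpha, a four-factor varimax solution), using the factor_analyzer package. The DataFrame of 12 Likert items is a random placeholder standing in for the survey responses, so the printed statistics will not match the study's values.

```python
# Minimal sketch of the exploratory factor analysis workflow: KMO, Bartlett's
# sphericity test, Cronbach's alpha, and a 4-factor varimax solution.
# `likert` is a synthetic stand-in for the 12 Likert-scale items.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

likert = pd.DataFrame(np.random.randint(1, 5, size=(200, 12)),
                      columns=[f"item_{i}" for i in range(1, 13)])

chi2, p_value = calculate_bartlett_sphericity(likert)
_, kmo_model = calculate_kmo(likert)

def cronbach_alpha(df: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(axis=0, ddof=1).sum()
                          / df.sum(axis=1).var(ddof=1))

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(likert)
cumulative_variance = fa.get_factor_variance()[2][-1]

print(f"KMO={kmo_model:.3f}, Bartlett p={p_value:.4f}, "
      f"alpha={cronbach_alpha(likert):.3f}, cum. variance={cumulative_variance:.2%}")
```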
Procedia PDF Downloads 66
334 Self-Supervised Learning for Hate-Speech Identification
Authors: Shrabani Ghosh
Abstract:
Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious work, for which automatic methods based on machine learning are the only alternatives. Previous works have performed sentiment analysis over social media in different ways, such as in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised way has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers like BERT and RoBERTa can be further pre-trained on masked language modeling (MLM) tasks in an unsupervised manner and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. Certainly, in-domain distances are small, and between-domain distances are expected to be large. Findings from previous work show that a pre-trained masked language model (MLM) fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention as it is more applicable when labeled data are scarce. A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. This framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and also with optimized outcomes obtained from different optimization techniques.
Keywords: attention learning, language model, offensive language detection, self-supervised learning
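The abstract lists Maximum Mean Discrepancy (MMD) among the distance measures used to estimate domain similarity. The sketch below is one possible RBF-kernel MMD estimate between two sets of post embeddings; the embeddings are random placeholders rather than real model outputs, and the biased estimator (including diagonal kernel terms) is used for brevity.

```python
# Minimal sketch: RBF-kernel MMD^2 estimate between two embedding sets
# (e.g., Twitter vs. Gab post representations). Embeddings are synthetic.
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimator (keeps diagonal terms); fine for a quick comparison.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(100, 16))   # "Twitter" embeddings (fake)
target = rng.normal(0.3, 1.0, size=(100, 16))   # "Gab" embeddings (fake)

print(f"MMD^2(source, target) = {mmd2(source, target):.4f}")
print(f"MMD^2(source, source) = {mmd2(source, source):.4f}  # in-domain, smaller")
```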
Procedia PDF Downloads 103
333 Acoustic Energy Harvesting Using Polyvinylidene Fluoride (PVDF) and PVDF-ZnO Piezoelectric Polymer
Authors: S. M. Giripunje, Mohit Kumar
Abstract:
Acoustic energy that exists in our everyday life and environment has been overlooked as a green energy that can be extracted, generated, and consumed without any significant negative impact on the environment. The harvested energy can be used to enable new technology like wireless sensor networks. Technological developments in the realization of truly autonomous MEMS devices and energy storage systems have made acoustic energy harvesting (AEH) an increasingly viable technology. AEH is the process of converting high-intensity and continuous acoustic waves from the environment into electrical energy by using an acoustic transducer or resonator. AEH is not as popular as other types of energy harvesting methods since sound waves have a lower energy density and such energy can only be harvested in very noisy environments. However, the energy requirements for certain applications are also correspondingly low, and there is also a need to monitor noise in order to reduce noise pollution. So the ability to reclaim acoustic energy and store it in a usable electrical form enables a novel means of supplying power to relatively low-power devices. A quarter-wavelength straight-tube acoustic resonator is introduced as an acoustic energy harvester, with polyvinylidene fluoride (PVDF) and PVDF doped with ZnO nanoparticle piezoelectric cantilever beams placed inside the resonator. When the resonator is excited by an incident acoustic wave at its first acoustic eigenfrequency, an amplified acoustic resonant standing wave is developed inside the resonator. The acoustic pressure gradient of the amplified standing wave then drives the vibration motion of the PVDF piezoelectric beams, generating electricity due to the direct piezoelectric effect. In order to maximize the amount of harvested energy, each PVDF and PVDF-ZnO piezoelectric beam has been designed to have the same structural eigenfrequency as the acoustic eigenfrequency of the resonator. With a single PVDF beam placed inside the resonator, the harvested voltage and power reach their maximum near the open inlet of the resonator tube, where the largest acoustic pressure gradient vibrates the PVDF beam. As the beam is moved toward the closed end of the resonator tube, the voltage and power gradually decrease due to the decreased acoustic pressure gradient. Multiple PVDF and PVDF-ZnO piezoelectric beams have been placed inside the resonator in two different configurations: aligned and zigzag. With the zigzag configuration, which has a more open path for acoustic air particle motion, significant increases in the harvested voltage and power have been observed. Due to the interruption of acoustic air particle motion caused by the beams, it is found that placing PVDF beams near the closed tube end is not beneficial. The total output voltage of the piezoelectric beams increases linearly as the incident sound pressure increases. This study therefore reveals that the proposed technique used to harvest sound wave energy has great potential for converting free energy into useful energy.
Keywords: acoustic energy, acoustic resonator, energy harvester, eigenfrequency, polyvinylidene fluoride (PVDF)
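For a quarter-wavelength (one end open, one end closed) tube, the first acoustic eigenfrequency follows approximately f1 = c/(4L). The sketch below evaluates this for a hypothetical tube; the tube length, radius, and end-correction factor are assumptions, not the dimensions used in the study.

```python
# Minimal sketch: first acoustic eigenfrequency of a quarter-wavelength
# straight-tube resonator, f1 = c / (4 * L_eff). All dimensions are
# illustrative assumptions.
c = 343.0            # speed of sound in air at ~20 C, m/s
L = 0.17             # hypothetical tube length, m
radius = 0.02        # hypothetical tube inner radius, m

L_eff = L + 0.61 * radius        # simple end correction for the open end
f1 = c / (4 * L_eff)

print(f"first eigenfrequency ~ {f1:.0f} Hz "
      f"(the piezoelectric beams would be tuned to the same frequency)")
```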
Procedia PDF Downloads 382
332 Pricing Techniques to Mitigate Recurring Congestion on Interstate Facilities Using Dynamic Feedback Assignment
Authors: Hatem Abou-Senna
Abstract:
Interstate 4 (I-4) is a primary east-west transportation corridor between the cities of Tampa and Daytona, serving commuter, commercial, and recreational traffic. I-4 is known to have severe recurring congestion during peak hours. The congestion spans about 11 miles in the evening peak period in the central corridor area, as it is considered the only non-tolled limited-access facility connecting the Orlando Central Business District (CBD) and the tourist attractions area (Walt Disney World). Florida officials had been skeptical of tolling I-4 prior to the recent legislation, and the public, through the media, had been complaining about the excessive number of toll facilities in Central Florida. So, in search of a plausible mitigation of the congestion on the I-4 corridor, this research was implemented to evaluate the effectiveness of different toll pricing alternatives that might divert traffic from I-4 to the toll facilities during the peak period. The network is composed of two main diverging limited-access highways, the freeway (I-4) and the toll road SR 417, in addition to two east-west parallel toll roads, SR 408 and SR 528, intersecting the above-mentioned highways at both ends. I-4 and toll road SR 408 are the routes most frequently used by commuters. SR 417 is a relatively uncongested toll road that is 15 miles longer than I-4 and carries $5 in tolls, compared to no monetary cost on I-4 for the same trip. The results of the calibrated Orlando PARAMICS network showed that percentages of route diversion vary from one route to another and depend primarily on the travel cost between specific origin-destination (O-D) pairs. Most drivers going from Disney (O1) or Lake Buena Vista (O2) to Lake Mary (D1) were found to have a high propensity towards using I-4, even when eliminating tolls and/or providing real-time information. However, a diversion from I-4 to SR 417 for these O-D pairs occurred only in the cases of the incident and lane closure on I-4, due to the increase in delay and travel costs, and when information was provided to travelers. Furthermore, drivers that diverted from I-4 to SR 417 and SR 528 did not gain significant travel-time savings. This was attributed to the limited extra capacity of the alternative routes in the peak period and the longer traveling distance. When the remaining origin-destination pairs were analyzed, average travel-time savings on I-4 ranged between 10 and 16%, amounting to 10 minutes at most, with a 10% increase in the network average speed. The propensity for diversion on the network increased significantly when tolls were eliminated on SR 417 and SR 528 while the tolls on SR 408 were doubled, together with the incident and lane-closure scenarios on I-4 and with real-time information provided. The toll roads were found to be a viable alternative to I-4 for these specific O-D pairs, depending on the user perception of the toll cost, which was reflected in their specific travel times. However, on the macroscopic level, it was concluded that route diversion through toll reduction or elimination on surrounding toll roads would only have a minimal impact on reducing I-4 congestion during the peak period.
Keywords: congestion pricing, dynamic feedback assignment, microsimulation, PARAMICS, route diversion
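Since route diversion is described as depending on travel cost between O-D pairs, one simple way to picture the trade-off is a generalized-cost comparison (toll plus value-of-time weighted travel time) with a logit split. The sketch below is purely illustrative: the tolls, travel times, value of time, and dispersion parameter are assumptions and are not outputs of the calibrated PARAMICS network.

```python
# Minimal sketch: compare routes by generalized cost and split demand with a
# simple logit model. All tolls, travel times, the value of time, and the
# dispersion parameter are hypothetical.
import math

value_of_time = 20.0 / 60.0     # $/min (assumed $20/h)

routes = {                      # travel time (min), toll ($) -- illustrative
    "I-4":    (45.0, 0.0),
    "SR 417": (50.0, 5.0),
}

cost = {name: t * value_of_time + toll for name, (t, toll) in routes.items()}

theta = 0.5                     # logit dispersion parameter (assumed)
denom = sum(math.exp(-theta * c) for c in cost.values())
share = {name: math.exp(-theta * c) / denom for name, c in cost.items()}

for name in routes:
    print(f"{name}: generalized cost ${cost[name]:.2f}, "
          f"predicted share {share[name]:.1%}")
```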
Procedia PDF Downloads 178
331 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage
Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni
Abstract:
Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard-issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may, therefore, hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and the hindrance to soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures to actual armour plate coverage in real time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at the Lodox Systems (Pty) Ltd head office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various measurement angles and heights, as well as human participants wearing correctly fitted body armour while positioned in supine, prone shooting, seated, and kneeling shooting postures. The sizing and metrics obtained from the Lodox eXero-dr were then confirmed using a verification board with known dimensions. Results indicated that the low-dose diagnostic X-ray has the capability to clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required in order to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality over the kneeling posture. The X-ray source and detector distances from the object must be standardised to control for possible magnification changes and for comparison purposes. To account for this, specific scanning heights and angles were identified to allow for parallel scanning of relevant areas. The low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures to external structures. This capability can be used for the re-evaluation of the anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.
Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage
Procedia PDF Downloads 119
330 Religious Capital and Entrepreneurial Behavior in Small Businesses: The Importance of Entrepreneurial Creativity
Authors: Waleed Omri
Abstract:
With the growth of the small business sector in emerging markets, developing a better understanding of what drives 'day-to-day' entrepreneurial activities has become an important issue for academicians and practitioners. Innovation, as an entrepreneurial behavior, revolves around individuals who creatively engage in new organizational efforts. In a similar vein, innovation behaviors and processes at the organizational member level are central to any corporate entrepreneurship strategy. Despite the broadly acknowledged importance of entrepreneurship and innovation at the individual level in the establishment of successful ventures, the literature lacks evidence on how entrepreneurs can effectively harness their skills and knowledge in the workplace. The existing literature illustrates that religion can impact the day-to-day work behavior of entrepreneurs, managers, and employees. Religious beliefs and practices could affect daily entrepreneurial activities by fostering mental abilities and traits such as creativity, intelligence, and self-efficacy. In the present study, we define religious capital as a set of personal and intangible resources, skills, and competencies that emanate from an individual’s religious values, beliefs, practices, and experiences and may be used to increase the quality of economic activities. Religious beliefs and practices give individuals religious satisfaction, which can lead them to perform better in the workplace. In addition, religious ethics and practices have been linked to various positive employee outcomes in terms of organizational change, job satisfaction, and entrepreneurial intensity. As investigations of their consequences beyond direct task performance are still scarce, we explore whether religious capital plays a role in entrepreneurs’ innovative behavior. In sum, this study explores the determinants of individual entrepreneurial behavior by investigating the relationship between religious capital and entrepreneurs’ innovative behavior in the context of small businesses. To further explain and clarify the religious capital-innovative behavior link, the present study proposes a model to examine the mediating role of entrepreneurial creativity. We use both Islamic work ethics (IWE) and Islamic religious practices (IRP) to measure Islamic religious capital. We use structural equation modeling with robust maximum likelihood estimation to analyze data gathered from 289 Tunisian small businesses and to explore the relationships among the above-described variables. In line with the theory of planned behavior, only religious work ethics are found to increase the innovative behavior of small businesses’ owner-managers. Our findings also clearly demonstrate that the connection between religious capital-related variables and innovative behavior is better understood if the influence of entrepreneurial creativity, as a mediating variable of the aforementioned relationship, is taken into account. By incorporating both religious capital and entrepreneurial creativity into the analysis of innovative behavior, this study provides several important practical implications for promoting the innovation process in small businesses.
Keywords: entrepreneurial behavior, small business, religion, creativity
Procedia PDF Downloads 242
329 The Administration of Infectious Diseases During the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10
Authors: Sofia Papadimitriou
Abstract:
INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter in the treatment process: it allows the maximum benefit of therapeutic intervention to be achieved at minimum cost and ensures the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent blood-sample data sets (total n=756) associated with human protein-protein interactions, each of which, at the transcription stage, expresses a different host-network response to viral and bacterial infections. The individual blood samples were subjected to a sequence of computational filters that identify a gene panel corresponding to an autonomous diagnostic score. The data set and the correspondence of the gene panel to the diagnostic score define the new Bangalore-Viral Bacterial (BL-VB) cohort. FINDINGS: We use a blood-based biomarker of 10 genes (Panel-VB) that has important prognostic value for distinguishing viral from bacterial infections, with a weighted average AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent sample sets (n=898). We derived a patient-level score (VB10) that provides significant diagnostic value, with a weighted average AUROC of 0.94 (95% CI: 0.91-0.98) in 2,996 patient samples from 56 public data sets from 19 different countries. We also studied VB10 in a new cohort from South India (BL-VB, n=56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in culture-negative, unspecified cases, (b) reflects the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including non-specific pathogens. We applied our VB10 score to publicly available COVID-19 data and found that it diagnosed viral infection in patient samples. RESULTS: The results of the study showed the diagnostic power of the biomarker VB10 as a test for the accurate diagnosis of acute infections and for monitoring recovery. We anticipate that it will help clinicians make decisions about prescribing antibiotics and that it can be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we are developing a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, to assist physicians in designing the optimal treatment regimen, to contribute to the proper use of antibiotics, and to reduce the burden of antimicrobial resistance (AMR).
Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score
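The abstract summarizes classifier performance as a sample-size-weighted average AUROC across independent cohorts. The sketch below shows one way such a summary can be computed with scikit-learn; the cohort sizes, labels, and scores are synthetic placeholders, not the study's data.

```python
# Minimal sketch: AUROC per cohort and a sample-size-weighted average, as one
# might summarise a gene-signature score across data sets. Labels
# (1 = viral, 0 = bacterial) and scores are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
cohorts = []
for n in (150, 300, 90):                       # hypothetical cohort sizes
    y = rng.integers(0, 2, size=n)             # true class labels
    score = y * 0.8 + rng.normal(0, 0.5, n)    # fake "VB10-like" score
    cohorts.append((y, score))

aucs = [roc_auc_score(y, s) for y, s in cohorts]
weights = [len(y) for y, _ in cohorts]
weighted_auc = np.average(aucs, weights=weights)

print("per-cohort AUROC:", [f"{a:.3f}" for a in aucs])
print(f"weighted average AUROC: {weighted_auc:.3f}")
```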
Procedia PDF Downloads 155
328 Influence of Gamma-Radiation Dosimetric Characteristics on the Stability of the Persistent Organic Pollutants
Authors: Tatiana V. Melnikova, Lyudmila P. Polyakova, Alla A. Oudalova
Abstract:
As a result of environmental pollution, agricultural production and foodstuffs inevitably contain residual amounts of persistent organic pollutants (POPs). Special attention must be given to organic pollutants, including various organochlorinated pesticides (OCPs). Among the priority OCPs are DDT (and its metabolite DDE), alpha-HCH, and gamma-HCH (lindane). These substances are controlled according to the requirements of sanitary norms and rules. At the same time, it is often overlooked that the primary product may undergo technological processing (in particular, irradiation treatment), as a result of which the physicochemical forms of the initial polluting substances may be transformed. The goal of the present work was to study the radiation degradation of OCPs at various gamma-radiation dosimetric characteristics. The problems posed to achieve this goal were: to evaluate the content of priority OCPs in food, and to study the character of OCP degradation in model solutions (with micro-concentrations commensurate with their real content in agricultural and food products) depending on the dosimetric characteristics of gamma radiation. Qualitative and quantitative analysis of OCPs in food and model solutions was carried out with a Varian 3400 gas chromatograph (Varian, Inc., USA) and a Varian Saturn 4D chromatography-mass spectrometer (Varian, Inc., USA). Solutions of DDT, DDE, and the alpha- and gamma-isomers of HCH (0.01, 0.1, 1 ppm) were irradiated on the "Issledovatel" (60Co) and "Luch-1" (60Co) installations at a dose of 10 kGy with the dose rate varied from 0.0083 up to 2.33 kGy/sec. It was established experimentally that the OCP residual concentrations in individual samples of food products (fish, milk, cereal crops, meat, butter) are in the range of 10⁻¹ to 10⁻⁴ mg/kg, the value of which depends on territorial factors and natural migration processes. These results were used in the preparation of the OCP model solutions. The dependence of the degradation extent of OCPs on the gamma-radiation dose rate has a complex nature. According to our data, at a dose of 10 kGy the degradation extent of OCPs first increases, passes through a maximum (over the range 0.23-0.43 Gy/sec), and then decreases as the dose rate increases. The character of this dependence is preserved for various OCPs, in polar and nonpolar solvents, and does not vary with the concentration of the initial substance. The conditions for the maximal radiochemical yield of OCPs were also determined: gamma radiation with a dose of 10 kGy, a dose-rate range of 0.23-0.43 Gy/sec, an initial OCP concentration of 1 ppm, and the use of 2-propanol as solvent after preliminary removal of oxygen. Based on the finding that, in model OCP solutions, the degradation extent of the pesticides and the qualitative composition of the OCP radiolysis products depend on the dose rate, it was decided to continue research on the radiochemical transformations of OCPs in foodstuffs at various dose rates.
Keywords: degradation extent, dosimetric characteristics, gamma-radiation, organochlorinated pesticides, persistent organic pollutants
Procedia PDF Downloads 248
327 Pre-Cooling Strategies for the Refueling of Hydrogen Cylinders in Vehicular Transport
Authors: C. Hall, J. Ramos, V. Ramasamy
Abstract:
Hydrocarbon-based fuel vehicles are a major contributor to air pollution due to the harmful emissions produced, leading to a demand for cleaner fuel types. A leader in this pursuit is hydrogen, with its application in vehicles producing zero harmful emissions and the only by-product being water. To compete with the performance of conventional vehicles, hydrogen gas must be stored on board in cylinders at high pressures (35-70 MPa) and have a short refueling duration (approximately 3 mins). However, the fast-filling of hydrogen cylinders causes a significant rise in temperature due to the combination of the negative Joule-Thomson effect and the compression of the gas. This can lead to structural failure, and therefore a maximum allowable internal temperature of 85°C has been imposed by the International Organization for Standardization. The technological solution to tackle the issue of rapid temperature rise during the refueling process is to decrease the temperature of the gas entering the cylinder. Pre-cooling of the gas uses a heat exchanger and requires energy for its operation. Thus, it is imperative to determine the least amount of energy input required to lower the gas temperature, for cost savings. A validated universal thermodynamic model is used to identify an energy-efficient pre-cooling strategy. The model requires negligible computational time and is applied to previously validated experimental cases to optimize pre-cooling requirements. The pre-cooling characteristics include its location within the refueling timeline and its duration. A constant pressure-ramp rate is imposed to eliminate the effects of rapid changes in mass flow rate. A pre-cooled gas temperature of -40°C is applied, which is the lowest allowable temperature. The heat exchanger is assumed to be ideal, with no energy losses. The refueling of the cylinders is modeled with the pre-cooling split into ten percent time intervals. Furthermore, varying burst durations are applied in both the early and late stages of the refueling procedure. The model shows that pre-cooling in the later stages of the refueling process is more energy-efficient than early pre-cooling. In addition, the efficiency of pre-cooling towards the end of the refueling process is independent of the pressure profile at the inlet. This leads to the hypothesis that pre-cooled gas should be applied as late as possible in the refueling timeline and at very low temperatures. The model showed a 31% reduction in energy demand whilst achieving the same final gas temperature for a refueling scenario when pre-cooling was applied towards the end of the process. The identification of the most energy-efficient refueling approaches whilst adhering to the safety guidelines is imperative to reducing the operating cost of hydrogen refueling stations. Heat exchangers are energy-intensive, and thus reducing the energy requirement would lead to cost reduction. This investigation shows that pre-cooling should be applied as late as possible and for short durations.
Keywords: cylinder, hydrogen, pre-cooling, refueling, thermodynamic model
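As a rough illustration of why shortening the pre-cooling window reduces energy demand, the sketch below estimates the ideal heat-exchanger duty for cooling only part of the fill. It assumes a constant specific heat and a constant mass flow rate (consistent with a constant pressure-ramp rate); the dispensed mass, fill duration, and temperatures are illustrative, not outputs of the validated thermodynamic model.

```python
# Minimal sketch: ideal heat-exchanger duty for pre-cooling part of a fill,
# Q = m_dot * cp * (T_in - T_cool) * duration. Constant cp and constant mass
# flow rate are simplifying assumptions; all numbers are illustrative.
cp_h2 = 14_300.0          # J/(kg*K), approx. cp of hydrogen near ambient
t_fill = 180.0            # s, total refueling duration
m_total = 5.0             # kg, hydrogen dispensed
m_dot = m_total / t_fill  # kg/s, assumed constant

T_in = 25.0               # C, gas temperature before the heat exchanger
T_cool = -40.0            # C, pre-cooled gas temperature

def precool_energy(start_frac: float, end_frac: float) -> float:
    """Energy (J) to pre-cool the gas delivered between two fractions of the fill."""
    duration = (end_frac - start_frac) * t_fill
    return m_dot * duration * cp_h2 * (T_in - T_cool)

late = precool_energy(0.7, 1.0)    # pre-cool only the last 30% of the fill
full = precool_energy(0.0, 1.0)    # pre-cool the whole fill
print(f"late-stage pre-cooling: {late/1e6:.2f} MJ vs full fill: {full/1e6:.2f} MJ")
```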
Procedia PDF Downloads 95
326 Risk Factors Associated with Ectoprotozoa Infestation of Wild and Farmed Cyprinids
Authors: M. A. Peribanez, G. Illan, I. De Blas, A. Muniesa, I. Ruiz-Zarzuela
Abstract:
Intensive aquaculture is commonly associated with an increased incidence of parasites. However, in Spain, the recent intensification of cyprinid production has not been accompanied by knowledge of the parasites that develop in aquaculture facilities, the factors that affect their development and spread, or the transmission between wild and cultivated fish species. The present study focuses on the knowledge of environmental factors, as well as host-dependent factors, and their possible influence as risk factors on the incidence and intensity of parasitic infections. This work was conducted in the Duero River Basin, NW Spain. A total of 114 tench (Tinca tinca) were caught in a fish farm, and 667 specimens belonging to six non-tench cyprinid species were caught in five rivers. An exhaustive search and microscopic identification of protozoa on the skin and gills were carried out. Physical, chemical, and biological parameters of water samples from the capture points were determined. Only two ectoprotozoa were identified, Ichthyophthirius multifiliis and Tripartiella sp. For I. multifiliis, a high intensity of infection (more than 40 parasites on the body surface and more than 80 on the gills) was determined in farmed tench (14%) and in Iberian barbel (Luciobarbus bocagei) (91%) and Duero nase (Pseudochondrostoma duriense) (71%) from the middle stretches of the rivers. The prevalence was similar between farmed tench and cyprinids of the middle courses. Tripartiella sp. was found only in barbel (prevalence in middle stretches, 0.7%) and in farmed tench (63%), the latter host representing a high risk factor (odds ratio, OR = 1143) for the presence of the ciliate. There were no differences between the two species relative to the intensity of parasitization. Some of the physical, chemical, and microbiological water quality parameters appear to be risk factors for the presence of I. multifiliis, with a maximum OR of 8. Nevertheless, for Tripartiella sp., the risk is multiplied by 720 when the pH value exceeds 8.4 if we consider all of the data, and it increases more than 500-fold if we consider only the values recorded in the fish farm (529 for nitrates > 3 mg/l; 530 for total coliforms > 100 CFU/100 ml). However, the high prevalence and risk of infection by I. multifiliis and Tripartiella sp. in fish farms should be related to environmental factors that depend on the sampling point rather than to the direct influence of the physical-chemical and biological parameters of the water. The high pH value recorded in the fish farm (9.62 ± 0.76) is the only parameter that we consider may have a substantial direct influence. Chronic exposure to alkaline pH levels can be a chronic stress generator, predisposing fish to parasitization by Tripartiella sp. In conclusion, often minor changes in ecosystem conditions, both natural and man-made, can modify the host-parasite relationship, resulting in an increase in the prevalence and intensity of parasitic infections in populations of cyprinids, sometimes causing disease outbreaks.
Keywords: cyprinids, fish, parasites, protozoa, risk factors
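The odds ratios quoted above come from 2x2 exposure-by-infection tables. The sketch below shows the standard calculation of an odds ratio with a 95% confidence interval (Woolf/logit method) from such a table; the counts are hypothetical, not the study's data.

```python
# Minimal sketch: odds ratio and 95% CI (Woolf/logit method) from a 2x2 table
# of exposure (e.g., pH > 8.4) vs. infection status. Counts are hypothetical.
import math

#                 infected  not infected
a, b = 30, 10     # exposed
c, d = 5, 60      # not exposed

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```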
Procedia PDF Downloads 111
325 Polarimetric Study of the Gelatin/Carboxymethylcellulose System in the Food Field
Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed
Abstract:
Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability, and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interactions and on their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a mixture of a fibrous protein (gelatin) and an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic, and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications. The functional properties of this anionic polysaccharide can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Its complex synthesis results in an extracellular assembly organized at several levels. Collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers. Each level corresponds to an organization recognized by the cellular and metabolic system. Gelatin lends itself to this approach: the formation of a gelatin gel involves the triple-helical folding of denatured collagen chains. This gel has been the subject of numerous studies, and it is now known that its properties depend only on the proportion of triple helices forming the network. Chemical modification of this system is quite well controlled. Observing the dynamics of the triple helix may therefore be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Gelatin is central to many industrial processes; understanding and analyzing the molecular dynamics induced by the triple helix in gelatin transitions can have great economic importance in all fields, especially food. The goal is to understand the possible mechanisms involved depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the α helix are strongly influenced by the nature of the medium. Our goal is to minimize as much as possible the changes in the α-helix structure, in order to keep gelatin more stable and to protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the helicity rate of gelatin.
Keywords: gelatin, sodium carboxymethylcellulose, interaction gelatin-NaCMC, the rate of helicity, polarimetry
Procedia PDF Downloads 311
324 Understanding the Diversity of Antimicrobial Resistance among Wild Animals, Livestock and Associated Environment in a Rural Ecosystem in Sri Lanka
Authors: B. M. Y. I. Basnayake, G. G. T. Nisansala, P. I. J. B. Wijewickrama, U. S. Weerathunga, K. W. M. Y. D. Gunasekara, N. K. Jayasekera, A. W. Kalupahana, R. S. Kalupahana, A. Silva- Fletcher, K. S. A. Kottawatta
Abstract:
Antimicrobial resistance (AMR) has attracted significant attention worldwide as an emerging threat to public health. Understanding the role of livestock and wildlife, together with the shared environment, in the maintenance and transmission of AMR is of utmost importance, given their interactions with humans, for combating the issue in a One Health approach. This study aims to investigate the extent of AMR distribution among wild animals, livestock, and the environment cohabiting in a rural ecosystem in Sri Lanka: Hambegamuwa. A one-square-km area at Hambegamuwa was mapped using GPS as the sampling area. The study was conducted for a period of five months from November 2020. Voided fecal samples were collected from 130 wild animals and 123 livestock (buffalo, cattle, chicken, and turkey), along with 36 soil and 30 water samples associated with livestock and wildlife. From the samples, Escherichia coli (E. coli) was isolated, and the AMR profiles were investigated for 12 antimicrobials using the disk diffusion method following the CLSI standard. Seventy percent (91/130) of wild animal, 93% (115/123) of livestock, 89% (32/36) of soil, and 63% (19/30) of water samples were positive for E. coli. A maximum of two E. coli isolates from each sample, to a total of 467, were tested for sensitivity, of which 157, 208, 62, and 40 were from wild animals, livestock, soil, and water, respectively. The highest resistance in E. coli from livestock (13.9%) and wild animals (13.3%) was to ampicillin, followed by streptomycin. Apart from that, E. coli from livestock and wild animals revealed resistance mainly against tetracycline, cefotaxime, trimethoprim/sulfamethoxazole, and nalidixic acid at levels below 10%. Ten cefotaxime-resistant E. coli isolates were reported from wild animals, including four elephants, two land monitors, a pigeon, a spotted dove, and a monkey, which was a significant finding. E. coli from soil samples reflected resistance primarily against ampicillin, streptomycin, and tetracycline at levels lower than those in livestock/wildlife. Two water samples had cefotaxime-resistant E. coli as the only resistant isolates out of the 30 water samples tested. Of the total E. coli isolates, 6.4% (30/467) were multi-drug resistant (MDR), comprising 18, 9, and 3 isolates from livestock, wild animals, and soil, respectively. Among the 18 livestock MDRs, the majority (13/18) were from poultry. The nine wild animal MDRs were from spotted dove, pigeon, land monitor, and elephant. Based on CLSI standard criteria, 60 E. coli isolates (40, 16, and 4 from livestock, wild animals, and the environment, respectively) were screened as possible extended-spectrum β-lactamase (ESBL) producers. Despite this being a rural ecosystem, AMR and MDR are prevalent, even if at low levels. E. coli from livestock, wild animals, and the environment reflected a similar spectrum of AMR, with ampicillin, streptomycin, tetracycline, and cefotaxime being the predominant antimicrobials of resistance. Wild animals may have acquired AMR via direct contact with livestock or via the environment, as antimicrobials are rarely used in wild animals. A source attribution study including the effects of the natural environment on AMR can be proposed, as the presence of AMR in this less contaminated rural ecosystem is alarming.
Keywords: AMR, Escherichia coli, livestock, wildlife
Procedia PDF Downloads 215
323 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography
Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner
Abstract:
Nowadays, three-dimensional cone beam CT (CBCT) has become a widespread routine clinical imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections, which could therefore lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations in order to maximize the performance of the reconstructed image at particular regions of interest. To achieve this, we developed a box phantom consisting of several small polytetrafluoroethylene target spheres placed at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D point spread function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the full width at half maximum (FWHM) of the local PSFs, each related to a particular target. A lower FWHM value indicates better spatial resolution of the reconstruction at the target area. One important feature of interventional radiology is that we have very well-known imaging targets, as prior knowledge of patient anatomy (e.g., a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and consider it as the digital phantom in our simulations to find the optimal trajectory for a specific target. Based on the simulation phase, we obtain the optimal trajectory, which can then be applied on the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and real data acquisition. Our experimental results, based on both simulation and real data, show that our proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections in order to localize the targets. Our results show that the proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory while using just one-third of the projections. Conclusion: We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize radiation.
Keywords: CBCT, C-arm, reconstruction, trajectory optimization
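The FWHM of a local PSF is typically read off a line profile through a reconstructed target. The sketch below estimates the FWHM of a 1D profile by interpolating the half-maximum crossings; the Gaussian profile is synthetic, standing in for a profile extracted from a CBCT reconstruction.

```python
# Minimal sketch: estimate the FWHM of a 1D profile through a reconstructed
# target (a local PSF) by interpolating where the profile crosses half of its
# maximum. The Gaussian profile here is synthetic.
import numpy as np

x = np.linspace(-5.0, 5.0, 501)                 # position along profile, mm
sigma = 0.8
profile = np.exp(-x**2 / (2 * sigma**2))        # synthetic PSF profile

half = profile.max() / 2
above = np.where(profile >= half)[0]
i_left, i_right = above[0], above[-1]

def interp_crossing(i_lo, i_hi):
    """Linear interpolation of the half-maximum crossing between two samples."""
    x0, x1, y0, y1 = x[i_lo], x[i_hi], profile[i_lo], profile[i_hi]
    return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

left = interp_crossing(i_left - 1, i_left)
right = interp_crossing(i_right, i_right + 1)
fwhm = right - left

print(f"FWHM = {fwhm:.3f} mm (Gaussian theory: {2.3548 * sigma:.3f} mm)")
```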
Procedia PDF Downloads 131
322 The Community Stakeholders’ Perspectives on Sexual Health Education for Young Adolescents in Western New York, USA: A Qualitative Descriptive Study
Authors: Sadandaula Rose Muheriwa Matemba, Alexander Glazier, Natalie M. LeBlanc
Abstract:
In the United States, up to 10% of girls and 22% of boys aged 10-14 years have had sex, 5% of them had their first sex before 11 years of age, and the age of first sexual encounter has been reported to be as early as 8 years. Over 4,000 adolescent girls aged 10-14 years become pregnant every year, and 2.6% of the abortions in 2019 were among adolescents below 15 years. Despite these negative outcomes, little research has been conducted to understand the sexual health education offered to young adolescents ages 10-14. Early sexual health education is one of the most effective strategies to help lower the rate of early pregnancies, HIV infections, and other sexually transmitted infections. Such knowledge is necessary to inform best practices for supporting the healthy sexual development of young adolescents and preventing adverse outcomes. This qualitative descriptive study was conducted to explore community stakeholders’ experiences in sexual health education for young adolescents ages 10-14 and to ascertain the young adolescents’ sexual health support needs. Maximum variation purposive sampling was used to recruit a total sample of 13 community stakeholders, including health education teachers, members of youth-based organizations, and Adolescent Clinic providers in Rochester, New York, United States of America, from April to June 2022. Data were collected through semi-structured individual in-depth interviews and were analyzed using MAXQDA following a conventional content analysis approach. Triangulation, team analysis, and respondent validation were also employed to enhance study rigor. The participants were predominantly female (92.3%) and comprised Caucasians (53.8%), Black/African Americans (38.5%), and Indian-Americans (7.7%), with ages ranging from 23 to 59. Four themes emerged: the perceived need for early sexual health education, preferred timing to initiate sexual health conversations, perceived age-appropriate content for young adolescents, and initiating sexual health conversations with young adolescents. The participants described encouraging and concerning experiences. Most participants were concerned that young adolescents are living in a sexually driven environment and are not given the sexual health education they need, even though they are open to learning sexual health materials. There was consensus on the need to initiate sexual health conversations early, at 4 years of age or younger, to standardize sexual health education in schools, and to make age-appropriate sexual health education progressive. These results show that early sexual health education is essential if young adolescents are to delay sexual debut and prevent early pregnancies, and if the goal of ending the HIV epidemic is to be achieved. However, research is needed on a larger scale to understand how best to implement sexual health education among young adolescents and to inform interventions for implementing contextually relevant sexuality education for this population. These findings call for increased multidisciplinary efforts in promoting early sexual health education for young adolescents.
Keywords: community stakeholders’ perspectives, sexual development, sexual health education, young adolescents
Procedia PDF Downloads 77
321 Unification of Lactic Acid Bacteria and Aloe Vera for Healthy Gut
Authors: Pavitra Sharma, Anuradha Singh, Nupur Mathur
Abstract:
There exist more than 100 trillion bacteria in the digestive system of human beings. Such bacteria are referred to as the gut microbiota. The gut microbiota comprises around 75% of our immune system. The bacteria that comprise the gut microbiota are unique to every individual, and their composition keeps changing with time owing to factors such as the host’s age, diet, genes, environment, and external medication. Of these factors, the variable easiest to control is one’s diet. By modulating one’s diet, one can ensure an optimal composition of the gut microbiota, yielding several health benefits. Prebiotics and probiotics are two compounds that have been considered as viable options to modulate the host’s diet. Prebiotics are basically plant products that support the growth of good bacteria in the host’s gut. Examples include garden asparagus, aloe vera, etc. Probiotics are living microorganisms that exist in our intestines and play an integral role in promoting digestive health and supporting our immune system in general. Examples include yogurt, kimchi, kombucha, etc. In the context of modulating the host’s diet, the key attribute of prebiotics is that they support the growth of probiotics. By developing the right combination of prebiotics and probiotics, food products or supplements can be created to enhance the host’s health. An effective combination of prebiotics and probiotics that yields health benefits to the host is referred to as a synbiotic. Synbiotics comprise an optimal proportion of prebiotics and probiotics; their application benefits the host’s health more than the application of prebiotics and probiotics used in isolation. When applied to food supplements, synbiotics preserve the beneficial probiotic bacteria during the storage period and during the bacteria’s passage through the intestinal tract. When applied to the gastrointestinal tract, the composition of the synbiotics assumes paramount importance. The reason is that, for synbiotics to be effective in the gastrointestinal tract, the chosen probiotic must be able to survive in the stomach’s acidic environment and manifest tolerance towards bile and pancreatic secretions. Further, not every prebiotic stimulates the growth of a particular probiotic. The prebiotic chosen should be one that not only maintains balance in the host’s digestive system but also provides the required nutrition to the probiotics. Hence, in each application of synbiotics, the prebiotic-probiotic combination needs to be carefully selected. Once the combination is finalized, the exact proportion of prebiotics and probiotics to be used needs to be considered. When determining this proportion, only that amount of a prebiotic should be used that activates the metabolism of the required number of probiotics. It was observed that while probiotics are active in both the small and large intestine, the effect of prebiotics is observed primarily in the large intestine. Hence, in the host’s small intestine, synbiotics are likely to have the maximum efficacy. In the small intestine, prebiotics not only assist in the growth of probiotics, but they also enable probiotics to exhibit a higher tolerance to pH levels, oxygenation, and intestinal temperature.
Keywords: microbiota, probiotics, prebiotics, synbiotics
Procedia PDF Downloads 134
320 Preliminary Design, Production and Characterization of a Coral and Alginate Composite for Bone Engineering
Authors: Sthephanie A. Colmenares, Fabio A. Rojas, Pablo A. Arbeláez, Johann F. Osma, Diana Narvaez
Abstract:
The loss of functional tissue is a ubiquitous and expensive health care problem, with very limited treatment options for these patients. The gold standard for large bone damage is cadaveric bone used as an allograft with stainless steel support; however, this solution only applies to bones with simple morphologies (long bones), has a limited material supply, and presents long-term problems regarding mechanical strength, integration, differentiation, and induction of native bone tissue. Therefore, the fabrication of a scaffold with biological, physical, and chemical properties similar to human bone, together with a fabrication method allowing morphology manipulation, is the focus of this investigation. Towards this goal, an alginate and coral matrix was created using two production techniques; the coral was chosen because of its chemical composition and the alginate due to its compatibility and mechanical properties. In order to construct the coral-alginate scaffold, the following methodology was employed: cleaning of the coral, its pulverization, scaffold fabrication, and finally mechanical and biological characterization. The experimental design had two factors, milling method and proportion of alginate and coral, with two and three levels, respectively, using 5 replicates. The coral was cleaned with sodium hypochlorite and hydrogen peroxide in an ultrasonic bath. Then, it was milled with both a horizontal mill and a ball mill in order to evaluate the morphology of the particles obtained. After this, using a combination of alginate and coral powder with water as a binder, scaffolds of 1 cm³ were printed with a Spectrum Z510 3D printer. This resulted in solid cubes that were resistant to small compressive stresses. Then, using an ESQUIM DP-143 silicone mold, the constructs used for the mechanical and biological assays were made. An INSTRON 2267 was used for the compression tests; the density and porosity were calculated with an analytical balance, and the biological tests were performed using cell cultures with VERO fibroblasts, with scanning electron microscopy (SEM) as the visualization tool. The Young’s moduli were dependent on the pulverization method, the proportion of coral and alginate, and the interaction between these factors. The maximum value was 5.4 MPa for the 50/50 proportion of alginate and horizontally milled coral. The biological assay showed more extracellular matrix in the scaffolds containing more alginate and less coral. The density and porosity were proportional to the amount of coral in the powder mix. These results showed that this composite has potential as a biomaterial, but its behavior is elastic with a small Young’s modulus, which leads to the conclusion that the application may not be for long bones but for tissues similar to cartilage.
Keywords: alginate, biomaterial, bone engineering, coral, Porites asteroids, SEM
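The Young's modulus reported above is, in essence, the slope of the initial linear region of the stress-strain curve from the compression test. The sketch below shows that calculation for a 1 cm cube specimen; the force-displacement values are synthetic placeholders tuned near the quoted 5.4 MPa, not INSTRON output.

```python
# Minimal sketch: estimate Young's modulus from compression test data as the
# slope of the initial linear region of the stress-strain curve. The force-
# displacement values are synthetic placeholders.
import numpy as np

side = 10.0e-3                      # m, cube side (1 cm^3 specimen)
area = side ** 2                    # m^2, loaded cross-section
height = side                       # m, initial specimen height

displacement = np.linspace(0, 0.5e-3, 20)               # m (synthetic)
force = 5.4e6 * area * (displacement / height)           # N, ~5.4 MPa modulus
force += np.random.default_rng(2).normal(0, 0.05, 20)    # measurement noise

strain = displacement / height
stress = force / area                                     # Pa

linear = strain < 0.03                                    # assumed linear region
E = np.polyfit(strain[linear], stress[linear], 1)[0]      # slope = modulus
print(f"Young's modulus ~ {E/1e6:.2f} MPa")
```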
Procedia PDF Downloads 253319 Hedonic Pricing Model of Parboiled Rice
Authors: Roengchai Tansuchat, Wassanai Wattanutchariya, Aree Wiboonpongse
Abstract:
Parboiled rice is one of the most important food grains and is classified as a cereal product. In 2015, parboiled rice accounted for more than 14.34% of total rice trade. The major parboiled rice exporting countries are Thailand and India, while many countries in Africa and the Middle East, such as Nigeria, South Africa, the United Arab Emirates, and Saudi Arabia, are parboiled rice importing countries. In the global rice market, parboiled rice pricing differs from white rice pricing because parboiled rice is a semi-processed product (soaking, steaming and drying), which affects its color and texture. Therefore, parboiled rice export pricing does not depend only on the trade volume, length of grain, and percentage of broken rice or purity, but also on rice seed attributes such as color, whiteness, consistency of color and whiteness, and texture. In addition, the parboiled rice price may depend on the country of origin and other attributes, such as certification mark, label, packaging, and sales locations. The objectives of this paper are to study the attributes of parboiled rice sold in different countries and to evaluate the relationship between parboiled rice price in different countries and their attributes by using a hedonic pricing model. These results are useful for product development and the design of marketing strategies. A total of 141 samples of parboiled rice were collected from 5 major parboiled rice consumption countries, namely Nigeria, South Africa, Saudi Arabia, United Arab Emirates and Spain. The physicochemical and optical properties, namely size and shape of seed, colour (L*, a*, and b*), parboiled rice texture (hardness, adhesiveness, cohesiveness, springiness, gumminess, and chewiness), nutrition (moisture, protein, carbohydrate, fat, and ash), amylose, packaging, country of origin, and label are considered as explanatory variables. The results from the parboiled rice analysis revealed that most samples are classified as long grain and slender. The highest average whiteness value is found in the parboiled rice sold in South Africa. The amylose analysis shows that most of the parboiled rice is non-glutinous rice, classified in the intermediate amylose content range, and the maximum value was found in the United Arab Emirates. The hedonic pricing model showed that size and shape are statistically significant key factors in determining parboiled rice price. Among the colour attributes, the brightness value (L*) and the red-green value (a*) are statistically significant, but the yellow-blue value (b*) is insignificant. In addition, the texture attributes that significantly affect the parboiled rice price are hardness, adhesiveness, cohesiveness, and gumminess. The findings could help parboiled rice millers, exporters and retailers formulate better production and marketing strategies by focusing on these attributes.Keywords: hedonic pricing model, optical properties, parboiled rice, physicochemical properties
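As an illustration of the hedonic pricing approach described above, the following Python sketch fits a semi-log hedonic regression of price on a few physicochemical and optical attributes. The data, column names and chosen functional form are assumptions for demonstration; the authors' actual specification may differ.

```python
# Hedged sketch of a hedonic pricing regression: ln(price) on grain attributes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical observations: retail price (USD/kg) plus a subset of attributes.
data = pd.DataFrame({
    "price":    [1.10, 1.35, 0.95, 1.50, 1.20, 1.05, 1.40, 1.25, 1.15, 1.30],
    "length":   [6.8, 7.2, 6.5, 7.4, 7.0, 6.6, 7.3, 6.9, 6.7, 7.1],   # grain length (mm)
    "L_star":   [62, 68, 60, 70, 65, 61, 69, 64, 63, 66],             # brightness L*
    "a_star":   [3.1, 2.4, 3.5, 2.2, 2.8, 3.3, 2.3, 2.9, 3.0, 2.6],   # red-green a*
    "hardness": [55, 60, 52, 63, 58, 54, 61, 57, 56, 59],             # texture (N)
    "amylose":  [22, 24, 21, 25, 23, 22, 24, 23, 22, 24],             # % dry basis
})

# Semi-log hedonic form: ln(price) = b0 + sum_i b_i * attribute_i + error.
# Country-of-origin or packaging dummies (e.g. C(country)) can be appended once
# the sample is large enough to support them.
model = smf.ols("np.log(price) ~ length + L_star + a_star + hardness + amylose",
                data=data).fit()
print(model.params.round(3))    # implicit attribute prices (semi-elasticities)
print(model.pvalues.round(3))
```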
Procedia PDF Downloads 330318 Association of Body Composition Parameters with Lower Limb Strength and Upper Limb Functional Capacity in Quilombola Remnants
Authors: Leonardo Costa Pereira, Frederico Santos Santana, Mauro Karnikowski, Luís Sinésio Silva Neto, Aline Oliveira Gomes, Marisete Peralta Safons, Margô Gomes De Oliveira Karnikowski
Abstract:
In Brazil, projections of population aging follow world projections: the birth rate tends to be surpassed by the mortality rate around the year 2045. Historically, the population of Brazilian blacks suffered for several centuries from the oppression of dominant classes. One group of blacks stands out in relation to territorial, historical and social aspects: for centuries they have isolated themselves in small communities in order to maintain their freedom and culture. The isolation of the Quilombola communities generated socioeconomic effects as well as consequences for the health of these communities. Thus, the objective of the present study is to verify the association of body composition parameters with lower and upper limb strength and functional capacity in Quilombola remnants. The research was approved by the ethics committee (1,771,159). Anthropometric evaluations of hip and waist circumference, body mass and height were performed. In order to assess body composition, the relationship between stature and body mass (BM) was calculated, generating the body mass index (BMI), and a dual-energy X-ray absorptiometry (DEXA) test was performed. The Timed Up and Go (TUG) test was used to evaluate functional capacity, and a maximum repetition test (1MR) for knee extension and handgrip (HG) was applied for strength analysis. Statistical analysis was performed using the statistical package SPSS 22.0. The Shapiro-Wilk normality test was performed, and Pearson or Spearman tests were adopted for the possible correlations, as appropriate. The results identified that the sample (n = 18) was composed of 66.7% female individuals with a mean age of 66.07 ± 8.95 years. The sample’s body fat percentage (%BF) (35.65 ± 10.73) exceeds the recommendations for the age group, as do the anthropometric parameters of hip (90.91 ± 8.44 cm) and waist circumference (80.37 ± 17.5 cm). The relationship between height (1.55 ± 0.1 m) and body mass (63.44 ± 11.25 Kg) generated a BMI of 24.16 ± 7.09 Kg/m², which was considered normal. The TUG performance was 10.71 ± 1.85 s. In the 1MR test, 46.67 ± 13.06 Kg was obtained, and in the HG test, 23.93 ± 7.96 Kgf. Correlation analyses were characterized by the high frequency of significant correlations for the height, dominant arm mass (DAM), %BF, 1MR and HG variables. In addition, correlations between HG and BM (r = 0.67, p = 0.005), height (r = 0.51, p = 0.004) and DAM (r = 0.55, p = 0.026) were also observed. The strength of the lower limbs correlates with BM (r = 0.69, p = 0.003), height (r = 0.62, p = 0.01) and DAM (r = 0.772, p = 0.001). In this way, we can conclude that the simple spatial relationship of mass and height is not the only influence on predictive parameters of strength or functionality, and verification of body composition conditions is also important. For this population, height seems to be a good predictor of strength and body composition.Keywords: African Continental Ancestry Group, body composition, functional capacity, strength
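The following minimal Python sketch mirrors the descriptive and correlation workflow reported above (BMI from mass and height, Shapiro-Wilk normality check, then Pearson or Spearman correlation); the study itself used SPSS 22.0, and the data generated here are synthetic placeholders.

```python
# Minimal sketch of the BMI and correlation analysis, with synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 18
height_m = rng.normal(1.55, 0.10, n)                 # stature (m)
body_mass = rng.normal(63.4, 11.3, n)                # body mass (kg)
handgrip = 0.35 * body_mass + rng.normal(0, 3, n)    # handgrip strength (kgf)

bmi = body_mass / height_m**2                        # BMI = BM / height^2 (kg/m^2)
print(f"BMI: {bmi.mean():.2f} ± {bmi.std(ddof=1):.2f} kg/m^2")

# Choose the correlation test based on the normality of each variable.
normal = all(stats.shapiro(x).pvalue > 0.05 for x in (handgrip, body_mass))
r, p = (stats.pearsonr(handgrip, body_mass) if normal
        else stats.spearmanr(handgrip, body_mass))
print(f"{'Pearson' if normal else 'Spearman'}: r = {r:.2f}, p = {p:.3f}")
```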
Procedia PDF Downloads 273317 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent
Authors: Faidon Kyriakou, William Dempster, David Nash
Abstract:
Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, a FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can also be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the capability of predicting the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.Keywords: AAA, efficiency, finite element analysis, stent deployment
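A simple Python sketch of the validation metric described above is given below: per-ring offsets between experimental and simulated ring centres in the longitudinal and transverse directions, reported as mean and maximum values and checked against the 5 mm clinical bound. The coordinates are hypothetical.

```python
# Hedged sketch of the ring-position comparison between experiment and FE model.
import numpy as np

# Ring-centre coordinates (mm) in a common frame: columns = (longitudinal, transverse).
experiment = np.array([[10.0, 1.2], [25.5, 2.0], [41.2, 3.4], [57.0, 5.1], [72.8, 6.0]])
simulation = np.array([[10.8, 1.0], [26.1, 2.6], [42.5, 3.0], [58.9, 4.3], [74.0, 6.9]])

offsets = np.abs(experiment - simulation)   # per-ring |offset| in each direction
for name, col in (("longitudinal", 0), ("transverse", 1)):
    mean_d, max_d = offsets[:, col].mean(), offsets[:, col].max()
    print(f"{name:12s}: mean = {mean_d:.2f} mm, max = {max_d:.2f} mm, "
          f"within 5 mm bound: {max_d <= 5.0}")
```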
Procedia PDF Downloads 190316 Characteristics of Plasma Synthetic Jet Actuator in Repetitive Working Mode
Authors: Haohua Zong, Marios Kotsonis
Abstract:
Plasma synthetic jet actuator (PSJA) is a new concept of zero-net-mass-flow actuator which utilizes pulsed arc/spark discharge to rapidly pressurize gas in a small cavity under constant-volume conditions. The unique combination of high exit jet velocity (>400 m/s) and high actuation frequency (>5 kHz) provides a promising solution for high-speed, high-Reynolds-number flow control. This paper focuses on the performance of the PSJA in repetitive working mode, which is more relevant to future flow control applications. A two-electrode PSJA (cavity volume: 424 mm3, orifice diameter: 2 mm) together with a capacitive discharge circuit (discharge energy: 50 mJ-110 mJ) is designed to enable repetitive operation. A Time-Resolved Particle Image Velocimetry (TR-PIV) system working at 10 kHz is exploited to investigate the influence of discharge frequency on the performance of the PSJA. In total, seven cases are tested, covering a wide range of discharge frequencies (20 Hz-560 Hz). The pertinent flow features (shock wave, vortex ring and jet) remain the same for single-shot mode and repetitive working mode. A shock wave is issued prior to jet eruption. Two distinct vortex rings are formed in one cycle. The first one is produced by the starting jet, whereas the second one is related to the shock wave reflection in the cavity. A sudden pressure rise is induced at the throat inlet by the reflection of the primary shock wave, promoting the shedding of the second vortex ring. In one cycle, the jet exit velocity first increases sharply, then decreases almost linearly. Afterwards, an alternate occurrence of multiple jet stages and refresh stages is observed. By monitoring the dynamic evolution of the exit velocity in one cycle, some integral performance parameters of the PSJA can be deduced. As frequency increases, the jet intensity in the steady phase decreases monotonically. In the investigated frequency range, the jet duration time drops from 250 µs to 210 µs and the peak jet velocity decreases from 53 m/s to approximately 39 m/s. The jet impulse and the expelled gas mass (0.69 µN∙s and 0.027 mg at 20 Hz) decline by 48% and 40%, respectively. However, the electro-mechanical efficiency of the PSJA, defined by the ratio of jet mechanical energy to capacitor energy, does not show a significant difference (on the order of 0.01%). Fourier transformation of the temporal exit velocity signal indicates two dominant frequencies. One corresponds to the discharge frequency, while the other accounts for the alternation frequency of the jet stage and refresh stage in one cycle. The alternation period (approximately 300 µs) is independent of the discharge frequency and is possibly determined intrinsically by the actuator geometry. A simple analytical model is established to interpret the alternation of the jet stage and refresh stage. Results show that the dynamic response of the exit velocity to a small-scale disturbance (a jump in cavity pressure) can be treated as a second-order under-damped system. The oscillation frequency of the exit velocity, namely the alternation frequency, is positively proportional to the exit area, but inversely proportional to the cavity volume and throat length. The theoretical value of the alternation period (305 µs) agrees well with the experimental value.Keywords: plasma, synthetic jet, actuator, frequency effect
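One classical way to reproduce the reported geometric scaling (alternation frequency increasing with exit area and decreasing with cavity volume and throat length) is a Helmholtz-resonator-type estimate, sketched below in Python. This is not the authors' analytical model; the throat length and gas sound speed are assumed values, so the result is only an order-of-magnitude check against the roughly 300 µs period reported above.

```python
# Hedged order-of-magnitude sketch: Helmholtz-style natural frequency of the cavity,
# f = (c / 2*pi) * sqrt(A / (V * L)), which shares the reported dependence on exit
# area (A), cavity volume (V) and throat length (L).
import math

c = 400.0                     # speed of sound in the heated cavity gas (m/s), assumed
V = 424e-9                    # cavity volume, 424 mm^3 -> m^3 (from the abstract)
d = 2e-3                      # orifice (throat) diameter, 2 mm (from the abstract)
L = 5e-3                      # effective throat length (m), assumed
A = math.pi * (d / 2) ** 2    # exit area (m^2)

f = (c / (2 * math.pi)) * math.sqrt(A / (V * L))
print(f"estimated natural frequency: {f:.0f} Hz, period: {1e6 / f:.0f} µs")
```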
Procedia PDF Downloads 251315 Catalytic Decomposition of Formic Acid into H₂/CO₂ Gas: A Distinct Approach
Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy
Abstract:
Finding a sustainable alternative energy to fossil fuel is an urgent need as various environmental challenges in the world arise. Therefore, formic acid (FA) decomposition has been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a distinct energy vector. Liquid FA features considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/Kg that qualifies it in the prime seat as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H₂ production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. Our work suggests an approach that integrates designing a distinct catalyst featuring magnetic properties with a robust setup that minimizes experimental & measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of engrafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H₂ gas from FA. Using an ordinary magnet to collect the spent catalyst renders core-shell magnetic nanoparticles as the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a distinct approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H₂+CO₂) was measured by connecting the gas outlet to a measuring system based on the amount of the displaced water. The uniqueness of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, and continuous gas measurement, along with collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and CHN elemental analyzer provided us with details of catalyst preparation and facilitated new venues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups has led to appreciable improvements in terms of dispersion of the doped metals and eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters such as temperature (35-85°C), stirring speed (150-450rpm), catalyst loading (50-200mgr.), and Pd doping ratio (0.75-1.80wt.%) on gas yield was assessed by a Taguchi design-of-experiment based model. Experimental results showed that operating at a lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors with P-values 0.026 & 0.031, respectively.Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles
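As a hedged illustration of the Taguchi design-of-experiment analysis mentioned above, the Python sketch below computes larger-is-better signal-to-noise ratios for gas yield and compares mean S/N across factor levels. The run layout and yield values are hypothetical placeholders, not the authors' measurements.

```python
# Hedged sketch of a Taguchi-style main-effects analysis on gas yield.
import numpy as np
import pandas as pd

def sn_larger_is_better(y):
    """Larger-is-better S/N ratio: -10*log10(mean(1/y^2)) over the replicates."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical runs varying two of the studied factors (temperature, Pd doping).
runs = pd.DataFrame({
    "temperature_C": [35, 35, 60, 60, 85, 85],
    "Pd_wt_pct":     [0.75, 1.80, 0.75, 1.80, 0.75, 1.80],
    "gas_yield_mL":  [[410, 430], [520, 540], [390, 400],
                      [470, 480], [300, 310], [350, 360]],
})
runs["SN_dB"] = runs["gas_yield_mL"].apply(sn_larger_is_better)

# Main-effect view: mean S/N per factor level; a wider spread suggests a more
# influential factor, analogous to the significance ranking in the abstract.
for factor in ("temperature_C", "Pd_wt_pct"):
    print(runs.groupby(factor)["SN_dB"].mean().round(2), end="\n\n")
```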
Procedia PDF Downloads 54314 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder
Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi
Abstract:
With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has proliferated. One such condition is neurological disorder, which is rampant among the old-age population and is increasing at an unstoppable rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder prevalent in such patients, affecting the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson’s disease patients, and a tremor can also be a pure tremor (essential tremor). Patients suffering from tremor face enormous trouble in performing daily activities, and they always need a caretaker for assistance. In the clinics, the assessment of tremor is done through a manual clinical rating task such as the Unified Parkinson’s Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also affirmed a challenge in differentiating a Parkinsonian tremor from a pure tremor, which is essential in providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that keeps checking their health condition by coordinating with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor which can accurately differentiate a pure tremor from a Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in the neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device. Four tasks were designed in accordance with the Unified Parkinson’s Disease motor rating scale, which is used to assess rest, postural, intentional and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based and fast-Fourier-transform-based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantification of tremor severity. A minimum covariance maximum correlation energy comparison index was also developed and used as the input feature for various classification tools for distinguishing the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved using K-nearest neighbors and Support Vector Machine classifiers.Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor
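The Python sketch below illustrates the classification step described above with a deliberately simplified feature set (per-axis RMS and dominant FFT frequency standing in for the wavelet and cross-correlation features), fed to K-nearest-neighbours and SVM classifiers; all data are synthetic.

```python
# Hedged sketch: simplified accelerometer features + KNN/SVM classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 100.0  # sampling rate (Hz), assumed

def window_features(window):
    """window: (n_samples, 3) tri-axial acceleration -> small feature vector."""
    rms = np.sqrt(np.mean(window**2, axis=0))                # RMS per axis
    mag = np.linalg.norm(window, axis=1)
    mag = mag - mag.mean()                                   # remove DC offset
    spec = np.abs(np.fft.rfft(mag))
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / FS)
    dom_freq = freqs[np.argmax(spec[1:]) + 1]                # dominant non-DC frequency
    return np.concatenate([rms, [dom_freq]])

# Synthetic stand-in data: 40 windows, labels 0 = essential, 1 = Parkinsonian tremor.
rng = np.random.default_rng(1)
windows = rng.normal(size=(40, 200, 3))
labels = rng.integers(0, 2, size=40)
X = np.array([window_features(w) for w in windows])

for clf in (KNeighborsClassifier(n_neighbors=5),
            make_pipeline(StandardScaler(), SVC(kernel="rbf"))):
    scores = cross_val_score(clf, X, labels, cv=5)
    print(type(clf).__name__, "cross-validated accuracy:", scores.mean().round(2))
```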
Procedia PDF Downloads 153313 Dysphagia Tele Assessment Challenges Faced by Speech and Swallow Pathologists in India: Questionnaire Study
Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu
Abstract:
Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach for providing clinical services during the COVID-19 pandemic. As a result, the tele-assessment of dysphagia has increased in India. Aim: This study aimed to identify challenges faced by Indian SLPs while providing tele-assessment to individuals with dysphagia during the outbreak of COVID-19 from 2020 to 2021. Method: The current study was carried out after receiving approval from the institute’s institutional review board and ethics committee. The study was cross-sectional in nature and lasted from 2020 to 2021. The study enrolled participants who met its inclusion and exclusion criteria. Based on the sample size calculations, it was decided to recruit roughly 246 people. The research was done in three stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals verified the content of the questionnaire for faults and clarity. The questionnaire was written in Microsoft Word, converted to Google Forms, and sent to participants via e-mail and WhatsApp. SPSS software was used to analyze the data. Results: The study’s findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing the tele-assessment of dysphagia were highlighted, starting from the oromotor examination. Around 37.5% of SLPs said they do not undertake the OPME online because of difficulties in doing the assessment, such as the need for repeated instructions for patients and family members and trouble visualizing structures in various positions. The majority of SLPs found online assessments inefficient and time-consuming. A larger percentage of SLPs stated that they would not recommend tele-assessment of dysphagia to their colleagues. SLPs’ use of dysphagia assessment has decreased as a result of the pandemic. When it came to the amount of food, the majority of respondents proposed a small amount. Apart from positioning the patient for assessment and obtaining less cooperation from the family, most SLPs found that Internet speed was a source of concern and a barrier. Hearing impairment and the presence of a tracheostomy in patients with dysphagia proved to be the most difficult conditions to treat online. For patients on NPO, the majority of SLPs did not advise tele-assessment. Oral food residue was more visible in the anterior region of the oral cavity. The majority of SLPs reported more anterior than posterior leakage. Even though the majority of SLPs could detect aspiration from coughing, many found it difficult to discern the gurgling tone of speech after swallowing. Conclusion: The current study sheds light on the difficulties that Indian SLPs experience when assessing dysphagia via tele mode, indicating that tele-assessment of dysphagia is yet to gain importance in India.Keywords: dysphagia, teleassessment, challenges, Indian SLP
Procedia PDF Downloads 135312 Improved Food Security and Alleviation of Cyanide Intoxication through Commercialization and Utilization of Cassava Starch by Tanzania Industries
Authors: Mariam Mtunguja, Henry Laswai, Yasinta Muzanilla, Joseph Ndunguru
Abstract:
Starchy tuberous roots of cassava provide food for people but also find application in various industries. Recently, there have been concentrated research efforts to fully exploit its potential as a sustainable multipurpose crop. High starch yield is the important trait for commercial cassava production for the starch industries. Furthermore, the cyanide present in cassava roots poses a health challenge in the use of cassava for food. Farming communities where cassava is a staple food prefer bitter (highly cyanogenic) varieties as protection from predators and thieves. As a result, food-insecure farmers prefer growing bitter cassava, which has led to cyanide intoxication in these farming communities. Cassava farmers can benefit from marketing cassava to starch producers, thereby improving their income and food security. This will decrease dependency on cassava as a staple food, since with increased income farmers will be able to afford other food sources. To achieve this, adequate information is required on the right cassava cultivars and the appropriate harvesting period so as to maximize cassava production and profitability. This study aimed at identifying suitable cassava cultivars and the optimum time of harvest to maximize starch production. Six commonly grown cultivars were identified and planted in a randomized complete block design, and further analysis was done to assess variation in physicochemical characteristics, starch yield and cyanogenic potential across three environments. The analysis showed that there is a difference in physicochemical characteristics between landraces (p ≤ 0.05), which can be targeted to different industrial applications. Among landraces, dry matter (30-39%), amylose (11-19%), starch (74-80%) and reducing sugars content (1-3%) varied when expressed on a dry weight basis (p ≤ 0.05); however, only one of the six genotypes differed in crystallinity and mean starch granule particle size, while glucan chain distribution and granule morphology were the same. In contrast, the starch functionality features measured (swelling power, solubility, syneresis, and digestibility) differed (p ≤ 0.05). This was supported by partial least squares discriminant analysis (PLS-DA), which highlighted the divergence among the cassavas based on starch functionality, permitting suggestions for the targeted uses of these starches in diverse industries. The study also illustrated genotypic differences in starch yield and cyanogenic potential. Among landraces, Kiroba showed the potential for maximum starch yield (12.8 t ha-1), followed by Msenene (12.3 t ha-1) and, third, Kilusungu (10.2 t ha-1). The cyanide content of the cassava landraces was between 15 and 800 ppm across all trial sites. GGE biplot analysis further confirmed that Kiroba was a superior cultivar in terms of starch yield. Kilusungu had the highest cyanide content and an average starch yield; therefore, it can also be suitable for use in starch production.Keywords: cyanogen, cassava starch, food security, starch yield
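To illustrate the PLS-DA step reported above: scikit-learn has no dedicated PLS-DA estimator, so a common workaround, sketched below, is to regress one-hot-encoded genotype labels on the starch functionality features with PLSRegression and inspect the latent-variable scores. The feature values are hypothetical.

```python
# Hedged PLS-DA sketch: PLSRegression on one-hot genotype labels, then class
# centroids in the latent-variable space to visualise the separation.
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

# Hypothetical starch functionality measurements for three landraces.
df = pd.DataFrame({
    "genotype":      ["Kiroba"] * 3 + ["Msenene"] * 3 + ["Kilusungu"] * 3,
    "swelling":      [14.2, 13.8, 14.5, 11.0, 11.3, 10.8, 12.5, 12.9, 12.2],
    "solubility":    [8.1, 7.9, 8.4, 6.2, 6.0, 6.5, 7.1, 7.4, 6.9],
    "syneresis":     [3.0, 3.2, 2.9, 4.5, 4.8, 4.3, 3.8, 3.6, 4.0],
    "digestibility": [72, 74, 71, 65, 64, 66, 69, 70, 68],
})

X = df[["swelling", "solubility", "syneresis", "digestibility"]].to_numpy(dtype=float)
Y = pd.get_dummies(df["genotype"]).to_numpy(dtype=float)   # one-hot class matrix

pls = PLSRegression(n_components=2).fit(X, Y)
scores = pls.transform(X)                                  # latent-variable scores
for geno in df["genotype"].unique():
    centroid = scores[(df["genotype"] == geno).to_numpy()].mean(axis=0)
    print(f"{geno:10s} LV1 = {centroid[0]:6.2f}, LV2 = {centroid[1]:6.2f}")
```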
Procedia PDF Downloads 220311 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley
Authors: Sajana Suwal, Ganesh R. Nhemafuki
Abstract:
Evaluation of ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated to local geological and geotechnical conditions. It is evident from past earthquakes (e.g., 1906 San Francisco, USA; 1923 Kanto, Japan) that the local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the important influence of local geology on ground response. Observations from damaging earthquakes (e.g., Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L’Aquila, 2009) revealed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to the local amplification of seismic ground motion. Non-uniform damage patterns were also observed in the Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from the amplification of soft soil in Kathmandu are presented. A large amount of subsoil data was collected and used for defining an appropriate subsoil model for the Kathmandu Valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites of the Kathmandu Valley. In general, one-dimensional (1D) site-response analysis involves the excitation of a soil profile using the horizontal component of ground motion and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, the significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response analysis and the lack of accurate values of shear wave velocity and nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher for the non-linear analysis than for the equivalent linear analysis. Hence, the nonlinear behavior of soil highlights the urgent need to study the dynamic characteristics of the soft soil deposit so that site-specific design spectra can be developed for the Kathmandu Valley and resilient structures can be built against future damaging earthquakes.Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response
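For orientation, the Python sketch below evaluates the textbook closed-form amplification of a uniform, damped soil layer over rigid bedrock, |F(f)| ≈ 1/√(cos²(kH) + (D·kH)²) with k = 2πf/Vs. This is a single-layer linear approximation, not the DEEPSOIL equivalent-linear or non-linear models used in the study, and the layer properties are assumed values rather than Kathmandu data.

```python
# Hedged textbook sketch: linear amplification of one damped soil layer on rigid rock.
import numpy as np

H = 200.0     # soil thickness (m), assumed
Vs = 250.0    # shear-wave velocity (m/s), assumed
D = 0.05      # damping ratio, assumed

f = np.linspace(0.1, 10.0, 2000)          # frequency range (Hz)
kH = 2 * np.pi * f * H / Vs
amp = 1.0 / np.sqrt(np.cos(kH) ** 2 + (D * kH) ** 2)

f0 = Vs / (4 * H)                          # fundamental site frequency
print(f"fundamental frequency ~ {f0:.2f} Hz, "
      f"peak linear amplification ~ {amp.max():.1f}")
```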
Procedia PDF Downloads 289310 Implementation of Ecological and Energy-Efficient Building Concepts
Authors: Robert Wimmer, Soeren Eikemeier, Michael Berger, Anita Preisler
Abstract:
A relatively large percentage of energy and resource consumption occurs in the building sector. This concerns the production of building materials, the construction of buildings and also the energy consumption during the use phase. Therefore, the overall objective of this EU LIFE project “LIFE Cycle Habitation” (LIFE13 ENV/AT/000741) is to demonstrate innovative building concepts that significantly reduce CO₂ emissions, mitigate climate change and contain a minimum of grey energy over their entire life cycle. The project is being realised with the contribution of the LIFE financial instrument of the European Union. The ultimate goal is to design and build prototypes for carbon-neutral and “LIFE cycle”-oriented residential buildings and make energy-efficient settlements the standard of tomorrow in line with the EU 2020 objectives. To this end, a resource- and energy-efficient building compound is being built in Böheimkirchen, Lower Austria, which includes 6 living units and a community area as well as 2 single-family houses with a total usable floor surface of approximately 740 m². Different innovative straw bale construction types (load-bearing and prefabricated non-load-bearing modules), together with a highly innovative energy-supply system based on the maximum use of thermal energy for thermal energy services, are going to be implemented. Therefore, only renewable resources and alternative energies are used to generate thermal as well as electrical energy. This includes the use of solar energy for space heating, hot water and household appliances like the dishwasher or washing machine, but also a cooking place for the community area operated with thermal oil as the heat transfer medium at a higher temperature level. Solar collectors in combination with a biomass cogeneration unit and photovoltaic panels are used to provide thermal and electric energy for the living units according to the seasonal demand. The building concepts are optimised with the support of dynamic simulations. A particular focus is on the production and use of modular prefabricated components and building parts made of regionally available, highly energy-efficient, CO₂-storing renewable materials like straw bales. The building components will be produced collaboratively by local SMEs that are organised in an efficient way. The whole building process and its results are monitored and prepared for knowledge transfer and dissemination, including a trial living phase in the residential units to test and monitor the energy supply system and to involve stakeholders in the evaluation and dissemination of the applied technologies and building concepts. The realised building concepts should then be used as templates for a further modular extension of the settlement in a second phase.Keywords: energy-efficiency, green architecture, renewable resources, sustainable building
Procedia PDF Downloads 148309 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases, including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies that matched a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance. A parallel manual selection of all initially searched studies was conducted for validation. The final inclusion of studies was based on full-text evaluation, quality assessment, and meta-regression models using random effects. Sensitivity analysis and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the cluster associated with significant improvement exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster. Furthermore, BMI, weight gain, and disease duration were identified as factors that influenced cluster assignment, with patients having lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring the comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes.Keywords: HbA1C, T2DM, SBP, FBS
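A minimal Python sketch of the clustering step described above: standardise the changes in the six biomarkers, run K-means with k = 2, and read the cluster centroids to identify the improvement ("responder") group. The synthetic data below are placeholders for the extracted study-level values.

```python
# Hedged sketch of K-means clustering on biomarker changes (synthetic data).
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
cols = ["dHbA1C", "dFBS", "dLDL", "dHDL", "dSBP", "dVitD"]
# Synthetic responders (clear improvement) and non-responders (little change).
responders = rng.normal([-0.8, -15, -12, 4, -5, 20], [0.2, 4, 3, 1, 2, 5], size=(30, 6))
non_resp   = rng.normal([-0.1,  -2,  -1, 0, -1,  8], [0.2, 4, 3, 1, 2, 5], size=(30, 6))
X = pd.DataFrame(np.vstack([responders, non_resp]), columns=cols)

Xs = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xs)

# Cluster means back on the original scale: the cluster with the larger HbA1C drop
# corresponds to the improvement group described in the abstract.
print(X.groupby(km.labels_).mean().round(2))
```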
Procedia PDF Downloads 5308 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison
Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: The increased number of performed Computed Tomography (CT) examinations raises public concerns regarding the associated stochastic risk to patients. In its Publication 102, the International Commission on Radiological Protection (ICRP) emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named "VirtualDose". Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation cases (kVp, field collimation, mAs, pitch ...), and (2) detailed geometric and compositional information of dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. Simulated data were verified by clinical measurements. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the "Head" protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, three organs situated at the edges of the scan range show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing Computed Tomography exams.Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison
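The comparison metric used above can be reproduced with a few lines of Python, sketched below: per-organ percentage difference between the two software outputs, restricted to organs receiving at least 80% of the protocol maximum dose. Organ names and dose values are hypothetical.

```python
# Hedged sketch of the per-organ percentage dose difference between two software outputs.
import pandas as pd

doses = pd.DataFrame({
    "organ":          ["brain", "eye lens", "salivary glands", "thyroid", "oesophagus"],
    "software_A_mGy": [38.0, 42.5, 35.1, 6.0, 1.2],
    "software_B_mGy": [33.4, 45.0, 30.2, 7.9, 1.0],
})

# Keep only organs with a significant dose (>= 80% of the protocol maximum
# in the reference software), mirroring the selection rule in the abstract.
threshold = 0.8 * doses["software_A_mGy"].max()
significant = doses[doses["software_A_mGy"] >= threshold].copy()

significant["diff_pct"] = (100.0 * (significant["software_B_mGy"] - significant["software_A_mGy"])
                           / significant["software_A_mGy"])
print(significant[["organ", "diff_pct"]].round(1))
```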
Procedia PDF Downloads 161