Search results for: modeling accuracy
122 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki
Abstract:
Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also become of great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood-burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method of the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified to correspond to traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol
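As a rough illustration of the two-component idea described above (not the authors' calibrated 4λ-PAS/SMPS model), the following Python sketch splits an absorption coefficient measured at two wavelengths into fossil-fuel and wood-burning contributions, assuming fixed source-specific Ångström exponents; the wavelengths, exponents, and input values are placeholders.

```python
# Illustrative sketch: two-component apportionment of an absorption coefficient
# measured at two wavelengths, assuming fixed source-specific Angstrom exponents.
import numpy as np

def apportion_absorption(b_abs_short, b_abs_long, lam_short, lam_long,
                         aae_ff=1.0, aae_wb=2.0):
    """Split b_abs(lam_short) into fossil-fuel and wood-burning parts.

    Solves the 2x2 system
        b_short = b_ff + b_wb
        b_long  = b_ff*(lam_long/lam_short)**(-aae_ff) + b_wb*(lam_long/lam_short)**(-aae_wb)
    """
    r = lam_long / lam_short
    A = np.array([[1.0, 1.0],
                  [r ** (-aae_ff), r ** (-aae_wb)]])
    b = np.array([b_abs_short, b_abs_long])
    b_ff, b_wb = np.linalg.solve(A, b)
    return b_ff, b_wb

# Example: absorption coefficients in 1/Mm at 370 nm and 950 nm (made-up values)
b_ff, b_wb = apportion_absorption(60.0, 12.0, 370.0, 950.0)
print(f"fossil fuel: {b_ff:.1f} 1/Mm, wood burning: {b_wb:.1f} 1/Mm")
```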
Procedia PDF Downloads 229
121 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
The pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt pulley assemblies present a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed. Finite element analysis (FEA) is conducted for a pulley to investigate the stresses experienced on its inner and outer periphery. In heavy-duty applications such as automotive engines and industrial machines, the belt drive is the most frequently used mechanism for transmitting power. Usually, very heavy circular disks are used as pulleys. A pulley can be described as a drum and may have a groove between two flanges around the circumference. A rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact region in the process of motion transmission. The region may be the belt-pulley contact surface or the pulley-shaft contact surface. In 1882, Hertz solved the elastic contact problem for point contact and line contact of ideally smooth bodies. Since then, this theory has generally been utilized for computing the actual contact zone. Detailed stress analysis in the contact region of such pulleys is necessary to prevent early failure. In this paper, the results of the finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. Based on the literature on contact stress problems in a wide field of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA concepts. Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part as a component of an engine assembly using the well-known Paris equation. Digital Image Correlation (DIC) analyses have been performed using open-source software. From the displacements computed using the images acquired at the minimum and maximum force, the displacement field amplitude is computed. From these fields, the crack path is defined, and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used for the estimation of fatigue crack growth. Further work will extend the study to various rotating machinery applications such as rotating flywheel disks, jet engines, compressor disks, roller disk cutters etc., where the Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
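For readers unfamiliar with the Paris equation mentioned above, the following Python sketch illustrates how a fatigue life can be estimated by numerically integrating da/dN = C(ΔK)^m between an initial and a final crack length; the material constants, geometry factor, and stress range are assumed illustrative values, not results from this study.

```python
# Minimal sketch of fatigue-life estimation with the Paris equation,
# da/dN = C * (dK)^m, with dK = Y * d_sigma * sqrt(pi * a).
# Material constants C, m and geometry factor Y below are illustrative only.
import numpy as np

def cycles_to_failure(a0, af, d_sigma, C=3e-12, m=3.0, Y=1.12, da=1e-6):
    """Integrate da/dN numerically from initial crack size a0 to final size af (meters)."""
    a_values = np.arange(a0, af, da)
    dK = Y * d_sigma * np.sqrt(np.pi * a_values)   # stress intensity range, MPa*sqrt(m)
    dN_da = 1.0 / (C * dK ** m)                    # cycles per meter of crack growth
    return np.trapz(dN_da, a_values)               # total number of cycles

N = cycles_to_failure(a0=1e-3, af=10e-3, d_sigma=120.0)  # 120 MPa stress range
print(f"Estimated cycles to failure: {N:.3e}")
```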
Procedia PDF Downloads 125
120 Enhancing Seismic Resilience in Urban Environments
Authors: Beatriz González-Rodrigo, Diego Hidalgo-Leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-Martínez, Laura Navas-Sánchez, Belén Orta, Nicola Tarque, Orlando Hernández-Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-Oterino
Abstract:
Cities facing seismic hazard necessitate detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events, and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the involved countries. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions. Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.
Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability
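As a minimal illustration of the fragility curves referred to above, the sketch below evaluates the common lognormal form P(damage ≥ ds | IM) = Φ(ln(IM/θ)/β); the median capacity θ and dispersion β are assumed placeholder values, not parameters derived in this research.

```python
# Illustrative sketch of a lognormal fragility curve, a common form in seismic
# risk studies: P(damage >= ds | IM) = Phi(ln(IM / theta) / beta).
# The median capacity theta and dispersion beta below are assumed values.
import numpy as np
from scipy.stats import norm

def fragility(im, theta=0.35, beta=0.6):
    """Probability of reaching or exceeding a damage state at intensity measure im (e.g. PGA in g)."""
    return norm.cdf(np.log(im / theta) / beta)

for pga in (0.1, 0.2, 0.35, 0.5):
    print(f"PGA = {pga:.2f} g -> P(exceedance) = {fragility(pga):.2f}")
```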
Procedia PDF Downloads 69
119 Quantifying Impairments in Whiplash-Associated Disorders and Association with Patient-Reported Outcomes
Authors: Harpa Ragnarsdóttir, Magnús Kjartan Gíslason, Kristín Briem, Guðný Lilja Oddsdóttir
Abstract:
Introduction: Whiplash-Associated Disorder (WAD) is a health problem characterized by motor, neurological and psychosocial symptoms, stressing the need for a multimodal treatment approach. To achieve an individualized multimodal approach, prognostic factors need to be identified early using validated patient-reported and objective outcome measures. The aim of this study is to demonstrate the degree of association between patient-reported and clinical outcome measures of WAD patients in the subacute phase. Methods: Individuals (n=41) with subacute (≥1, ≤3 months) WAD (I-II), medium- to high-risk symptoms, or a neck pain rating ≥ 4/10 on the Visual Analog Scale (VAS) were examined. Outcome measures included measurements of movement control (Butterfly test) and cervical active range of motion (cAROM) using the NeckSmart system, a computer system using an inertial measurement unit (IMU) connected to a computer. The IMU sensor is placed on the participant’s head, and the participant receives visual feedback about the movement of the head. Patient-reported neck disability, pain intensity, general health, self-perceived handicap, central sensitization, and difficulties due to dizziness were measured using questionnaires. Excel and R statistical software were used for statistical analyses. Results: Forty-one participants, 15 males (37%) and 26 females (63%), mean (SD) age 36.8 (±12.7), underwent data collection. Mean amplitude accuracy (AA) (SD) in the Butterfly test for the easy, medium, and difficult paths was 2.4mm (0.9), 4.4mm (1.8), and 6.8mm (2.7), respectively. Mean cAROM (SD) for flexion, extension, left-, and right rotation was 46.3° (18.5), 48.8° (17.8), 58.2° (14.3), and 58.9° (15.0), respectively. Mean scores on the Neck Disability Index (NDI), VAS, Dizziness Handicap Inventory (DHI), Central Sensitization Inventory (CSI), and 36-Item Short Form Survey RAND version (RAND) were 43% (17.4), 7 (1.7), 37 (25.4), 51 (17.5), and 39.2 (17.7), respectively. Females showed significantly greater deviation in AA compared to males for the easy and medium Butterfly paths (p<0.05). A statistically significant moderate to strong positive correlation was found between the DHI and the easy (r=0.6, p=0.05), medium (r=0.5, p=0.05) and difficult (r=0.5, p<0.05) Butterfly paths, between the total RAND score and all cAROMs (r between 0.4-0.7, p≤0.05) except flexion (r=0.4, p=0.7), and between the NDI score and the CSI (r=0.7, p<0.01), VAS (r=0.5, p<0.01), and DHI (r=0.7, p<0.01) scores, respectively. Discussion: All patient-reported and objective measures were found to be outside the reference range. Results suggest females have worse movement control of the neck in the subacute WAD phase. However, no statistical difference based on gender was found in the patient-reported measures, suggesting females might have worse movement control than males in general in this phase. The correlation found between the DHI and the Butterfly test can be explained by the fact that the DHI measures proprioceptive symptoms like dizziness and eye movement disorders that can affect the outcome of movement control tests. A correlation was found between the total RAND score and cAROM, suggesting that a reduced range of motion affects quality of life. Significance: The NeckSmart system can detect abnormalities in cAROM, fine movement control, and kinesthesia of the neck. Results suggest females have worse movement control than males. Results show a moderate to high correlation between several patient-reported and objective measurements.
Keywords: whiplash associated disorders, car-collision, neck, trauma, subacute
Procedia PDF Downloads 70
118 Liposome Loaded Polysaccharide Based Hydrogels: Promising Delayed Release Biomaterials
Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita
Abstract:
Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity, etc.), polysaccharides have been studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms including hydrogels, hydrogel-based particles (or capsules), films etc. In these formulations, the polysaccharide-based materials are able to provide local delivery of the loaded therapeutic agents, but this delivery can be rapid and not easily time-controllable, due in particular to the burst effect. This leads to a loss in drug efficiency and lifetime. To overcome the consequences of the burst effect, systems involving liposomes incorporated into polysaccharide hydrogels appear as promising materials in tissue engineering, regenerative medicine and drug loading systems. Liposomes are spherical self-closed structures, composed of curved lipid bilayers, which enclose part of the surrounding solvent into their structure. The simplicity of production, their biocompatibility, their size and composition similar to those of cells, the possibility of size adjustment for specific applications, and the ability to load hydrophilic or/and hydrophobic drugs make them a revolutionary tool in nanomedicine and the biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as polysaccharides and gelatin (GEL) as polypeptide, and phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control this delivery, without any burst effect. Hydrogels based on CMC were covalently crosslinked using glutaraldehyde, whereas chitosan-based hydrogels were double crosslinked (ionically using sodium tripolyphosphate or sodium sulphate and covalently using glutaraldehyde). It has been proven that the liposome integrity is highly protected during the crosslinking procedure for the formation of the film network. Calcein was used as a model active compound for the delivery experiments. Multi-Lamellar Vesicles (MLV) and Small Uni-Lamellar Vesicles (SUV) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposomes evaluated) on the film surface as well as deeper (100 microns) in the film matrix. An obvious decrease of the burst effect was observed in the presence of liposomes, as well as a uniform increase of calcein release that continues even at large time scales. Liposomes act as an extra barrier to calcein release. Systems containing MLVs release higher amounts of calcein compared to systems containing SUVs, although these liposomes are more stable in the matrix and diffuse with difficulty. This difference comes from the higher quantity of calcein present within the MLVs in relation to their size. Modeling of the release kinetics curves was performed, and the release of hydrophilic drugs may be described by a multi-scale mechanism characterized by four distinct phases, each of them described by a different kinetics model (Higuchi equation, Korsmeyer-Peppas model etc.). Knowledge of such models will be a very interesting tool for designing new formulations for tissue engineering, regenerative medicine and drug delivery systems.
Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides
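As an illustration of the kinetics models named above, the following sketch fits the Korsmeyer-Peppas power law Mt/M∞ = k·tⁿ (with the Higuchi equation corresponding to n = 0.5) to cumulative release data; the time points and release fractions are invented for demonstration, not the study's measurements.

```python
# Sketch of fitting a release-kinetics model to cumulative calcein release data.
# Korsmeyer-Peppas: Mt/Minf = k * t**n  (the Higuchi equation corresponds to n = 0.5).
# The time points and release fractions below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * t ** n

t = np.array([1, 2, 4, 8, 16, 24], dtype=float)           # hours
release = np.array([0.08, 0.12, 0.18, 0.27, 0.38, 0.45])  # fraction released

(k, n), _ = curve_fit(korsmeyer_peppas, t, release, p0=(0.1, 0.5))
print(f"k = {k:.3f}, n = {n:.2f}")  # n close to 0.5 suggests Fickian (Higuchi-like) diffusion
```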
Procedia PDF Downloads 228
117 Discriminant Shooting-Related Statistics between Winners and Losers 2023 FIBA U19 Basketball World Cup
Authors: Navid Ebrahmi Madiseh, Sina Esfandiarpour-Broujeni, Rahil Razeghi
Abstract:
Introduction: Quantitative analysis of game-related statistical parameters is widely used to evaluate basketball performance at both individual and team levels. Non-free-throw shooting plays a crucial role as the primary scoring method, holding significant importance in the game's technical aspect. The predictive value of game-related statistics in relation to various contextual and situational variables has been explored. Many similarities and differences have also been found between different age groups and levels of competition. For instance, in the World Basketball Championships after the 2010 rule change, 2-point field goals distinguished winners from losers in women's games but not in men's games, and the impact of successful 3-point field goals on women's games was minimal. The study aimed to identify and compare discriminant shooting-related statistics between winning and losing teams in the men’s and women’s FIBA-U19-Basketball-World-Cup-2023 tournaments. Method: Data from 112 observations (2 per game) of 16 teams (for each gender) in the FIBA-U19-Basketball-World-Cup-2023 were selected as samples. The data were obtained from the official FIBA website using Python. Specific information was extracted, organized into a DataFrame, and consisted of twelve variables, including shooting percentages, attempts, and scoring ratios for 3-pointers, mid-range shots, paint shots, and free throws. The derived variables were:
(1) Made% = scoring type successful attempts / scoring type total attempts
(2) Free-throw-pts% (free throw score ratio) = (free throw score / total score) × 100
(3) Mid-pts% (mid-range score ratio) = (mid-range score / total score) × 100
(4) Paint-pts% (paint score ratio) = (paint score / total score) × 100
(5) 3p-pts% (three-point score ratio) = (three-point score / total score) × 100
Independent t-tests were used to examine significant differences in shooting-related statistical parameters between winning and losing teams for both genders. Statistical significance was set at p < 0.05. All statistical analyses were completed with SPSS, Version 18. Results: The results showed that 3p-made%, mid-pts%, paint-made%, paint-pts%, mid-attempts, and paint-attempts were significantly different between winners and losers in men (t=-3.465, P<0.05; t=3.681, P<0.05; t=-5.884, P<0.05; t=-3.007, P<0.05; t=2.549, p<0.05; t=-3.921, P<0.05). For women, significant differences between winners and losers were found for 3p-made%, 3p-pts%, paint-made%, and paint-attempts (t=-6.429, P<0.05; t=-1.993, P<0.05; t=-1.993, P<0.05; t=-4.115, P<0.05; t=2.451, P<0.05). Discussion: The research aimed to compare shooting-related statistics between winners and losers in men's and women's teams at the FIBA-U19-Basketball-World-Cup-2023. Results indicated that men's winners excelled in 3p-made%, paint-made%, paint-pts%, paint-attempts, and mid-attempts, consistent with previous studies. This study found that losers in men’s teams had higher mid-pts% than winners, which was inconsistent with previous findings. It has been indicated that winners tend to prioritize statistically efficient shots while forcing the opponent to take mid-range shots. In women's games, significant differences in 3p-made%, 3p-pts%, paint-made%, and paint-attempts were observed, indicating that winners relied on riskier outside scoring strategies. Overall, winners exhibited higher accuracy in paint and 3P shooting than losers, but they also relied more on outside offensive strategies. Additionally, winners acquired a higher ratio of their points from 3P shots, which demonstrates their confidence in their skills and willingness to take risks at this competitive level.
Keywords: gender, losers, shoot-statistic, U19, winners
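The sketch below illustrates the type of analysis described in the Method section: deriving a score-ratio variable and comparing winners and losers with an independent-samples t-test (here with scipy rather than SPSS); the sample values are invented, not tournament data.

```python
# Sketch of the analysis described above: derive a score-ratio variable and
# compare winners vs. losers with an independent-samples t-test.
# The sample values below are invented placeholders, not tournament data.
import numpy as np
from scipy import stats

def pts_ratio(component_points, total_points):
    """E.g. 3p-pts% = (three-point score / total score) * 100."""
    return 100.0 * np.asarray(component_points) / np.asarray(total_points)

winners_3p_pts = pts_ratio([27, 33, 24, 30, 36], [78, 85, 74, 80, 92])
losers_3p_pts = pts_ratio([18, 21, 15, 24, 12], [70, 72, 65, 77, 60])

t_stat, p_value = stats.ttest_ind(winners_3p_pts, losers_3p_pts)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```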
Procedia PDF Downloads 100
116 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology
Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey
Abstract:
In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This calls for software capable of quickly processing and reliably visualizing diffusion data, equipped with tools for their analysis in terms of different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The domain-specific part has been moved to separate class libraries and can be used on various platforms. The user interface is Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize and set the properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI), which allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues that are not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is geometrically interpreted using an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for the segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating the average diffusion coefficient and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (Median Dice Score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one using the Hough transform. The proposed algorithms test candidate curves in each voxel, assigning to each one a score computed from the diffusion data, and then select the curves with the highest scores as the potential anatomical connections. White matter fibers were visualized using the Hough transform tractography algorithm. In the context of functional radiosurgery, it is possible to reduce the irradiation volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. The «MRDiffusionImaging» software will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We develop software with integrated, intuitive support for processing, analysis, and inclusion in the process of radiotherapy planning and the evaluation of its results.
Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography
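As a brief illustration of the quantitative maps mentioned above, the following sketch computes mean diffusivity and fractional anisotropy from the eigenvalues of a single voxel's diffusion tensor; the example tensor is invented and the code is not part of the MRDiffusionImaging package.

```python
# Sketch of the quantitative measures mentioned above: mean diffusivity (MD) and
# fractional anisotropy (FA) from a voxel's diffusion tensor. Example values are invented.
import numpy as np

def md_fa(tensor):
    """tensor: symmetric 3x3 diffusion tensor for one voxel (units mm^2/s)."""
    eigvals = np.linalg.eigvalsh(tensor)
    md = eigvals.mean()
    # FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||
    fa = np.sqrt(1.5) * np.linalg.norm(eigvals - md) / np.linalg.norm(eigvals)
    return md, fa

D = np.array([[1.7e-3, 0.0,    0.0],
              [0.0,    0.3e-3, 0.0],
              [0.0,    0.0,    0.3e-3]])  # strongly anisotropic (white-matter-like) voxel
md, fa = md_fa(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")
```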
Procedia PDF Downloads 85
115 Modeling Discrimination against Gay People: Predictors of Homophobic Behavior against Gay Men among High School Students in Switzerland
Authors: Patrick Weber, Daniel Gredig
Abstract:
Background and Purpose: Research has well documented the impact of discrimination and micro-aggressions on the wellbeing of gay men and, especially, adolescents. For the prevention of homophobic behavior against gay adolescents, however, the focus has to shift to those who discriminate: for the design and tailoring of prevention and intervention, it is important to understand the factors responsible for homophobic behavior such as, for example, verbal abuse. Against this background, the present study aimed to assess homophobic – in terms of verbally abusive – behavior against gay people among high school students. Furthermore, it aimed to establish the predictors of the reported behavior by testing an explanatory model. This model posits that homophobic behavior is determined by negative attitudes and knowledge. These variables are supposed to be predicted by the acceptance of traditional gender roles, religiosity, orientation toward social dominance, contact with gay men, and by the perceived expectations of parents, friends and teachers. These social-cognitive variables in turn are assumed to be determined by students’ gender, age, immigration background, formal school level, and the discussion of gay issues in class. Method: From August to October 2016, we visited 58 high school classes in 22 public schools in a county in Switzerland, and asked the 8th and 9th year students on three formal school levels to participate in a survey about gender and gay issues. For data collection, we used an anonymous self-administered questionnaire filled in during class. Data were analyzed using descriptive statistics and structural equation modelling (Generalized Least Squares estimation). The sample included 897 students, 334 in the 8th and 563 in the 9th year, aged 12–17, 51.2% female, 48.8% male, 50.3% with an immigration background. Results: A proportion of 85.4% of participants reported having made homophobic statements in the 12 months before the survey, 4.7% often or very often. Analysis showed that respondents’ homophobic behavior was predicted directly by negative attitudes (β=0.20), as well as by the acceptance of traditional gender roles (β=0.06), religiosity (β=–0.07), contact with gay people (β=0.10), expectations of parents (β=–0.14) and friends (β=–0.19), gender (β=–0.22) and having a South-East-European or Western- and Middle-Asian immigration background (β=0.09). These variables were predicted, in turn, by gender, age, immigration background, formal school level, and the discussion of gay issues in class (GFI=0.995, AGFI=0.979, SRMR=0.0169, CMIN/df=1.199, p>0.213, adj. R²=0.384). Conclusion: The findings evidence a high prevalence of homophobic behavior in the responding high school students. The tested explanatory model explained 38.4% of the variance in the assessed homophobic behavior. However, the data did not fully support the model. Knowledge did not turn out to be a predictor of behavior. Except for the perceived expectations of teachers and orientation toward social dominance, the social-cognitive variables were not fully mediated by attitudes. Equally, gender and immigration background predicted homophobic behavior directly. These findings demonstrate the importance of prevention and also provide leverage points for interventions against anti-gay bias in adolescents – also in social work settings such as, for example, school social work, open youth work or foster care.
Keywords: discrimination, high school students, gay men, predictors, Switzerland
Procedia PDF Downloads 330
114 Improving the Uptake of Community-Based Multidrug-Resistant Tuberculosis Treatment Model in Nigeria
Authors: A. Abubakar, A. Parsa, S. Walker
Abstract:
Despite advances made in the diagnosis and management of drug-sensitive tuberculosis (TB) over the past decades, treatment of multidrug-resistant tuberculosis (MDR-TB) remains challenging and complex, particularly in high-burden countries including Nigeria. Treatment of MDR-TB is cost-prohibitive, with a success rate generally lower than that for drug-sensitive TB, and if care is not taken, it may become the dominant form of TB in the future, with many treatment uncertainties and substantial morbidity and mortality. Addressing these challenges requires collaborative efforts through sustained research to evaluate the current treatment guidelines, particularly in high-burden countries, and to prevent the progression of resistance. To the best of our knowledge, there has been no research exploring the acceptability, effectiveness, and cost-effectiveness of a community-based MDR-TB treatment model in Nigeria, which is among the high-burden countries. A previous similar qualitative study looked at the home-based management of MDR-TB in rural Uganda. This research aimed to explore patients’ views and acceptability of a community-based MDR-TB treatment model and to evaluate and compare the effectiveness and cost-effectiveness of a community-based versus hospital-based MDR-TB treatment model of care from the Nigerian perspective. Knowledge of patients’ views and acceptability of the community-based MDR-TB treatment approach would help in designing future treatment recommendations and in health policymaking. Accordingly, knowledge of effectiveness and cost-effectiveness is part of the evidence needed to inform a decision about whether and how to scale up MDR-TB treatment, particularly in a resource-poor setting with limited knowledge of TB. Mixed methods using qualitative and quantitative approaches were employed. Qualitative data were obtained using in-depth semi-structured interviews with 21 MDR-TB patients in Nigeria to explore their views and acceptability of the community-based MDR-TB treatment model. Qualitative data collection followed an iterative process which allowed adaptation of the topic guides until data saturation. In-depth interviews were analyzed using thematic analysis. Quantitative data on treatment outcomes were obtained from the medical records of MDR-TB patients to determine effectiveness; direct and indirect costs were obtained from the patients using a validated questionnaire, and health system costs from the donor agencies, to determine the cost-effectiveness difference between the community- and hospital-based models from the Nigerian perspective. Findings: Some themes have emerged from the patients’ perspectives indicating a preference for and high acceptability of the community-based MDR-TB treatment model, and mixed feelings about the risk of MDR-TB transmission within the community due to poor infection control. The modeling of the quantitative data is still in progress. Community-based MDR-TB care was seen as the acceptable and most preferred model of care by the majority of the participants because of its convenience, which in turn enhanced recovery, enabled social interaction, offered more psychosocial benefits and averted productivity loss. However, there is a need to strengthen this model of care through enhanced strategies that ensure guideline compliance and infection control in order to prevent the progression of resistance and curtail community transmission.
Keywords: acceptability, cost-effectiveness, multidrug-resistant TB treatment, community and hospital approach
Procedia PDF Downloads 123
113 Audience Members' Perspective-Taking Predicts Accurate Identification of Musically Expressed Emotion in a Live Improvised Jazz Performance
Authors: Omer Leshem, Michael F. Schober
Abstract:
This paper introduces a new method for assessing how audience members and performers feel and think during live concerts, and how audience members' recognized and felt emotions are related. Two hypotheses were tested in a live concert setting: (1) that audience members’ cognitive perspective-taking ability predicts their accuracy in identifying an emotion that a jazz improviser intended to express during a performance, and (2) that audience members' affective empathy predicts their likelihood of feeling the same emotions as the performer. The aim was to stage a concert with audience members who regularly attend live jazz performances, and to measure their cognitive and affective reactions during the performance as non-intrusively as possible. Pianist and Grammy nominee Andy Milne agreed, without knowing the details of the method or hypotheses, to perform a full-length solo improvised concert that would include an ‘unusual’ piece. Jazz fans were recruited through typical advertising for New York City jazz performances. The event was held at the New School’s Glass Box Theater, the home of the leading NYC jazz venue ‘The Stone.’ Audience members were charged typical NYC jazz club admission prices; advertisements informed them that anyone who chose to participate in the study would be reimbursed their ticket price after the concert. The concert, held in April 2018, had 30 attendees, 23 of whom participated in the study. Twenty-two minutes into the concert, the performer was handed a paper note with the instruction: ‘Perform a 3-5-minute improvised piece with the intention of conveying sadness.’ (Sadness was chosen based on previous music cognition lab studies, where solo listeners were less likely to accurately select sadness as the musically expressed emotion from a list of basic emotions, and more likely to misinterpret sadness as tenderness.) Then, audience members and the performer were invited to respond to a questionnaire from a first envelope under their seat. Participants used their own words to describe the emotion the performer had intended to express, and then selected the intended emotion from a list. They also reported the emotions they had felt while listening, using Izard’s differential emotions scale. The concert then continued as usual. At the end, participants answered demographic questions and Davis’ interpersonal reactivity index (IRI), a 28-item scale designed to assess both cognitive and affective empathy. Hypothesis 1 was supported: audience members with greater cognitive empathy were more likely to accurately identify sadness as the expressed emotion. Moreover, audience members who accurately selected ‘sadness’ reported feeling marginally sadder than people who did not select sadness. Hypothesis 2 was not supported: audience members with greater affective empathy were not more likely to feel the same emotions as the performer. If anything, members with lower cognitive perspective-taking ability had marginally greater emotional overlap with the performer, which makes sense given that these participants were less likely to identify the music as sad, which corresponded with the performer’s actual feelings. Results replicate findings from solo lab studies in a concert setting and demonstrate the viability of exploring empathy and collective cognition in improvised live performance.
Keywords: audience, cognition, collective cognition, emotion, empathy, expressed emotion, felt emotion, improvisation, live performance, recognized emotion
Procedia PDF Downloads 133
112 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS
Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert
Abstract:
The integration of renewable energy introduces new challenges to the transmission grid, as the power generation is located far from load centers. The associated necessary long-range power transmission increases the demand for high voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgear (GIS) is considered a key technology, due to the combination of DC technology and the long operational experience with AC-GIS. To ensure the long-term reliability of such systems, insulation defects must be detected at an early stage. Operational experience with AC systems has provided evidence that most failures which can be attributed to breakdowns of the insulation system can be detected and identified via partial discharge (PD) measurements beforehand. In AC systems, the identification of defects relies on the phase-resolved partial discharge pattern (PRPD). Since there is no phase information in DC systems, this method cannot be transferred to DC PD diagnostics. Furthermore, the behaviour of, e.g., free-moving particles differs significantly at DC: under the influence of a constant direct electric field, charge carriers can accumulate on particles’ surfaces. As a result, a particle can lift off, oscillate between the inner conductor and the enclosure, or rapidly bounce at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emissions at the particle’s tip, which enables optical detection. This contribution investigates the PD characteristics of free-moving metallic particles in a commercially available 300 kV SF6-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristics are evaluated experimentally. Several particle geometries, such as cylinder, lamella, spiral and sphere, with different lengths, diameters and weights, are examined. The applied DC voltage is increased stepwise from the inception voltage up to UDC = ± 400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by a UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with a wavelength in the range of λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle’s motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low-frequency ion currents: the shunt measuring system’s sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers which are generated at the particle’s tip migrate through the gas gap to the electrode and can be recorded by the current shunt. All recorded PD signals are analyzed in order to identify characteristic properties of the different particles. This includes, e.g., repetition rates and amplitudes of successive pulses, characteristic frequency ranges and the detected signal energy of single PD pulses. Concluding, an advanced understanding of the underlying physical phenomena of particle motion in a direct electric field can be derived.
Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF
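As a simple illustration of the pulse properties listed above (amplitudes, single-pulse energies, repetition rate), the following sketch extracts them from a sampled trace using a fixed threshold; the threshold-based detection and the assumption that the trace starts and ends below the threshold are simplifications, not the authors' signal processing chain.

```python
# Sketch of simple pulse-feature extraction: peak amplitudes, single-pulse
# energies and repetition rate from a sampled signal. Threshold and synthetic
# trace below are illustrative only.
import numpy as np

def pulse_features(signal, fs, threshold):
    """signal: sampled trace that starts and ends below threshold; fs: sampling rate in Hz."""
    above = signal > threshold
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1    # rising edges mark pulse onsets
    ends = np.flatnonzero(edges == -1) + 1     # falling edges mark pulse ends
    peaks = np.array([signal[s:e].max() for s, e in zip(starts, ends)])
    energies = np.array([np.sum(signal[s:e] ** 2) / fs for s, e in zip(starts, ends)])
    repetition_rate = len(starts) / (len(signal) / fs)   # pulses per second
    return peaks, energies, repetition_rate

# Synthetic example: 1 ms of noise with three injected pulses, sampled at 10 MHz
fs = 10e6
trace = 0.01 * np.random.randn(10000)
for pos in (1500, 4200, 8700):
    trace[pos:pos + 20] += 1.0
peaks, energies, rate = pulse_features(trace, fs, threshold=0.5)
print(len(peaks), rate)
```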
Procedia PDF Downloads 163
111 Numerical Modeling of Timber Structures under Varying Humidity Conditions
Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan
Abstract:
Timber structures may be exposed to various environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying the loads. The wood material response to this load case is seen as increasing deformation of the timber structure. Relative humidity variations cause moisture changes in timber and consequently shrinkage and swelling of the material. Moisture changes and loads acting together result in mechano-sorptive creep, while sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain already at low stress levels. Therefore, analyzing mechano-sorptive creep and its influence on the long-term behavior of timber structures is of high importance. Relatively many one-dimensional rheological models for the rheological behavior of wood can be found in the literature, while the number of models coupling the creep response in each material direction is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. The variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapor diffusion in wood, which are coupled through sorption hysteresis. Sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the unique, non-uniform moisture content field within the timber member over time. The calculated moisture content in timber members is used as an input to the mechanical analysis. In the mechanical analysis, the total strain is assumed to be a sum of the elastic strain, viscoelastic strain, mechano-sorptive strain, and strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot type of model, which has proved to be suitable for describing the creep of wood. The mechano-sorptive strain depends on the change of moisture content. The model includes mechano-sorptive material parameters that have to be calibrated to the experimental results. The calibration is made to experiments carried out on wooden blocks subjected to uniaxial compressive loading in the tangential direction and varying humidity conditions. The moisture and the mechanical models are implemented in a finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model enables observing the real temporal and spatial distribution of the moisture-induced strains and stresses in timber members. Since the model’s suitability for predicting mechano-sorptive strains is shown and the required material parameters are obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections, subjected to constant load and varying humidity is possible.
Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber
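To illustrate the strain decomposition stated above, the following one-dimensional sketch sums elastic, viscoelastic (single Kelvin element), mechano-sorptive (driven by the magnitude of the moisture-content change), and swelling/shrinkage contributions; the constitutive form and all material parameters are assumed for illustration and are not the calibrated two-dimensional model of this study.

```python
# One-dimensional sketch of the strain decomposition described above:
# total = elastic + viscoelastic + mechano-sorptive + shrinkage/swelling.
# Constitutive form and parameters are assumed placeholders, not calibrated values.
import numpy as np

def strain_history(stress, moisture, dt, E=450.0, E_v=900.0, tau=200.0,
                   m_ms=0.15, alpha=0.2):
    """stress [MPa] and moisture content [-] sampled every dt [h]; returns total strain [-]."""
    eps_ve = 0.0   # viscoelastic strain (Kelvin element)
    eps_ms = 0.0   # mechano-sorptive strain
    u_ref = moisture[0]
    total = []
    for k in range(len(stress)):
        eps_el = stress[k] / E                               # elastic strain
        eps_ve += dt / tau * (stress[k] / E_v - eps_ve)      # Kelvin-element creep update
        du = moisture[k] - moisture[k - 1] if k > 0 else 0.0
        eps_ms += m_ms * stress[k] / E * abs(du)             # driven by |moisture change|
        eps_u = alpha * (moisture[k] - u_ref)                # swelling / shrinkage
        total.append(eps_el + eps_ve + eps_ms + eps_u)
    return np.array(total)

# Example: constant 1 MPa compression with moisture cycling between 12% and 20%
t = np.arange(0, 2000, 1.0)                      # hours
sigma = np.full_like(t, 1.0)
u = 0.16 + 0.04 * np.sin(2 * np.pi * t / 500.0)  # moisture content
print(strain_history(sigma, u, dt=1.0)[-1])
```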
Procedia PDF Downloads 246
110 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing
Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl
Abstract:
This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz resonators for full-scale 3D-printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resource-efficient and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additively prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D-concrete-printed physical twin models, which were tested in a 0.25 m x 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization
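For orientation, the classical single-resonator estimate underlying the bottleneck and cavity parameters being optimized is f₀ = (c/2π)·√(A/(V·L_eff)); the sketch below evaluates it for assumed geometry values with a simple end correction, and is not the coupled COMSOL model used in the study.

```python
# Sketch of the classical single Helmholtz-resonator frequency estimate,
# f0 = (c / (2*pi)) * sqrt(A / (V * L_eff)). Geometry values are illustrative.
import math

def helmholtz_f0(neck_radius, neck_length, cavity_volume, c=343.0):
    """neck_radius, neck_length in m, cavity_volume in m^3; returns f0 in Hz."""
    A = math.pi * neck_radius ** 2                 # neck cross-sectional area
    L_eff = neck_length + 1.7 * neck_radius        # simple end correction for both neck ends
    return c / (2.0 * math.pi) * math.sqrt(A / (cavity_volume * L_eff))

print(f"{helmholtz_f0(neck_radius=0.01, neck_length=0.03, cavity_volume=0.001):.0f} Hz")
```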
Procedia PDF Downloads 159
109 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To overcome the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and analyzes the captured images to classify and detect electronic components. The automated electronic component error classification and detection system detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be trained on the dataset for object recognition and classification. The convolution layers extract image features, which are then classified using a Support Vector Machine (SVM). With adequately labeled training data, the model will predict and categorize component placements and assess whether students place components correctly. As a result, the data acquired through the HoloLens include images of students assembling electronic components. The system constantly checks whether students appropriately position components on the breadboard and connect the components so that the circuit functions. When students misplace any components, the HoloLens predicts the error before the user places the components in the incorrect position and encourages students to correct their mistakes. This hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time. The study also determines the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
Keywords: augmented reality, machine learning, object recognition, virtual laboratories
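A minimal sketch of the hybrid CNN plus SVM idea described above: a small convolutional network produces feature vectors from component images, and an SVM classifies them; the network architecture, image size, class count, and the random placeholder data are assumptions, not the study's actual pipeline.

```python
# Minimal sketch of the hybrid CNN + SVM idea: a small convolutional network
# extracts feature vectors from component images, and an SVM classifies them
# (e.g. resistor / capacitor / transistor). Data below are random placeholders.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> 32-dim feature vector per image

    def forward(self, x):
        return self.net(x)

extractor = FeatureExtractor().eval()

# Placeholder batch: 20 RGB images, 64x64 pixels, 3 component classes
images = torch.rand(20, 3, 64, 64)
labels = torch.randint(0, 3, (20,)).numpy()

with torch.no_grad():
    features = extractor(images).numpy()

svm = SVC(kernel="rbf").fit(features, labels)   # SVM classifies the CNN features
print(svm.predict(features[:5]))
```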
Procedia PDF Downloads 137
108 Defective Autophagy Disturbs Neural Migration and Network Activity in hiPSC-Derived Cockayne Syndrome B Disease Models
Authors: Julia Kapr, Andrea Rossi, Haribaskar Ramachandran, Marius Pollet, Ilka Egger, Selina Dangeleit, Katharina Koch, Jean Krutmann, Ellen Fritsche
Abstract:
It is widely acknowledged that animal models do not always represent human disease. Human brain development in particular is difficult to model in animals due to a variety of structural and functional species-specificities. This causes significant discrepancies between predicted and apparent drug efficacies in clinical trials and their subsequent failure. Emerging alternatives based on 3D in vitro approaches, such as human brain spheres or organoids, may in the future reduce and ultimately replace animal models. Here, we present a human induced pluripotent stem cell (hiPSC)-based 3D neural in vitro disease model for Cockayne Syndrome B (CSB). CSB is a rare hereditary disease accompanied by severe neurologic defects, such as microcephaly, ataxia and intellectual disability, with currently no treatment options. Therefore, the aim of this study is to investigate the molecular and cellular defects found in neural hiPSC-derived CSB models. Understanding the underlying pathology of CSB enables the development of treatment options. The two CSB models used in this study comprise a patient-derived hiPSC line and its isogenic control, as well as a CSB-deficient cell line based on a healthy hiPSC line (IMR90-4) background, thereby excluding genetic-background-related effects. Neurally induced and differentiated brain sphere cultures were characterized via RNA sequencing, western blot (WB), immunocytochemistry (ICC) and multielectrode arrays (MEAs). CSB deficiency leads to altered gene expression of markers for autophagy, focal adhesion and neural network formation. Cell migration was significantly reduced, and electrical activity was significantly increased in the disease cell lines. These data hint at the cellular pathologies possibly underlying CSB. By induction of autophagy, the migration phenotype could be partially rescued, suggesting a crucial role of disturbed autophagy in the defective neural migration of the disease lines. Altered autophagy may also lead to inefficient mitophagy. Accordingly, disease cell lines were shown to have lower mitochondrial basal activity and a higher susceptibility to mitochondrial stress induced by rotenone. Since mitochondria play an important role in neurotransmitter cycling, we suggest that defective mitochondria may lead to altered electrical activity in the disease cell lines. Failure to clear the defective mitochondria by mitophagy, and thus missing initiation cues for new mitochondrial production, could potentiate this problem. With our data, we aim at establishing a disease adverse outcome pathway (AOP), thereby adding to the in-depth understanding of this multi-faceted disorder and subsequently contributing to alternative drug development.
Keywords: autophagy, disease modeling, in vitro, pluripotent stem cells
Procedia PDF Downloads 120
107 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, Precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is drawn from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data up to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed by the high accuracy of the diagnosis. Formulated as a multiple-condition diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 70
106 Modeling the Present Economic and Social Alienation of the Working Class in South Africa in the Musical Production ‘From Marikana to Mahagonny’ at Durban University of Technology (DUT)
Authors: Pamela Tancsik
Abstract:
The stage production in 2018, titled ‘From Marikana to Mahagonny’, began with a prologue in the form of the award-winning documentary ‘Miners Shot Down’ by Rehad Desai, followed by Brecht/Weill’s song play or scenic cantata ‘Mahagonny’, premièred in Baden-Baden in 1927. The central directorial concept of the DUT musical production ‘From Marikana to Mahagonny’ was to show a connection between the socio-political alienation of mineworkers in present-day South Africa and Brecht’s alienation effect in his scenic cantata ‘Mahagonny’. Marikana is a mining town about 50 km west of South Africa’s capital Pretoria. Mahagonny is a fantasy name for a utopian mining town in the United States. The characters, setting, and lyrics refer to America, with songs like ‘Benares’ and ‘Moon of Alabama’ and the use of typical American inventions such as dollars, saloons, and the telephone. The six singing characters in ‘Mahagonny’ all have typical American names: Charlie, Billy, Bobby, and Jimmy, and the two girls they meet later are called Jessie and Bessie. The four men set off to seek Mahagonny. For them, it is the ultimate dream destination promising the fulfilment of all their desires, such as girls, alcohol, and dollars – in short, materialistic goals. Instead of finding a paradise, they experience how money, the practice of exploitative capitalism, and the lack of any morals and humanity destroy their lives. In the end, Mahagonny gets demolished by a hurricane, an event which happened in 1926 in the United States. ‘God’ in person arrives disillusioned and bitter, complaining about violent and immoral mankind. In the end, he sends them all to hell. Charlie, Billy, Bobby, and Jimmy reply that this punishment does not mean anything to them because they have already been in hell for a long time – hell on earth is a reality, so the threat of hell after life is meaningless. Human life was also taken during the stand-off between striking mineworkers and the South African police on 16 August 2012. Miners from the Lonmin Platinum Mine went on an illegal strike, equipped with bush knives and spears. They were striking because their living conditions had never improved; they still lived in muddy shacks with no running water or electricity. Wages were as low as R4,000 (South African Rands), equivalent to just over 200 Euro per month. By August 2012, the negotiations between Lonmin management and the mineworkers’ unions, asking for a minimum wage of R12,500 per month, had failed. Police were sent in by the Government, and when the miners did not withdraw, the police shot at them. Thirty-four were killed, some by bullets in their backs while running away and trying to hide behind rocks. In the musical play ‘From Marikana to Mahagonny’, audiences in South Africa are confronted with a documentary about Marikana, followed by Brecht/Weill’s scenic cantata, highlighting the tragic parallels between the Mahagonny story and characters from 1927 America and the Lonmin workers today in South Africa, showing that in 95 years, capitalism has not changed. Keywords: alienation, Brecht/Weill, Mahagonny, Marikana/South Africa, musical theatre
Procedia PDF Downloads 98
105 Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction
Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman
Abstract:
Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation
Procedia PDF Downloads 92
104 An Autonomous Passive Acoustic System for Detection, Tracking and Classification of Motorboats in Portofino Sea
Authors: A. Casale, J. Alessi, C. N. Bianchi, G. Bozzini, M. Brunoldi, V. Cappanera, P. Corvisiero, G. Fanciulli, D. Grosso, N. Magnoli, A. Mandich, C. Melchiorre, C. Morri, P. Povero, N. Stasi, M. Taiuti, G. Viano, M. Wurtz
Abstract:
This work describes a real-time algorithm for detecting, tracking and classifying single motorboats, developed using the acoustic data recorded by a hydrophone array within the framework of the EU LIFE+ project ARION (LIFE09NAT/IT/000190). The project aims to improve the conservation status of bottlenose dolphins through real-time simultaneous monitoring of their population and surface ship traffic. A Passive Acoustic Monitoring (PAM) system is installed on two autonomous permanent marine buoys, located close to the boundaries of the Marine Protected Area (MPA) of Portofino (Ligurian Sea, Italy). Detecting surface ships is also a necessity in many other sensitive areas, such as wind farms, oil platforms, and harbours. A PAM system could be an effective alternative to the usual monitoring systems, such as radar or active sonar, for localizing unauthorized ship presence or illegal activities, with the advantage of not revealing its own presence. Each ARION buoy consists of a particular type of structure, named meda elastica (elastic beacon), composed of a main pole about 30 meters in length, emerging 7 meters above the surface, anchored to a 30-ton mooring at 90 m depth by an anti-twist steel wire. Each buoy is equipped with a floating element and a tetrahedral hydrophone array, whose raw data are sent via a Wi-Fi bridge to a ground station where real-time analysis is performed. The bottlenose dolphin detection algorithm and the ship monitoring algorithm operate in parallel and in real time. Three modules were developed and commissioned for ship monitoring. The first is the detection algorithm, based on Time Difference Of Arrival (TDOA) measurements, i.e., the evaluation of the angular direction of the target with respect to each buoy and triangulation to obtain the target position. The second is the tracking algorithm, based on a Kalman filter, i.e., the estimation of the real course and speed of the target through a predictor filter. Finally, the classification algorithm is based on the DEMON method, i.e., the extraction of the acoustic signature of single vessels. The following results were obtained: the detection algorithm succeeded in evaluating the bearing angle with respect to each buoy and the position of the target, with an uncertainty of 2 degrees and a maximum range of 2.5 km. The tracking algorithm succeeded in reconstructing the real vessel courses and estimating the speed with an accuracy of 20% with respect to the Automatic Identification System (AIS) signals. The classification algorithm succeeded in isolating the acoustic signature of single vessels, demonstrating its temporal stability and the consistency of both buoys' results. As a reference, the results were compared with the Hilbert transform of single-channel signals. The algorithm for tracking multiple targets is ready to be developed, thanks to the modularity of the single-ship algorithm: the classification module will enumerate and identify all targets present in the study area; for each of them, the detection module and the tracking module will be applied to monitor their course. Keywords: acoustic-noise, bottlenose-dolphin, hydrophone, motorboat
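For illustration, the sketch below shows the TDOA principle behind the detection module: the arrival-time difference between two hydrophone channels is estimated from the peak of their cross-correlation and converted to a bearing angle. It is a minimal sketch only; the sampling rate, hydrophone spacing, and synthetic pulse are assumed values and are not parameters of the ARION system.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the arrival-time difference t_a - t_b (seconds) between two
    hydrophone channels from the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # lag in samples
    return lag / fs

def bearing_from_tdoa(tdoa, spacing, c=1500.0):
    """Convert a TDOA into a bearing angle (degrees) for a two-element array;
    c is a nominal speed of sound in seawater (m/s). The sign of the angle
    indicates which hydrophone the wavefront reached first."""
    arg = np.clip(c * tdoa / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(arg))

# Synthetic example: a short tonal pulse arriving 0.2 ms later on channel B
fs = 96_000                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
pulse = np.exp(-((t - 0.05) ** 2) / 1e-6) * np.sin(2 * np.pi * 2000 * t)
sig_a = pulse
sig_b = np.roll(pulse, int(0.0002 * fs))

tdoa = estimate_tdoa(sig_a, sig_b, fs)
print(f"bearing ≈ {bearing_from_tdoa(tdoa, spacing=0.5):.1f} deg")  # 0.5 m assumed spacing
```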
Procedia PDF Downloads 174
103 Establishing Feedback Partnerships in Higher Education: A Discussion of Conceptual Framework and Implementation Strategies
Authors: Jessica To
Abstract:
Feedback is one of the powerful levers for enhancing students’ performance. However, some students are under-engaged with feedback because they lack responsibility for feedback uptake. To resolve this conundrum, recent literature proposes feedback partnerships in which students and teachers share the power and responsibilities to co-construct feedback. During feedback co-construction, students express feedback needs to teachers, and teachers respond to individuals’ needs in return. Though this approach can increase students’ feedback ownership, its application is lagging as the field lacks conceptual clarity and implementation guide. This presentation aims to discuss the conceptual framework of feedback partnerships and feedback co-construction strategies. It identifies the components of feedback partnerships and strategies which could facilitate feedback co-construction. A systematic literature review was conducted to answer the questions. The literature search was performed using ERIC, PsycINFO, and Google Scholar with the keywords “assessment partnership”, “student as partner,” and “feedback engagement”. No time limit was set for the search. The inclusion criteria encompassed (i) student-teacher partnerships in feedback, (ii) feedback engagement in higher education, (iii) peer-reviewed publications, and (iv) English as the language of publication. Those without addressing conceptual understanding and implementation strategies were excluded. Finally, 65 publications were identified and analysed using thematic analysis. For the procedure, the texts relating to the questions were first extracted. Then, codes were assigned to summarise the ideas of the texts. Upon subsuming similar codes into themes, four themes emerged: students’ responsibilities, teachers’ responsibilities, conditions for partnerships development, and strategies. Their interrelationships were examined iteratively for framework development. Establishing feedback partnerships required different responsibilities of students and teachers during feedback co-construction. Students needed to self-evaluate performance against task criteria, identify inadequacies and communicate their needs to teachers. During feedback exchanges, they interpreted teachers’ comments, generated self-feedback through reflection, and co-developed improvement plans with teachers. Teachers had to increase students’ understanding of criteria and evaluation skills and create opportunities for students’ expression of feedback needs. In feedback dialogue, teachers responded to students’ needs and advised on the improvement plans. Feedback partnerships would be best grounded in an environment with trust and psychological safety. Four strategies could facilitate feedback co-construction. First, students’ understanding of task criteria could be increased by rubrics explanation and exemplar analysis. Second, students could sharpen evaluation skills if they participated in peer review and received teacher feedback on the quality of peer feedback. Third, provision of self-evaluation checklists and prompts and teacher modeling of self-assessment process could aid students in articulating feedback needs. Fourth, the trust could be fostered when teachers explained the benefits of feedback co-construction, showed empathy, and provided personalised comments in dialogue. 
Some strategies were applied in interactive cover sheets in which students performed self-evaluation and made feedback requests on a cover sheet during assignment submission, followed by teachers’ response to individuals’ requests. The significance of this presentation lies in unpacking the conceptual framework of feedback partnerships and outlining feedback co-construction strategies. With a solid foundation in theory and practice, researchers and teachers could better enhance students’ engagement with feedback.Keywords: conceptual framework, feedback co-construction, feedback partnerships, implementation strategies
Procedia PDF Downloads 91
102 Shifting Paradigms for Micro, Small, and Medium Enterprises in the Global Construction Market: The Crucial Roles of Technology and Sustainability
Authors: Sohrab Donyavi
Abstract:
The global construction market is experiencing significant shifts, particularly for micro, small, and medium enterprises (MSMEs), driven by the dual imperatives of technological advancement and sustainability. MSMEs play a crucial role in the construction industry, often being the backbone of economic development and fostering entrepreneurial skills. However, their dominance has also led to industry fragmentation and challenges such as technological lag and declining profit margins, which threaten their global competitiveness. This paper explores the integration of technology and sustainability in reshaping the paradigms for MSMEs in the construction sector. The adoption of advanced technologies, such as building information modeling (BIM) and AI, are pivotal for promoting sustainable construction practices. These tools enable MSMEs to design and construct environmentally responsible buildings, thereby contributing to the industry's sustainability goals. The research highlights that achieving sustainability in construction involves significant efforts in conservation, recycling, and the development of new materials and technologies. This approach aligns with the broader goal of integrating economic, environmental, and social aims into firm objectives to create long-term value while ensuring the protection of natural resources for future generations. Critical factors for implementing sustainable oriented innovation (SOI) practices in MSMEs include top management support, government initiatives, and financial resources. These factors are essential for fostering an environment conducive to innovation and sustainability. Furthermore, the empowerment of MSMEs through improved governance, market-oriented programs, sustainable productivity growth, and access to financing is vital. In developing regions like Indonesia, these strategies are crucial for enabling MSMEs to thrive in the face of globalization. The tendency of large firms to grow larger with the help of technology and globalization has led to the emergence of a high-technology oligopoly, posing a significant challenge to traditional construction practices. This shift necessitates that MSMEs adapt by leveraging technology and embracing sustainable practices to remain competitive. The research underscores the importance of integrating technology and sustainability not only as a competitive strategy but also as a means to contribute to the global effort of environmental conservation and sustainable development. This paper concludes that the successful integration of technology and sustainability in MSMEs requires a multifaceted approach. It involves the adoption of advanced technological tools, strong support from top management, proactive government policies, and access to financial resources. By addressing these factors, MSMEs can overcome the challenges of industry fragmentation, technological lag, and declining profit margins. Ultimately, this integration will enable MSMEs to play a pivotal role in driving the construction industry towards a more sustainable and technologically advanced future. The findings and recommendations are based on a comprehensive case study utilizing semi-structured interviews, observations, questionnaires, and document reviews.Keywords: MSMEs, construction, technology, sustainability, innovation
Procedia PDF Downloads 41
101 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities
Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto
Abstract:
The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework that helps identify the factors impacting the market share of a healthcare provider facility or a hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework which aids strategists in formulating key decisions to improve the facility’s market share, which in turn helps improve the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key facilities in Washington State over about 3 years of history are considered. In the current analysis, market share is defined as the ratio of the facility’s encounters to the total encounters among the group of potential competitor facilities. The current study proposes a two-pronged approach of competitor identification and regression to evaluate and predict market share, respectively. A model-agnostic technique, SHAP, is leveraged to quantify the relative importance of the features impacting market share. Typical techniques in the literature quantify the degree of competitiveness among facilities using an empirical method that calculates a competitive factor to interpret the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying patient exchanges, provider referrals, and sister facilities). Multiple groups of competitors among facilities are identified. Leveraging the competitors identified, a Random Forest Regression model is developed and fine-tuned to predict the market share. To identify key drivers of market share at an overall level, the permutation feature importance of the attributes was calculated. For relative quantification of features at the facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, was incorporated. This helped to identify and rank the attributes at each facility that impact its market share. This approach proposes an amalgamation of two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias. With these, we helped to drive strategic business decisions. Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP
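A minimal sketch of the regression-plus-explainability step described above is shown below, using scikit-learn and the shap package: a Random Forest regressor is fitted, permutation importance ranks drivers overall, and SHAP values quantify drivers per facility. The feature names and synthetic data are placeholders, not the study's actual attributes or pipeline.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical facility-level features; the real study derives its attributes
# from demographics, partial correlations, and business rules via the DAG step.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "bed_count": rng.integers(50, 500, 300),
    "avg_wait_days": rng.uniform(1, 30, 300),
    "competitor_density": rng.uniform(0, 1, 300),
    "payer_mix_index": rng.uniform(0, 1, 300),
})
y = 0.4 * X["bed_count"] / 500 - 0.3 * X["competitor_density"] + rng.normal(0, 0.05, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Overall drivers: permutation feature importance on held-out data
perm = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
print(dict(zip(X.columns, perm.importances_mean.round(3))))

# Facility-level drivers: SHAP values for each individual prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print(shap_values[0])  # contribution of each feature to the first facility's prediction
```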
Procedia PDF Downloads 91
100 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to the decarbonisation of their electricity supply, along with promoting the large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for application to realistically sized power system models. To meet these challenges, there is an increasing need for developing efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of a mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the relaxed version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique, and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models. Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
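The classifier idea described above can be sketched as follows: features extracted from the linearly relaxed subproblem solutions are used to predict the values of the binary investment variables. The feature construction, the synthetic labels, and the gradient-boosting choice below are illustrative assumptions, not the authors' exact design.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical training set: each row describes one candidate transmission
# line in one solved (relaxed) subproblem instance.
rng = np.random.default_rng(1)
n = 5000
relaxed_value = rng.uniform(0, 1, n)       # value of the relaxed 0-1 investment variable
reduced_cost = rng.normal(0, 1, n)         # reduced cost of the candidate line
loading = rng.uniform(0, 1.5, n)           # corridor flow relative to its thermal limit
# Synthetic "true" MILP decision used only to make the example runnable
label = ((relaxed_value > 0.5) & (loading > 0.6)).astype(int)

X = np.column_stack([relaxed_value, reduced_cost, loading])
X_tr, X_te, y_tr, y_te = train_test_split(X, label, random_state=1)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Inside the column-generation loop, the predicted values are used to fix the
# binaries in the subproblem; optimality can still be guaranteed by verifying
# the resulting column (e.g., its reduced cost) and falling back to the exact
# mixed-integer subproblem whenever the check fails.
```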
Procedia PDF Downloads 86
99 Medical Examiner Collection of Comprehensive, Objective Medical Evidence for Conducted Electrical Weapons and Their Temporal Relationship to Sudden Arrest
Authors: Michael Brave, Mark Kroll, Steven Karch, Charles Wetli, Michael Graham, Sebastian Kunz, Dorin Panescu
Abstract:
Background: Conducted electrical weapons (CEW) are now used in 107 countries and are a common law enforcement less-lethal force practice in the United Kingdom (UK), United States of America (USA), Canada, Australia, New Zealand, and others. Use of these devices is rarely temporally associated with the occurrence of sudden arrest-related deaths (ARD). Because such deaths are uncommon, few Medical Examiners (MEs) ever encounter one, and even fewer offices have established comprehensive investigative protocols. Without sufficient scientific data, the role, if any, played by a CEW in a given case is largely supplanted by conjecture, often defaulting to a CEW-induced fatal cardiac arrhythmia. In addition to the difficulty in investigating individual deaths, the lack of information also detrimentally affects the ability to define and evaluate the ARD cohort generally. More comprehensive, better information leads to better interpretation in individual cases and also to better research. The purpose of this presentation is to provide MEs with a comprehensive evidence-based checklist to assist in the assessment of CEW-ARD cases. Methods: PUBMED and Sociology/Criminology databases were queried to find all medical, scientific, electrical, modeling, engineering, and sociology/criminology peer-reviewed literature for mentions of CEW or synonymous terms. Each paper was then individually reviewed to identify those that discussed possible bioelectrical mechanisms relating CEW to ARD. A Naranjo-type pharmacovigilance algorithm was also employed, when relevant, to identify and quantify possible direct CEW electrical myocardial stimulation. Additionally, CEW operational manuals and training materials were reviewed to allow incorporation of CEW-specific technical parameters. Results: Total relevant PUBMED citations of CEWs were fewer than 250, and reports of death were extremely rare. Much relevant information was available from Sociology/Criminology databases. Once the relevant published papers were identified and reviewed, we compiled an annotated checklist of data that we consider critical to a thorough CEW-involved ARD investigation. Conclusion: We have developed an evidence-based checklist that can be used by MEs and their staffs to assist them in identifying, collecting, documenting, maintaining, and objectively analyzing the role, if any, played by a CEW in any specific case of sudden death temporally associated with the use of a CEW. Even in cases where the collected information is deemed by the ME as insufficient for formulating an opinion or diagnosis to a reasonable degree of medical certainty, information collected as per the checklist will often be adequate for other stakeholders to use as a basis for informed decisions. Having reviewed the appropriate materials, in a significant number of cases careful examination of the heart and brain is likely adequate. Channelopathy testing should be considered in some cases; however, it may be considered cost-prohibitive (approx. $3,000). Law enforcement agencies may want to consider establishing a reserve fund to help manage such rare cases. The expense may forestall the enormous costs associated with incident-precipitated litigation. Keywords: ARD, CEW, police, TASER
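The abstract mentions a Naranjo-type pharmacovigilance algorithm; the sketch below shows only the generic scoring-to-category logic of the original Naranjo adverse-reaction scale (total score of 9 or more: definite; 5 to 8: probable; 1 to 4: possible; 0 or less: doubtful). The item wording adapted to CEW exposure is a hypothetical placeholder, not the authors' instrument.

```python
# Generic Naranjo-style causality scoring: each item contributes a small
# positive or negative score; the total maps to a causality category.
ITEMS = {
    # hypothetical CEW-adapted wording                         score if "yes"
    "previous conclusive reports of this reaction":            1,
    "event appeared after the exposure":                       2,
    "alternative causes could explain the event":             -1,
    "objective evidence (e.g., ECG capture) supports effect":  1,
}

def naranjo_category(total_score: int) -> str:
    """Map a total score to the standard Naranjo causality category."""
    if total_score >= 9:
        return "definite"
    if total_score >= 5:
        return "probable"
    if total_score >= 1:
        return "possible"
    return "doubtful"

answers = {
    "previous conclusive reports of this reaction": True,
    "event appeared after the exposure": True,
    "alternative causes could explain the event": True,
    "objective evidence (e.g., ECG capture) supports effect": False,
}

score = sum(s for item, s in ITEMS.items() if answers.get(item))
print(score, naranjo_category(score))   # 2 -> "possible"
```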
Procedia PDF Downloads 347
98 Long-Term Exposure Assessments for Cooking Workers Exposed to Polycyclic Aromatic Hydrocarbons and Aldehydes Contained in Cooking Fumes
Authors: Chun-Yu Chen, Kua-Rong Wu, Yu-Cheng Chen, Perng-Jy Tsai
Abstract:
Cooking fumes are known to contain polycyclic aromatic hydrocarbons (PAHs) and aldehydes, and some of them have been proven carcinogenic or possibly carcinogenic to humans. Considering their chronic health effects, long-term exposure data are required for assessing cooking workers’ lifetime health risks. Previous exposure assessment studies, due to both time and cost constraints, were mostly based on cross-sectional data. Therefore, establishing long-term exposure data has become an important issue for conducting health risk assessments for cooking workers. An approach was proposed in this study. Here, the generation rates of both PAHs and aldehydes from a cooking process were determined by placing a sampling train exactly under the exhaust fan under both the total enclosure condition and the normal operating condition, respectively. Subtracting the concentration collected under the latter (representing the hood-collected concentration) from that collected under the former (representing the total emitted concentration), the fugitive emitted concentration was determined. The above data were further converted to generation rates based on the flow rates specified for the exhaust fan. The determinations of the above generation rates were conducted in a testing chamber with a selected cooking process (deep-frying chicken nuggets in 3 L of peanut oil at 200°C). The sampling train installed under the exhaust fan consisted of an IOM inhalable sampler with a glass fiber filter for collecting particle-phase PAHs, followed by an XAD-2 tube for gas-phase PAHs. The same train was also used to sample aldehydes; it was, however, installed with a filter pre-coated with DNPH and followed by a 2,4-DNPH cartridge for collecting particle-phase and gas-phase aldehydes, respectively. PAH and aldehyde samples were analyzed by GC/MS-MS (Agilent 7890B) and HPLC-UV (HITACHI L-7100), respectively. The obtained generation rates of both PAHs and aldehydes were applied to the near-field/far-field exposure model to estimate the exposures of cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration). For validation purposes, both PAH and aldehyde samplings were conducted simultaneously using the same sampling train at both the near-field and far-field sites of the testing chamber. The sampling results, together with the use of a mixed-effect model, were used to calibrate the estimated near-field/far-field exposures. In the present study, the obtained emission rates were further converted to emission factors of both PAHs and aldehydes according to the amount of cooking oil consumed. Applying the long-term cooking oil consumption records, the emission rates for both PAHs and aldehydes were determined, and the long-term exposure databanks for cooks (the estimated near-field concentrations) and helpers (the estimated far-field concentrations) were then established. Results show that the proposed approach was adequate to determine the generation rates of both PAHs and aldehydes under various fan exhaust flow rate conditions. The estimated near-field/far-field exposures, though significantly different from those obtained in the field, can be calibrated using the mixed-effect model. Finally, the established long-term databank could provide a useful basis for conducting long-term exposure assessments for cooking workers exposed to PAHs and aldehydes. Keywords: aldehydes, cooking oil fumes, long-term exposure assessment, modeling, polycyclic aromatic hydrocarbons (PAHs)
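A minimal sketch of the steady-state two-zone (near-field/far-field) model referred to above is given below: the far-field concentration is the generation rate divided by the room ventilation rate, and the near-field concentration adds a term governed by the interzonal airflow. The generation rate, ventilation rate, and interzonal flow values are illustrative assumptions, not the study's measured parameters.

```python
def two_zone_steady_state(G, Q, beta):
    """Steady-state concentrations of the classic two-zone exposure model.

    G    : contaminant generation rate (mg/min)
    Q    : room (far-field) ventilation rate (m^3/min)
    beta : interzonal airflow between near field and far field (m^3/min)

    Returns (C_near, C_far) in mg/m^3:
        C_far  = G / Q
        C_near = G / Q + G / beta
    """
    c_far = G / Q
    c_near = c_far + G / beta
    return c_near, c_far

# Illustrative numbers only: 0.5 mg/min emitted, 20 m^3/min room ventilation,
# 3 m^3/min interzonal flow around the cook's breathing zone.
c_cook, c_helper = two_zone_steady_state(G=0.5, Q=20.0, beta=3.0)
print(f"cook (near field): {c_cook:.3f} mg/m^3, helper (far field): {c_helper:.3f} mg/m^3")
```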
Procedia PDF Downloads 142
97 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra that would otherwise superimpose within a single energy peak and, as such, could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when the identification of radionuclides, as well as their activity concentrations, is being performed and high precision is a necessity. In measurements of this nature, in order to be able to reproduce good and trustworthy results, one has first to perform an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., efficiency curves for a given detector-sample configuration and its geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if they are not properly taken into account. In this study, the optimisation of models of two HPGe detectors through the implementation of the Geant4 toolkit developed at CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional coaxial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector over the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively. Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
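The model-optimisation loop described above can be sketched as follows: the FEP efficiency is computed from simulated peak counts, compared with the experimental efficiency (ε = N_peak / (A · t · I_γ)), and a detector parameter (here a dead-layer thickness) is scanned to minimise the relative deviation. The simulate_fep_efficiency function is a toy stand-in for a full Geant4 run, and all numbers are purely illustrative.

```python
import numpy as np

def simulate_fep_efficiency(dead_layer_mm, energy_kev, n_primaries=1_000_000, rng=None):
    """Placeholder for a Geant4 run: fraction of emitted photons depositing
    their full energy in the crystal, for a given dead-layer thickness.
    (Toy model; a real run would transport photons through the full geometry.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    base = 0.08 * np.exp(-energy_kev / 900.0)      # toy intrinsic FEP efficiency
    attenuation = np.exp(-0.3 * dead_layer_mm)     # toy loss in the dead layer
    return rng.binomial(n_primaries, base * attenuation) / n_primaries

# Experimental FEP efficiencies, eps = N_peak / (A * t * I_gamma), at a few
# energies of point-like calibration sources (illustrative values only).
energies = np.array([59.4, 661.7, 1332.5])
eff_exp = np.array([0.055, 0.035, 0.020])

best = None
for dead_layer in np.arange(0.3, 1.21, 0.1):       # scan the free detector parameter
    eff_sim = np.array([simulate_fep_efficiency(dead_layer, e) for e in energies])
    deviation = np.mean(np.abs(eff_sim - eff_exp) / eff_exp)
    if best is None or deviation < best[1]:
        best = (dead_layer, deviation)

print(f"best dead layer ≈ {best[0]:.1f} mm, mean relative deviation {best[1]:.1%}")
```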
Procedia PDF Downloads 121
96 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain, in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as Langmuir and Brunauer-Emmett-Teller (BET) theories are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites to a monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer and the energetics are equal to the adsorbate as a bulk liquid. This BET method is widely used to measure the specific surface area of materials. Both Langmuir and BET models assume that the affinity of the gas for all adsorption sites are identical and so the calculated adsorbent uptake at the monolayer and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K and for nitrogen on mesoporous alumina at 77 K with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) variations of the adsorption sites.Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
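For reference, the constant-parameter forms of the two isotherms discussed above can be fitted to uptake-versus-pressure data with nonlinear least squares, as in the minimal sketch below. The data points are synthetic and the pressure-varying (FLS-PVLR) estimation itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_m, K):
    """Langmuir isotherm: q = q_m * K * p / (1 + K * p)."""
    return q_m * K * p / (1.0 + K * p)

def bet(p_rel, q_m, c):
    """BET isotherm in relative pressure x = p/p0:
    q = q_m * c * x / ((1 - x) * (1 - x + c * x))."""
    x = p_rel
    return q_m * c * x / ((1.0 - x) * (1.0 - x + c * x))

# Synthetic uptake data (illustrative, not the n-butane/activated-carbon data)
p = np.linspace(0.01, 1.0, 20)                     # pressure (arbitrary units)
q_obs = langmuir(p, q_m=2.5, K=4.0) + np.random.default_rng(0).normal(0, 0.02, p.size)

(q_m_fit, K_fit), _ = curve_fit(langmuir, p, q_obs, p0=[1.0, 1.0])
print(f"Langmuir fit: q_m = {q_m_fit:.2f}, K = {K_fit:.2f}")

# BET is usually fitted against relative pressure p/p0 in roughly the 0.05-0.35 range
p_rel = np.linspace(0.05, 0.35, 15)
q_bet_obs = bet(p_rel, q_m=1.8, c=60.0)
(q_m_bet, c_bet), _ = curve_fit(bet, p_rel, q_bet_obs, p0=[1.0, 10.0])
print(f"BET fit: q_m = {q_m_bet:.2f}, c = {c_bet:.2f}")
```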
Procedia PDF Downloads 234
95 An Empirical Study of Determinants Influencing Telemedicine Services Acceptance by Healthcare Professionals: Case of Selected Hospitals in Ghana
Authors: Jonathan Kissi, Baozhen Dai, Wisdom W. K. Pomegbe, Abdul-Basit Kassim
Abstract:
Protecting patients’ digital information is a growing concern for healthcare institutions, as people nowadays increasingly live their lives through telemedicine services. These telemedicine services have been confronted with several determinants that hinder their successful implementation, especially in developing countries. Identifying such determinants that influence the acceptance of telemedicine services is also a problem for healthcare professionals. Despite the tremendous increase in telemedicine services, their adoption and use have been quite slow in some healthcare settings. Generally, it is accepted in today’s globalizing world that the success of telemedicine services relies on users’ satisfaction. Satisfying health professionals and patients is one of the crucial objectives of telemedicine success. This study seeks to investigate the determinants that influence health professionals’ intention to utilize telemedicine services in clinical activities in Ghana, a sub-Saharan country in West Africa. A hybridized model comprising health adoption models, including technology acceptance theory, diffusion of innovation theory, and protection motivation theory, was used to investigate these quandaries. The study was carried out in four government health institutions that apply and regulate telemedicine services in their clinical activities. A structured questionnaire was developed and used for data collection. Purposive and convenience sampling methods were used in the selection of healthcare professionals from different medical fields for the study. The collected data were analyzed using a structural equation modeling (SEM) approach. All selected constructs showed a significant relationship with health professionals’ behavioral intention in the direction expected from prior literature, including perceived usefulness, perceived ease of use, management strategies, financial sustainability, communication channels, patient security threat, patient privacy risk, self-efficacy, actual service use, user satisfaction, and telemedicine system security threat. Surprisingly, user characteristics and response efficacy of health professionals were not significant in the hybridized model. The findings and insights from this research show that health professionals are pragmatic when making choices about technology applications and about their willingness to use telemedicine services. They are, however, anxious about its threats and coping appraisals. The significant constructs identified in the study may help to increase efficiency, quality of services, quality patient care delivery, and user satisfaction among healthcare professionals. The implementation and effective utilization of telemedicine services in the selected hospitals will serve as a strategy to eradicate hardships in healthcare service delivery. The service will help attain universal health coverage for the whole populace. This study contributes to empirical knowledge by identifying the vital factors influencing health professionals’ behavioral intentions to adopt telemedicine services. The study will also help healthcare stakeholders formulate better policies towards telemedicine service usage. Keywords: telemedicine service, perceived usefulness, perceived ease of use, management strategies, security threats
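The structural equation modeling step mentioned above could be set up as in the minimal sketch below, which uses the semopy package with lavaan-style syntax; the abstract does not name the software used, so this choice, the construct and indicator names, and the input file are all illustrative assumptions rather than the study's actual model.

```python
import pandas as pd
from semopy import Model

# Lavaan-style description: latent constructs measured by survey items (=~),
# and behavioral intention regressed on two example constructs (~).
desc = """
PU =~ pu1 + pu2 + pu3
PEOU =~ peou1 + peou2 + peou3
BI =~ bi1 + bi2
BI ~ PU + PEOU
"""

data = pd.read_csv("survey_responses.csv")   # hypothetical item-level survey data

model = Model(desc)
model.fit(data)                 # estimates loadings and path coefficients
print(model.inspect())          # parameter estimates, standard errors, p-values
```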
Procedia PDF Downloads 142
94 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross over classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor’s invention: holography. However, the major difficulty here is the lack of a suitable recording medium. Some enhancements were therefore essential, and the 2D version of bulk metamaterials has been introduced: the so-called metasurface. This new class of interfaces simplifies the problem of the recording medium, with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods are emerging as an important approach for studying electromagnetics from microwave to optical frequencies. The method of moment provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation tends to become more complicated and time-consuming as the structural complexity increases. Here, the use of the equivalent circuit method offers the most scalable way to develop an integral method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of the Generalised Equivalent Circuit was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity plane paradigm, taking into account its environment. The electromagnetic state of the discontinuity plane is thus described by generalised test functions, which are modelled by virtual sources that do not store energy. The environmental effects are included by the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combine the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, a SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moment combined with generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit-cell input impedance. It was found that varying the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we were able to determine that the phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length because it offers full 2π reflection phase control. Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
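The tunability discussed above hinges on graphene's complex surface conductivity varying with chemical potential. The sketch below evaluates the standard intraband (Drude-like) term of the Kubo conductivity at a terahertz frequency for a few bias levels; the relaxation time and temperature are assumed values, and the paper's MoM-GEC formulation itself is not reproduced.

```python
import numpy as np

# Physical constants (SI)
E_CHARGE = 1.602176634e-19     # C
HBAR = 1.054571817e-34         # J*s
K_B = 1.380649e-23             # J/K

def graphene_sigma_intra(freq_hz, mu_c_ev, tau_s=1e-13, temp_k=300.0):
    """Intraband (Drude-like) surface conductivity of graphene [S], from the
    Kubo formula, as a function of chemical potential mu_c (eV).
    The relaxation time tau and temperature are illustrative assumptions."""
    omega = 2.0 * np.pi * freq_hz
    mu_c = mu_c_ev * E_CHARGE
    kT = K_B * temp_k
    prefactor = E_CHARGE**2 * kT * tau_s / (np.pi * HBAR**2 * (1.0 - 1j * omega * tau_s))
    filling = mu_c / kT + 2.0 * np.log(np.exp(-mu_c / kT) + 1.0)
    return prefactor * filling

# Sweep the chemical potential at 2 THz, as done when biasing the graphene patches
freq = 2e12
for mu_c in (0.1, 0.3, 0.5, 0.7):
    sigma = graphene_sigma_intra(freq, mu_c)
    print(f"mu_c = {mu_c:.1f} eV -> sigma = {sigma.real*1e3:.2f} + {sigma.imag*1e3:.2f}j mS")
```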
Procedia PDF Downloads 177
93 Towards an Effective Approach for Modelling Near-Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions of an interpolated point. Quantifying local UHI for extensive areas based on weather stations’ observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured upon two main steps. First, a GWR model has been set up to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 km spatial resolution, have been employed. Two time periods are considered according to the satellite overpass times, i.e., 10:30 am and 9:30 pm. Afterward, the results have been downscaled to 30 m spatial resolution by fitting a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 m. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 m), have been validated through a cross-validation that relies on indicators such as R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively. Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
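A minimal sketch of the local weighted regression at the core of GWR is shown below: for each target location, stations are weighted by a Gaussian kernel of distance and a weighted least-squares fit of air temperature on LST is solved, capturing the spatial non-stationarity described above. Bandwidth, coordinates, and data values are illustrative; production work would typically rely on a dedicated GWR package rather than this hand-rolled version.

```python
import numpy as np

def gwr_predict(target_xy, station_xy, station_temp, station_lst, target_lst, bandwidth):
    """Estimate air temperature at one location with a locally weighted
    regression (intercept + LST term), using a Gaussian distance kernel."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)            # Gaussian kernel weights
    X = np.column_stack([np.ones_like(station_lst), station_lst])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ station_temp)   # weighted least squares
    return beta[0] + beta[1] * target_lst

# Illustrative data: 6 stations with coordinates (km), observed Tair (°C) and LST (°C)
station_xy = np.array([[0, 0], [2, 1], [5, 4], [7, 2], [3, 6], [8, 8]], dtype=float)
station_temp = np.array([24.1, 24.8, 26.0, 26.7, 25.2, 27.5])
station_lst = np.array([28.0, 29.1, 31.5, 32.8, 30.0, 34.0])

t_air = gwr_predict(np.array([4.0, 3.0]), station_xy, station_temp, station_lst,
                    target_lst=30.5, bandwidth=3.0)
print(f"estimated near-surface air temperature ≈ {t_air:.1f} °C")
```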
Procedia PDF Downloads 196