Search results for: Verification and Validation (V&V)
56 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP
Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang
Abstract:
Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinders ecological modeling. ClimateAP, a software developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture. However, its adoption remains limited. This study aims to confirm the validity of biologically relevant variable data generated by ClimateAP during the normal climate period through comparison with the currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP in comparison with the commonly used gridded data from WorldClim1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using Adjusted R-Squared and Root Mean Squared Error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and Topographic Position Index. Univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for the climatic niche modeling of Cunninghamia lanceolata. A comparative analysis of the prediction accuracies of RF models constructed with distinct climate data sources was conducted to evaluate their relative effectiveness. Biologically relevant variables were obtained from three unpublished Chinese meteorological datasets. ClimateAPv3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases with 3,745 unique points. ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of monthly maximum, minimum, average temperature, and precipitation variances, respectively. It outperforms WorldClim in 37 biologically relevant variables with lower RMSE values. ClimateAP achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher Adjusted R-squared values across all landforms for precipitation. ClimateAP's temperature data yields lower Adjusted R-squared values than gridded data in high-elevation, rugged, and mountainous areas but achieves higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP is validated based on evaluations using observations from weather stations. The use of ClimateAP leads to an improvement in data quality, especially in non-mountainous regions. The results also suggest that using biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience compared to using gridded data.Keywords: climate data validation, data quality, Asia pacific climate, climatic niche modeling, random forest models, tree species
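A minimal sketch of the kind of per-variable evaluation described above, assuming a station table that holds observed values alongside ClimateAP and WorldClim predictions (file and column names are hypothetical): each biologically relevant variable is regressed univariately and scored with Adjusted R-squared and RMSE for each data source.

```python
# Illustrative sketch (not the authors' code): comparing two climate data sources
# against weather-station observations with univariate regression, Adjusted
# R-squared and RMSE. File layout and column names are assumed.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

def adjusted_r2(r2, n, p=1):
    """Adjusted R-squared for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def evaluate_source(obs, pred):
    """Regress observations on one predicted variable and score the fit."""
    X = pred.to_numpy().reshape(-1, 1)
    y = obs.to_numpy()
    fit = LinearRegression().fit(X, y)
    adj = adjusted_r2(r2_score(y, fit.predict(X)), len(y))
    rmse = float(np.sqrt(mean_squared_error(y, pred)))   # error of prediction vs. observation
    return adj, rmse

# Hypothetical table: one row per station, observed and predicted monthly variables.
stations = pd.read_csv("stations_1961_1990.csv")
for var in [f"Tmax{m:02d}" for m in range(1, 13)]:        # e.g. monthly maximum temperature
    for source in ("ClimateAP", "WorldClim"):
        adj, rmse = evaluate_source(stations[f"obs_{var}"], stations[f"{source}_{var}"])
        print(f"{var} {source}: adj R2={adj:.4f}, RMSE={rmse:.3f}")
```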
Procedia PDF Downloads 68
55 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method
Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner
Abstract:
In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals’ method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS when performed at maximum effort. Critical parameters to maximize UUS speed are frequently discussed; however, this does not apply to most races. In only 3 out of the 16 individual competitive swimming events are athletes likely to attempt to perform UUS with the greatest speed, without thinking of the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximise speed whilst considering energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize the speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g.: force distribution along with the body and force magnitude). The developed methodology, therefore, needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds, (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis. For the kinematic acquisition, the position of several joints along the body and their sequencing were either obtained by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100m pace, 200m pace and 400m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm adopted was specifically developed in order to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculations of the resistive forces (total and applied to each segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g.: sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers’ natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming
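The velocity coupling described above (inflow velocity updated at every time step from the computed force balance) can be illustrated with a toy loop in which the 2-D LES solve is replaced by a placeholder; the mass, frequency and force values below are invented, and the efficiency measure shown is only one possible definition, not the paper's.

```python
# Minimal sketch of the time-step coupling, with the 2-D LES solver stubbed out.
import numpy as np

def solve_flow_step(inflow_velocity, body_kinematics, t, dt):
    """Placeholder for the 2-D LES solve at one time step: would return the net
    streamwise force on the body [N] and the power input of the swimmer [W]."""
    net_force = 10.0 * np.sin(2 * np.pi * 2.2 * t)          # dummy oscillating force
    power_input = 80.0 + 20.0 * np.cos(2 * np.pi * 2.2 * t)  # dummy power input
    return net_force, power_input

mass, dt, n_steps = 75.0, 1e-3, 5000        # swimmer mass [kg], time step [s]
U = 2.0                                     # initial speed after push-off [m/s]
speeds, powers, thrusts = [], [], []
for i in range(n_steps):
    t = i * dt
    Fx, P_in = solve_flow_step(U, None, t, dt)
    U += (Fx / mass) * dt                   # m dU/dt = Fx: inflow updated every step
    speeds.append(U)
    powers.append(P_in)
    thrusts.append(max(Fx, 0.0))

U_mean = np.mean(speeds[-1000:])            # sustained speed, comparable to measurements for validation
eta = U_mean * np.mean(thrusts[-1000:]) / np.mean(powers[-1000:])   # one crude propulsive-efficiency measure
print(f"mean speed {U_mean:.2f} m/s, indicative efficiency {eta:.2f}")
```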
Procedia PDF Downloads 221
54 Early Melt Season Variability of Fast Ice Degradation Due to Small Arctic Riverine Heat Fluxes
Authors: Grace E. Santella, Shawn G. Gallaher, Joseph P. Smith
Abstract:
In order to determine the importance of small-system riverine heat flux on regional landfast sea ice breakup, our study explores the annual spring freshet of the Sagavanirktok River from 2014-2019. Seasonal heat cycling ultimately serves as the driving mechanism behind the freshet; however, as an emerging area of study, the extent to which inland thermodynamics influence coastal tundra geomorphology and connected landfast sea ice has not been extensively investigated in relation to small-scale Arctic river systems. The Sagavanirktok River is a small-to-midsized river system that flows south-to-north on the Alaskan North Slope from the Brooks mountain range to the Beaufort Sea at Prudhoe Bay. Seasonal warming in the spring rapidly melts snow and ice in a northwards progression from the Brooks Range and transitional tundra highlands towards the coast and when coupled with seasonal precipitation, results in a pulsed freshet that propagates through the Sagavanirktok River. The concentrated presence of newly exposed vegetation in the transitional tundra region due to spring melting results in higher absorption of solar radiation due to a lower albedo relative to snow-covered tundra and/or landfast sea ice. This results in spring flood runoff that advances over impermeable early-season permafrost soils with elevated temperatures relative to landfast sea ice and sub-ice flow. We examine the extent to which interannual temporal variability influences the onset and magnitude of river discharge by analyzing field measurements from the United States Geological Survey (USGS) river and meteorological observation sites. Rapid influx of heat to the Arctic Ocean via riverine systems results in a noticeable decay of landfast sea ice independent of ice breakup seaward of the shear zone. Utilizing MODIS imagery from NASA’s Terra satellite, interannual variability of river discharge is visualized, allowing for optical validation that the discharge flow is interacting with landfast sea ice. Thermal erosion experienced by sediment fast ice at the arrival of warm overflow preconditions the ice regime for rapid thawing. We investigate the extent to which interannual heat flux from the Sagavanirktok River’s freshet significantly influences the onset of local landfast sea ice breakup. The early-season warming of atmospheric temperatures is evidenced by the presence of storms which introduce liquid, rather than frozen, precipitation into the system. The resultant decreased albedo of the transitional tundra supports the positive relationship between early-season precipitation events, inland thermodynamic cycling, and degradation of landfast sea ice. Early removal of landfast sea ice increases coastal erosion in these regions and has implications for coastline geomorphology which stress industrial, ecological, and humanitarian infrastructure.Keywords: Albedo, freshet, landfast sea ice, riverine heat flux, seasonal heat cycling
Procedia PDF Downloads 132
53 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique is investigated to such type of flows, at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is very difficult to find in literature. Besides, most of the situations where the Reynolds number effect is evaluated in separated flows are in numerical modelling. The ADV technique has the advantage in providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds Numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream the step were reproduced. The post-processing of the AVD records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out, for the measured variables, using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were relatively low, in general. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique and a good agreement was found. The ADV technique proved to be able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data, to obtain low noise levels, thus decreasing the uncertainty.Keywords: ADV, experimental data, multiple Reynolds number, post-processing
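The uncertainty step can be sketched as follows, assuming a despiked stream-wise velocity record and an arbitrary block length: the moving block bootstrap resamples overlapping blocks so that short-range correlation in the turbulent signal is preserved when estimating a confidence interval for the mean.

```python
# Hedged sketch of a moving block bootstrap for the uncertainty of the mean
# stream-wise velocity from an ADV record; file name and block length are assumed.
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot=2000, seed=0):
    """Resample overlapping blocks of length block_len to rebuild series of the
    original length, returning bootstrap estimates of the mean."""
    rng = np.random.default_rng(seed)
    n = len(x)
    blocks = np.lib.stride_tricks.sliding_window_view(x, block_len)  # all overlapping blocks
    n_needed = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), n_needed)
        resampled = np.concatenate(blocks[idx])[:n]
        means[b] = resampled.mean()
    return means

u = np.loadtxt("vectrino_u_filtered.txt")        # despiked stream-wise velocity [m/s]
boot_means = moving_block_bootstrap(u, block_len=200)
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean u = {u.mean():.4f} m/s, 95% CI [{lo:.4f}, {hi:.4f}]")
```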
Procedia PDF Downloads 149
52 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach
Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz
Abstract:
Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm birth is defined as the beginning of regular uterine contractions, dilation and cervical effacement between 23 and 36 gestation weeks. To the authors' best knowledge, the factors that determine the onset of birth are not yet completely defined. In particular, the influence of sleep habits on preterm births is only weakly studied. The aim of this study is to develop a model to predict premature delivery based on potential risk factors, including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective and descriptive study was performed on 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries: chi-square tests were applied to qualitative variables and t-tests to quantitative variables. Statistically significant differences (p < 0.05) between preterm and term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes and interruptions during the night. In addition to the statistical analysis, machine learning methods were tested to look for a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest and tree bag models were analysed using the caret R package. Cross-validation with 10 folds and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with accuracy 0.91, sensitivity 0.93, specificity 0.89 and precision 0.91. Some well-known preterm birth factors were identified: cervical dilation, maternal BMI, premature rupture of membranes and nuchal translucency analysis in the first trimester. The model also identified other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Monday to Friday, or a change of sleeping habits reflected in the number of hours, the depth of sleep or the lighting of the room. One of the predictive rules obtained by C5.0 is: IF dilation <= 2.95 AND usage of electronic devices before sleeping from Monday to Friday = YES AND change of sleeping habits = YES, THEN preterm. In this work, a model for predicting preterm births is developed, based on machine learning together with noise reduction techniques; the method maximizing performance is the one selected. This model shows the influence of variables related to sleep habits on preterm prediction. Keywords: machine learning, noise reduction, preterm birth, sleep habit
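The study fitted C5.0 and related tree models with R's caret package; an analogous, hypothetical sketch in Python (scikit-learn decision tree, 10-fold cross-validation, and the reported metrics) is shown below. Feature and file names are assumed.

```python
# Analogous sketch in Python (the study itself used C5.0 via R's caret):
# 10-fold cross-validated tree classifier scored with accuracy, sensitivity,
# specificity and precision. File and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

df = pd.read_csv("births_with_sleep_habits.csv")
X = df.drop(columns=["preterm"])          # maternal, fetal and sleep-habit variables
y = df["preterm"]                         # 1 = preterm (<37 weeks), 0 = term

clf = DecisionTreeClassifier(max_depth=5, min_samples_leaf=10, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(clf, X, y, cv=cv)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("accuracy   ", accuracy_score(y, pred))
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("precision  ", precision_score(y, pred))
```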
Procedia PDF Downloads 149
51 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion
Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro
Abstract:
In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual efforts, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and faster delivery without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing the need for repetitive work and manual efforts. Implementing scalable CI/CD for development using cloud services like ECS (Container Management Service), AWS Fargate, ECR (to store Docker images with all dependencies), Serverless Computing (serverless virtual machines), Cloud Log (for monitoring errors and logs), Security Groups (for inside/outside access to the application), Docker Containerization (Docker-based images and container techniques), Jenkins (CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and are capable of accommodating dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure as it scales based on the need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, Serverless Computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions, performance issues, and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by only paying for the resources as they are used and increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment
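A hedged sketch of one piece of such a pipeline, fanning test shards out as parallel Fargate tasks with boto3: the cluster, task definition, subnet and container names below are placeholders, and results would be aggregated afterwards from the task logs.

```python
# Illustrative sketch: launching test shards as parallel ECS Fargate tasks.
# All resource names are placeholders; one RunTask call is issued per shard.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")
TOTAL_SHARDS = 50                                   # e.g. 500 tests split into 50 shards

for shard in range(TOTAL_SHARDS):
    ecs.run_task(
        cluster="qa-automation-cluster",
        launchType="FARGATE",
        taskDefinition="selenium-test-runner:3",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0abc1234"],
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [
                {"name": "test-runner",
                 "environment": [{"name": "SHARD_INDEX", "value": str(shard)}]}
            ]
        },
    )
print(f"Launched {TOTAL_SHARDS} parallel test tasks; results collected from CloudWatch Logs.")
```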
Procedia PDF Downloads 48
50 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques
Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu
Abstract:
Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis—the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to early detect and prevent CHD symptoms in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API Integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models enhancing their predictive accuracy and robustness. Model Development: it developed a machine learning model trained on both real and simulated datasets. Incorporating a variety of algorithms including neural networks and ensemble learning model to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms surpassing existing models. The precision and recall metrics stood at 89% and 91% respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare
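As an illustrative sketch (not the authors' pipeline), an ensemble classifier can be trained on a mix of real and simulated early-CHD records and scored with the metrics reported above; file and column names are assumed.

```python
# Hypothetical sketch: ensemble model on real plus simulated records, reporting
# accuracy, precision and recall as in the study's validation phase.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

real = pd.read_csv("chd_streamed_records.csv")        # heart rate variability, BP, lipids, ECG features
simulated = pd.read_csv("chd_simulated_records.csv")  # synthetic early-symptom records
data = pd.concat([real, simulated], ignore_index=True)

X, y = data.drop(columns=["chd_label"]), data["chd_label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("accuracy ", accuracy_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("recall   ", recall_score(y_te, pred))
```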
Procedia PDF Downloads 67
49 Phage Therapy of Staphylococcal Pyoderma in Dogs
Authors: Jiri Nepereny, Vladimir Vrzal
Abstract:
Staphylococcus intermedius/pseudintermedius bacteria are commonly found on the skin of healthy dogs and can cause pruritic skin diseases under certain circumstances (trauma, allergy, immunodeficiency, ectoparasitosis, endocrinological diseases, glucocorticoid therapy, etc.). These can develop into complicated superficial or deep pyoderma, which represent a large group of problematic skin diseases in dogs. These are predominantly inflammations of a secondary nature, associated with the occurrence of coagulase-positive Staphylococcus spp. A major problem is increased itching, which greatly complicates the healing process. The aim of this work is to verify the efficacy of the developed preparation Bacteriophage SI (Staphylococcus intermedius). The tested preparation contains a lysate of bacterial cells of S. intermedius host culture including culture medium and live virions of specific phage. Sodium Merthiolate is added as a preservative in a safe concentration. Validation of the efficacy of the product was demonstrated by monitoring the therapeutic effect after application to indicated cases from clinical practice. The indication for inclusion of the patient into the trial was an adequate history and clinical examination accompanied by sample collection for bacteriological examination and isolation of the specific causative agent. Isolate identification was performed by API BioMérieux identification system (API ID 32 STAPH) and rep-PCR typing. The suitability of therapy for a specific case was confirmed by in vitro testing of the lytic ability of the bacteriophage to lyse the specific isolate = formation of specific plaques on the culture isolate on the surface of the solid culture medium. So far, a total of 32 dogs of different sexes, ages and breed affiliations with different symptoms of staphylococcal dermatitis have been included in the testing. Their previous therapy consisted of more or less successful systemic or local application of broad-spectrum antibiotics. The presence of S. intermedius/pseudintermedius has been demonstrated in 26 cases. The isolates were identified as a S. pseudintermedius, in all cases. Contaminant bacterial microflora was always present in the examined samples. The test product was applied subcutaneously in gradually increasing doses over a period of 1 month. After improvement in health status, maintenance therapy was followed by application of the product once a week for 3 months. Adverse effects associated with the administration of the product (swelling at the site of application) occurred in only 2 cases. In all cases, there was a significant reduction in clinical signs (healing of skin lesions and reduction of inflammation) after therapy and an improvement in the well-being of the treated animals. A major problem in the treatment of pyoderma is the frequent resistance of the causative agents to antibiotics, especially the increasing frequency of multidrug-resistant and methicillin-resistant S. pseudintermedius (MRSP) strains. Specific phagolysate using for the therapy of these diseases could solve this problem and to some extent replace or reduce the use of antibiotics, whose frequent and widespread application often leads to the emergence of resistance. The advantage of the therapeutic use of bacteriophages is their bactericidal effect, high specificity and safety. This work was supported by Project FV40213 from Ministry of Industry and Trade, Czech Republic.Keywords: bacteriophage, pyoderma, staphylococcus spp, therapy
Procedia PDF Downloads 173
48 A Single Cell Omics Experiments as Tool for Benchmarking Bioinformatics Oncology Data Analysis Tools
Authors: Maddalena Arigoni, Maria Luisa Ratto, Raffaele A. Calogero, Luca Alessandri
Abstract:
The presence of tumor heterogeneity, where distinct cancer cells exhibit diverse morphological and phenotypic profiles, including gene expression, metabolism, and proliferation, poses challenges for molecular prognostic markers and patient classification for targeted therapies. Understanding the causes and progression of cancer requires research efforts aimed at characterizing heterogeneity, which can be facilitated by evolving single-cell sequencing technologies. However, analyzing single-cell data necessitates computational methods that often lack objective validation. Therefore, the establishment of benchmarking datasets is necessary to provide a controlled environment for validating bioinformatics tools in the field of single-cell oncology. Benchmarking bioinformatics tools for single-cell experiments can be costly due to the high expense involved. Therefore, datasets used for benchmarking are typically sourced from publicly available experiments, which often lack a comprehensive cell annotation. This limitation can affect the accuracy and effectiveness of such experiments as benchmarking tools. To address this issue, we introduce omics benchmark experiments designed to evaluate bioinformatics tools to depict the heterogeneity in single-cell tumor experiments. We conducted single-cell RNA sequencing on six lung cancer tumor cell lines that display resistant clones upon treatment of EGFR mutated tumors and are characterized by driver genes, namely ROS1, ALK, HER2, MET, KRAS, and BRAF. These driver genes are associated with downstream networks controlled by EGFR mutations, such as JAK-STAT, PI3K-AKT-mTOR, and MEK-ERK. The experiment also featured an EGFR-mutated cell line. Using 10XGenomics platform with cellplex technology, we analyzed the seven cell lines together with a pseudo-immunological microenvironment consisting of PBMC cells labeled with the Biolegend TotalSeq™-B Human Universal Cocktail (CITEseq). This technology allowed for independent labeling of each cell line and single-cell analysis of the pooled seven cell lines and the pseudo-microenvironment. The data generated from the aforementioned experiments are available as part of an online tool, which allows users to define cell heterogeneity and generates count tables as an output. The tool provides the cell line derivation for each cell and cell annotations for the pseudo-microenvironment based on CITEseq data by an experienced immunologist. Additionally, we created a range of pseudo-tumor tissues using different ratios of the aforementioned cells embedded in matrigel. These tissues were analyzed using 10XGenomics (FFPE samples) and Curio Bioscience (fresh frozen samples) platforms for spatial transcriptomics, further expanding the scope of our benchmark experiments. The benchmark experiments we conducted provide a unique opportunity to evaluate the performance of bioinformatics tools for detecting and characterizing tumor heterogeneity at the single-cell level. Overall, our experiments provide a controlled and standardized environment for assessing the accuracy and robustness of bioinformatics tools for studying tumor heterogeneity at the single-cell level, which can ultimately lead to more precise and effective cancer diagnosis and treatment.Keywords: single cell omics, benchmark, spatial transcriptomics, CITEseq
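One hypothetical way such a benchmark could be exercised is to score a candidate tool's cluster assignments against the known cell-line-of-origin annotation, for example with the Adjusted Rand Index; the file and column names below are assumed.

```python
# Sketch of benchmark scoring: compare a tool's clusters to the known cell-line
# labels (from cellplex multiplexing) using the Adjusted Rand Index.
import pandas as pd
from sklearn.metrics import adjusted_rand_score

truth = pd.read_csv("cell_annotations.csv", index_col="barcode")   # cell line per barcode
tool = pd.read_csv("tool_clusters.csv", index_col="barcode")       # cluster assigned by the tool under test

common = truth.index.intersection(tool.index)
ari = adjusted_rand_score(truth.loc[common, "cell_line"], tool.loc[common, "cluster"])
print(f"Adjusted Rand Index vs. known cell lines: {ari:.3f}")
```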
Procedia PDF Downloads 119
47 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictor, or independent variable) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict the future, or otherwise unknown events or outcome on a single or person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate metabolic syndrome predictors hypothesized (theory testing). A proposed theoretical model forms with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In the context, this study is to demonstrate how to use our predictive analytics to support theory building (i.e., hypothesis generation). For the purpose, this study utilized a big data predictive analytics platform TM based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows in a data set. Clusters can be ranked using importance metrics, such as node size (number of items), frequency, surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training data is plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, associated with sociodemographic, habits, and activities. Some are intentionally included to get predictive analytics insights on variable selection such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. Then the rules are validated with an external testing dataset including 4,090 observations. Results as a kind of inductive reasoning show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). Next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of a phenomenon interested in. If hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a part of observations such as bootstrap resampling with an appropriate sample size.Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
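The co-occurrence idea can be illustrated with a minimal sketch that counts item pairs across observations and ranks them by "surprise" (observed versus expected frequency); the items shown are invented, and the real platform builds full clusters rather than pairs.

```python
# Minimal sketch of co-occurrence counting and surprise ranking (pairs only).
from itertools import combinations
from collections import Counter

rows = [
    {"male", "smoker", "high_BMI", "metabolic_syndrome"},
    {"female", "no_vaccination", "high_BMI"},
    {"male", "smoker", "metabolic_syndrome"},
]  # each row = the items observed together for one person (hypothetical)

item_count, pair_count, n = Counter(), Counter(), len(rows)
for row in rows:
    item_count.update(row)
    pair_count.update(frozenset(p) for p in combinations(sorted(row), 2))

def surprise(pair):
    """Observed pair frequency divided by the frequency expected under independence."""
    a, b = tuple(pair)
    observed = pair_count[pair] / n
    expected = (item_count[a] / n) * (item_count[b] / n)
    return observed / expected

for pair in sorted(pair_count, key=surprise, reverse=True)[:5]:
    print(set(pair), f"surprise={surprise(pair):.2f}", f"count={pair_count[pair]}")
```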
Procedia PDF Downloads 277
46 Development and Evaluation of a Cognitive Behavioural Therapy Based Smartphone App for Low Moods and Anxiety
Authors: David Bakker, Nikki Rickard
Abstract:
Smartphone apps hold immense potential as mental health and wellbeing tools. Support can be made easily accessible and can be used in real-time while users are experiencing distress. Furthermore, data can be collected to enable machine learning and automated tailoring of support to users. While many apps have been developed for mental health purposes, few have adhered to evidence-based recommendations and even fewer have pursued experimental validation. This paper details the development and experimental evaluation of an app, MoodMission, that aims to provide support for low moods and anxiety, help prevent clinical depression and anxiety disorders, and serve as an adjunct to professional clinical supports. MoodMission was designed to deliver cognitive behavioural therapy for specifically reported problems in real-time, momentary interactions. Users report their low moods or anxious feelings to the app along with a subjective units of distress scale (SUDS) rating. MoodMission then provides a choice of 5-10 short, evidence-based mental health strategies called Missions. Users choose a Mission, complete it, and report their distress again. Automated tailoring, gamification, and in-built data collection for analysis of effectiveness was also included in the app’s design. The development process involved construction of an evidence-based behavioural plan, designing of the app, building and testing procedures, feedback-informed changes, and a public launch. A randomized controlled trial (RCT) was conducted comparing MoodMission to two other apps and a waitlist control condition. Participants completed measures of anxiety, depression, well-being, emotional self-awareness, coping self-efficacy and mental health literacy at the start of their app use and 30 days later. At the time of submission (November 2016) over 300 participants have participated in the RCT. Data analysis will begin in January 2017. At the time of this submission, MoodMission has over 4000 users. A repeated-measures ANOVA of 1390 completed Missions reveals that SUDS (0-10) ratings were significantly reduced between pre-Mission ratings (M=6.20, SD=2.39) and post-Mission ratings (M=4.93, SD=2.25), F(1,1389)=585.86, p < .001, np2=.30. This effect was consistent across both low moods and anxiety. Preliminary analyses of the data from the outcome measures surveys reveal improvements across mental health and wellbeing measures as a result of using the app over 30 days. This includes a significant increase in coping self-efficacy, F(1,22)=5.91, p=.024, np2=.21. Complete results from the RCT in which MoodMission was evaluated will be presented. Results will also be presented from the continuous outcome data being recorded by MoodMission. MoodMission was successfully developed and launched, and preliminary analysis suggest that it is an effective mental health and wellbeing tool. In addition to the clinical applications of MoodMission, the app holds promise as a research tool to conduct component analysis of psychological therapies and overcome restraints of laboratory based studies. The support provided by the app is discrete, tailored, evidence-based, and transcends barriers of stigma, geographic isolation, financial limitations, and low health literacy.Keywords: anxiety, app, CBT, cognitive behavioural therapy, depression, eHealth, mission, mobile, mood, MoodMission
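The pre/post Mission comparison can be sketched as a paired analysis of the SUDS ratings; the file and column names are assumed, and for a single repeated-measures contrast the partial eta-squared reduces to t^2 / (t^2 + df), which is consistent with the F(1,1389) and np2 values reported above.

```python
# Hedged sketch of the pre- vs. post-Mission SUDS comparison with scipy.
import pandas as pd
from scipy import stats

missions = pd.read_csv("completed_missions.csv")      # one row per completed Mission (hypothetical file)
pre, post = missions["suds_pre"], missions["suds_post"]

t, p = stats.ttest_rel(pre, post)
df = len(missions) - 1
eta_p2 = t**2 / (t**2 + df)                            # partial eta-squared for a 1-df contrast
print(f"pre M={pre.mean():.2f}, post M={post.mean():.2f}, "
      f"t({df})={t:.2f}, p={p:.3g}, partial eta^2={eta_p2:.2f}")
```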
Procedia PDF Downloads 272
45 Development and Implementation of An "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools
Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal
Abstract:
The concept of “electric island” is involved with achieving the balance between the self-power generation ability of each educational institution and energy consumption demand. Photo-Voltaic (PV) solar system installed on the roofs of educational buildings is a common way to absorb the available solar energy and generate electricity for self-consumption and even for returning to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology will be developed to provide a platform to estimate energy consumption performance classified by rooms and subspaces rather than the more common macroscale monitoring of the whole building. The monitoring platform will be established on the experimental sites, enabling an estimation and further analysis of the variety of environmental and physical conditions. For each building, separate measurement configurations will be applied taking into account the specific requirements, restrictions, location and infrastructure issues. The direct results of the measurements will be analyzed to provide deeper understanding of the impact of environmental conditions and sustainability construction standards, not only on the energy demand of public building, but also on the energy consumption habits of the children that study in those schools and the educational and administrative staff that is responsible for providing the thermal comfort conditions and healthy studying atmosphere for the children. A monitoring methodology being developed in this research is providing online access to real-time data of Interferential Therapy (IFTs) from any mobile phone or computer by simply browsing the dedicated website, providing powerful tools for policy makers for better decision making while developing PV production infrastructure to achieve “electric islands” in educational buildings. A detailed measurement configuration was technically designed based on the specific conditions and restriction of each of the pilot buildings. A monitoring and analysis methodology includes a large variety of environmental parameters inside and outside the schools to investigate the impact of environmental conditions both on the energy performance of the school and educational abilities of the children. Indoor measurements are mandatory to acquire the energy consumption data, temperature, humidity, carbon dioxide and other air quality conditions in different parts of the building. In addition to that, we aim to study the awareness of the users to the energy consideration and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for proper design of the off-grid energy supply system and validation of its sufficient capacity. The suggested outcomes of this research include: 1. both experimental sites are designed to have PV production and storage capabilities; 2. Developing an online information feedback platform. The platform will provide consumer dedicated information to academic researchers, municipality officials and educational staff and students; 3. Designing an environmental work path for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.Keywords: sustainability, electric island, IOT, smart building
Procedia PDF Downloads 180
44 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing
Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas
Abstract:
This work proposes a Cooperation-Competitive (Coopetitive) approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ) and government funds from National Council for Science and Technology (CONACYT) or some other international organizations. To work on an overall knowledge transfer strategy with e-learning over the Cloud, where experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation and knowledge transfer at large scale using a Cloud Computing platform. Allowing teachers and students to have all the information required to ensure a homologated nationally knowledge of topics such as mathematics, statistics, chemistry, history, ethics, civism, etc. This work will start with a pilot test in Spanish and initially in two regional dialects Otomí and Náhuatl. Otomí has more than 285,000 speaking indigenes in Queretaro and Mexico´s central region. Náhuatl is number one indigenous dialect spoken in Mexico with more than 1,550,000 indigenes. The phase one of the project takes into account negotiations with indigenous tribes from different regions, and the Information and Communication technologies to deliver the knowledge to the indigenous schools in their native dialect. The methodology includes the following main milestones: Identification of the indigenous areas where Otomí and Náhuatl are the spoken dialects, research with the SEP the location of actual indigenous schools, analysis and inventory or current schools conditions, negotiation with tribe chiefs, analysis of the technological communication requirements to reach the indigenous communities, identification and inventory of local teachers technology knowledge, selection of a pilot topic, analysis of actual student competence with traditional education system, identification of local translators, design of the e-learning platform, design of the multimedia resources and storage strategy for “Cloud Computing”, translation of the topic to both dialects, Indigenous teachers training, pilot test, course release, project follow up, analysis of student requirements for the new technological platform, definition of a new and improved proposal with greater reach in topics and regions. Importance of phase one of the project is multiple, it includes the proposal of a working technological scheme, focusing in the cultural impact in Mexico so that indigenous tribes can improve their knowledge about new forms of crop improvement, home storage technologies, proven home remedies for common diseases, ways of preparing foods containing major nutrients, disclose strengths and weaknesses of each region, communicating through cloud computing platforms offering regional products and opening communication spaces for inter-indigenous cultural exchange.Keywords: Mexicans indigenous tribes, education, knowledge transfer, cloud computing, otomi, Náhuatl, language
Procedia PDF Downloads 407
43 Implementation of a Web-Based Clinical Outcomes Monitoring and Reporting Platform across the Fortis Network
Authors: Narottam Puri, Bishnu Panigrahi, Narayan Pendse
Abstract:
Background: Clinical Outcomes are the globally agreed upon, evidence-based measurable changes in health or quality of life resulting from the patient care. Reporting of outcomes and its continuous monitoring provides an opportunity for both assessing and improving the quality of patient care. In 2012, International Consortium Of HealthCare Outcome Measurement (ICHOM) was founded which has defined global Standard Sets for measuring the outcome of various treatments. Method: Monitoring of Clinical Outcomes was identified as a pillar of Fortis’ core value of Patient Centricity. The project was started as an in-house developed Clinical Outcomes Reporting Portal by the Fortis Medical IT team. Standard sets of Outcome measurement developed by ICHOM were used. A pilot was run at Fortis Escorts Heart Institute from Aug’13 – Dec’13.Starting Jan’14, it was implemented across 11 hospitals of the group. The scope was hospital-wide and major clinical specialties: Cardiac Sciences, Orthopedics & Joint Replacement were covered. The internally developed portal had its limitations of report generation and also capturing of Patient related outcomes was restricted. A year later, the company provisioned for an ICHOM Certified Software product which could provide a platform for data capturing and reporting to ensure compliance with all ICHOM requirements. Post a year of the launch of the software; Fortis Healthcare has become the 1st Healthcare Provider in Asia to publish Clinical Outcomes data for the Coronary Artery Disease Standard Set comprising of Coronary Artery Bypass Graft and Percutaneous Coronary Interventions) in the public domain. (Jan 2016). Results: This project has helped in firmly establishing a culture of monitoring and reporting Clinical Outcomes across Fortis Hospitals. Given the diverse nature of the healthcare delivery model at Fortis Network, which comprises of hospitals of varying size and specialty-mix and practically covering the entire span of the country, standardization of data collection and reporting methodology is a huge achievement in itself. 95% case reporting was achieved with more than 90% data completion at the end of Phase 1 (March 2016). Post implementation the group now has one year of data from its own hospitals. This has helped identify the gaps and plan towards ways to bridge them and also establish internal benchmarks for continual improvement. Besides the value created for the group includes: 1. Entire Fortis community has been sensitized on the importance of Clinical Outcomes monitoring for patient centric care. Initial skepticism and cynicism has been countered by effective stakeholder engagement and automation of processes. 2. Measuring quality is the first step in improving quality. Data analysis has helped compare clinical results with best-in-class hospitals and identify improvement opportunities. 3. Clinical fraternity is extremely pleased to be part of this initiative and has taken ownership of the project. Conclusion: Fortis Healthcare is the pioneer in the monitoring of Clinical Outcomes. Implementation of ICHOM standards has helped Fortis Clinical Excellence Program in improving patient engagement and strengthening its commitment to its core value of Patient Centricity. Validation and certification of the Clinical Outcomes data by an ICHOM Certified Supplier adds confidence to its claim of being leaders in this space.Keywords: clinical outcomes, healthcare delivery, patient centricity, ICHOM
Procedia PDF Downloads 239
42 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, which is still underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system, is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability for the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulates the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record heave and pitch amplitudes of the floating system’s motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system’s performance in real time. The data collected from various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In long term, this digital twin will help improve overall solar energy yield whilst minimising the operational costs and risks.Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
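A minimal sketch of the data-driven core of such a twin, assuming a logged sensor table: a small neural network maps sensed conditions to PV power output, is trained on part of the log and validated on the rest. Column and file names are assumed.

```python
# Illustrative sketch of the ANN model at the heart of the digital twin.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, r2_score

log = pd.read_csv("floating_pv_sensor_log.csv")
features = ["irradiance", "air_temp", "water_temp", "panel_temp",
            "wave_height", "pitch", "heave"]
X, y = log[features], log["pv_power"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                   random_state=0)).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("MAE:", mean_absolute_error(y_te, pred), " R2:", r2_score(y_te, pred))
# In the twin, the trained model runs in sync with the rig, predicting power from
# live sensor readings and flagging drift between prediction and measurement.
```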
Procedia PDF Downloads 14
41 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, social and economic activities. Physic-morphological features of the built-up space affect urban air temperature, locally, causing the urban environment to be warmer compared to surrounding rural. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions of an interpolated point. Quantifying local UHI for extensive areas based on weather stations’ observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature, from fixed weather stations and satellite-derived LST. The approach is structured upon two main steps. First, a GWR model has been set to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data, from Terra satellite, at 1 kilometer of spatial resolution have been employed. Two time periods are considered according to satellite revisit period, i.e. 10:30 am and 9:30 pm. Afterward, the results have been downscaled at 30 meters of spatial resolution by setting a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information as provided by the Landsat mission, in particular the albedo, and Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low- (1 km) and high-resolution (30 meters), have been validated according to a cross-validation that relies on indicators such as R2, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving and RMSE of 0.6 and 0.7 for daytime and night-time, respectively.Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
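The first (1 km) step can be sketched as a minimal geographically weighted regression in which air temperature at the stations is regressed on MODIS LST with a Gaussian distance kernel, so the coefficients vary in space; the bandwidth, projection and file names below are assumed, and the second (30 m) step repeats the same scheme with albedo and elevation as predictors.

```python
# Minimal GWR sketch: locally weighted least squares of station air temperature
# on satellite LST, evaluated at target grid cells. Inputs are hypothetical.
import numpy as np

def gwr_predict(coords, y, x, target_coords, target_x, bandwidth_m=25_000.0):
    """At each target point, fit y ~ b0 + b1 * x with Gaussian distance weights."""
    preds = np.empty(len(target_coords))
    X = np.column_stack([np.ones_like(x), x])
    for i, (c, xt) in enumerate(zip(target_coords, target_x)):
        d = np.linalg.norm(coords - c, axis=1)          # distances to stations [m]
        w = np.exp(-0.5 * (d / bandwidth_m) ** 2)       # Gaussian kernel weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)      # weighted least squares
        preds[i] = beta[0] + beta[1] * xt
    return preds

stations_xy = np.loadtxt("stations_xy.txt")     # projected station coordinates [m], shape (n, 2)
t_air = np.loadtxt("stations_tair.txt")         # observed air temperature at stations
lst_stations = np.loadtxt("stations_lst.txt")   # MODIS LST sampled at stations
grid_xy = np.loadtxt("grid_xy.txt")             # 1 km grid cell centres
grid_lst = np.loadtxt("grid_lst.txt")           # MODIS LST per grid cell

nsat_1km = gwr_predict(stations_xy, t_air, lst_stations, grid_xy, grid_lst)
```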
Procedia PDF Downloads 196
40 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced from the heating process. Particle size distribution (PSD) and associated velocities generated by e-cigarettes have significant influence on aerosol deposition in different regions of human respiratory tracts. On another note, low actuation power is beneficial in aerosol generating devices since it exhibits a reduced emission of toxic chemicals. In case of e-cigarettes, lower heating powers can be considered as powers lower than 10 W compared to a wide range of powers (0.6 to 70.0 W) studied in literature. Due to the importance regarding inhalation risk reduction, deeper understanding of particle size characteristics of e-cigarettes demands thorough investigation. However, comprehensive study on PSD and velocities of e-cigarettes with a standard testing condition at relatively low heating powers is still lacking. The present study aims to measure particle number count and size distribution of undiluted aerosols of a latest fourth-generation e-cigarette at low powers, within 6.5 W using real-time particle counter (time-of-flight method). Also, temporal and spatial evolution of particle size and velocity distribution of aerosol jets are examined using phase Doppler anemometry (PDA) technique. To the authors’ best knowledge, application of PDA in e-cigarette aerosol measurement is rarely reported. In the present study, preliminary results about particle number count of undiluted aerosols measured by time-of-flight method depicted that an increase of heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetricity in PSD, deviating from log-normal distribution. This can be considered as an artifact of rapid vaporization, condensation and coagulation processes on aerosols caused by higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, was proposed to describe asymmetric PSD successfully. The value of count median aerodynamic diameter and geometric standard deviation laid within a range of about 0.67 μm to 0.73 μm, and 1.32 to 1.43, respectively while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurement suggested a typical centerline streamwise mean velocity decay of aerosol jet along with a reduction of particle sizes. In the final submission, a thorough literature review, detailed description of experimental procedure and discussion of the results will be provided. Particle size and turbulent characteristics of aerosol jets will be further examined, analyzing arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications in PSD simulation and validation of aerosol dosimetry model, leading to improving related aerosol generating devices.Keywords: E-cigarette aerosol, laser doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
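The abstract does not give the exact functional form of the proposed EGP distribution, so the sketch below uses an assumed exponential-Gaussian-polynomial combination purely to illustrate how such a fit could be performed on a measured size histogram; the data file and initial guesses are hypothetical.

```python
# Assumed EGP form fitted to a particle-size histogram with scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def egp(d, a, k, b, mu, sigma, c0, c1, c2):
    """Assumed EGP form: exponential + Gaussian + quadratic polynomial terms."""
    return (a * np.exp(-k * d)
            + b * np.exp(-0.5 * ((d - mu) / sigma) ** 2)
            + c0 + c1 * d + c2 * d ** 2)

# Histogram of aerosol diameters from the time-of-flight counter (bin midpoints, counts).
d_mid, counts = np.loadtxt("psd_histogram_3p5W.txt", unpack=True)
density = counts / counts.sum()                            # normalise to relative frequency

p0 = [1, 1, 1, 0.7, 0.2, 0, 0, 0]                          # rough initial guesses
params, _ = curve_fit(egp, d_mid, density, p0=p0, maxfev=20000)
print("fitted EGP parameters:", np.round(params, 4))
```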
Procedia PDF Downloads 49
39 Comparing Practices of Swimming in the Netherlands against a Global Model for Integrated Development of Mass and High Performance Sport: Perceptions of Coaches
Authors: Melissa de Zeeuw, Peter Smolianov, Arnold Bohl
Abstract:
This study was designed to help improve international performance as well as increase swimming participation in the Netherlands. Over 200 sources of literature on sport delivery systems from 28 Australasian, North and South American, Western and Eastern European countries were analyzed to construct a globally applicable model of high performance swimming integrated with mass participation, comprising the following seven elements across three levels: Micro level (operations, processes, and methodologies for development of individual athletes): 1. Talent search and development, 2. Advanced athlete support. Meso level (infrastructures, personnel, and services enabling sport programs): 3. Training centers, 4. Competition systems, 5. Intellectual services. Macro level (socio-economic, cultural, legislative, and organizational): 6. Partnerships with supporting agencies, 7. Balanced and integrated funding and structures of mass and elite sport. This model emerged from the integration of instruments that have been used to analyse and compare national sport systems. The model has received scholarly validation and has proven to be a framework for program analysis that is not culturally bound. It has recently been accepted as a model for further understanding North American sport systems, including (in chronological order of publication) US rugby, tennis, soccer, swimming and volleyball. The above model was used to design a questionnaire of 42 statements reflecting desired practices. The statements were validated by 12 international experts, including executives from sport governing bodies, academics who have published on high performance and sport development, and swimming coaches and administrators. In this study, both highly structured and open-ended qualitative analysis tools were used. This included a survey of swim coaches where open responses accompanied structured questions. After collection of the surveys, semi-structured discussions with Federation coaches were conducted to add triangulation to the findings. Lastly, a content analysis of Dutch Swimming's website and organizational documentation was conducted. A representative sample of 1,600 Dutch swim coaches and administrators was drawn via email addresses from the Royal Dutch Swimming Federation's database. Fully completed questionnaires were returned by 122 coaches from all of the country's key regions, for a response rate of 7.63% - higher than the response rate of the previously mentioned US studies, which used the same model and method. Results suggest possible enhancements at the macro level (e.g., greater public and corporate support to prepare and hire more coaches and to address the lack of facilities, funding and publicity at the mass participation level in order to make swimming affordable for all), at the meso level (e.g., comprehensive education for all coaches and a full spectrum of swimming pools, particularly 50-meter pools), and at the micro level (e.g., better preparation of athletes for a future outside swimming and better use of swimmers to stimulate swimming development). Best Dutch swimming management practices (e.g., comprehensive support to the most talented swimmers who win Olympic medals) as well as relevant international practices available for transfer to the Netherlands (e.g., high school competitions) are discussed. Keywords: sport development, high performance, mass participation, swimming
Procedia PDF Downloads 206
38 Potential of Hyperion (EO-1) Hyperspectral Remote Sensing for Detection and Mapping Mine-Iron Oxide Pollution
Authors: Abderrazak Bannari
Abstract:
Acid Mine Drainage (AMD) from mine wastes and contaminations of soils and water with metals are considered as a major environmental problem in mining areas. It is produced by interactions of water, air, and sulphidic mine wastes. This environment problem results from a series of chemical and biochemical oxidation reactions of sulfide minerals e.g. pyrite and pyrrhotite. These reactions lead to acidity as well as the dissolution of toxic and heavy metals (Fe, Mn, Cu, etc.) from tailings waste rock piles, and open pits. Soil and aquatic ecosystems could be contaminated and, consequently, human health and wildlife will be affected. Furthermore, secondary minerals, typically formed during weathering of mine waste storage areas when the concentration of soluble constituents exceeds the corresponding solubility product, are also important. The most common secondary mineral compositions are hydrous iron oxide (goethite, etc.) and hydrated iron sulfate (jarosite, etc.). The objectives of this study focus on the detection and mapping of MIOP in the soil using Hyperion EO-1 (Earth Observing - 1) hyperspectral data and constrained linear spectral mixture analysis (CLSMA) algorithm. The abandoned Kettara mine, located approximately 35 km northwest of Marrakech city (Morocco) was chosen as study area. During 44 years (from 1938 to 1981) this mine was exploited for iron oxide and iron sulphide minerals. Previous studies have shown that Kettara surrounding soils are contaminated by heavy metals (Fe, Cu, etc.) as well as by secondary minerals. To achieve our objectives, several soil samples representing different MIOP classes have been resampled and located using accurate GPS ( ≤ ± 30 cm). Then, endmembers spectra were acquired over each sample using an Analytical Spectral Device (ASD) covering the spectral domain from 350 to 2500 nm. Considering each soil sample separately, the average of forty spectra was resampled and convolved using Gaussian response profiles to match the bandwidths and the band centers of the Hyperion sensor. Moreover, the MIOP content in each sample was estimated by geochemical analyses in the laboratory, and a ground truth map was generated using simple Kriging in GIS environment for validation purposes. The acquired and used Hyperion data were corrected for a spatial shift between the VNIR and SWIR detectors, striping, dead column, noise, and gain and offset errors. Then, atmospherically corrected using the MODTRAN 4.2 radiative transfer code, and transformed to surface reflectance, corrected for sensor smile (1-3 nm shift in VNIR and SWIR), and post-processed to remove residual errors. Finally, geometric distortions and relief displacement effects were corrected using a digital elevation model. The MIOP fraction map was extracted using CLSMA considering the entire spectral range (427-2355 nm), and validated by reference to the ground truth map generated by Kriging. The obtained results show the promising potential of the proposed methodology for the detection and mapping of mine iron oxide pollution in the soil.Keywords: hyperion eo-1, hyperspectral, mine iron oxide pollution, environmental impact, unmixing
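As a hedged illustration of the unmixing step, the sketch below implements a fully constrained linear spectral mixture analysis (non-negative abundances summing to one) using an augmented non-negative least-squares solve; the endmember matrix and mixed pixel are synthetic stand-ins, not the Kettara ASD/Hyperion data.

```python
import numpy as np
from scipy.optimize import nnls

def clsma_unmix(endmembers, pixel, weight=1e3):
    """Fully constrained linear spectral unmixing (abundances >= 0, sum to 1).
    endmembers: (n_bands, n_endmembers) matrix of reference spectra
    pixel:      (n_bands,) observed reflectance spectrum
    The sum-to-one constraint is enforced softly by appending a heavily
    weighted row of ones, then solving a non-negative least-squares problem."""
    n_bands, n_end = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, n_end))])
    b = np.append(pixel, weight * 1.0)
    abundances, _ = nnls(A, b)
    return abundances

# Toy example with 3 hypothetical endmembers over 50 bands; in the study the
# endmembers would be ASD field spectra resampled to the Hyperion bandpasses.
rng = np.random.default_rng(1)
E = np.abs(rng.normal(0.3, 0.1, size=(50, 3)))
true_frac = np.array([0.6, 0.3, 0.1])                 # e.g. iron-oxide-rich soil dominant
obs = E @ true_frac + rng.normal(0, 0.005, size=50)   # noisy mixed pixel
print(np.round(clsma_unmix(E, obs), 3))
```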
Procedia PDF Downloads 229
37 Establishment of Farmed Fish Welfare Biomarkers Using an Omics Approach
Authors: Pedro M. Rodrigues, Claudia Raposo, Denise Schrama, Marco Cerqueira
Abstract:
Farmed fish welfare is a very recent concept, widely discussed among the scientific community. Consumers' interest in farmed animal welfare standards has significantly increased in recent years, posing a huge challenge to producers to maintain an equilibrium between good welfare principles and productivity while simultaneously achieving public acceptance. A major bottleneck of standard aquaculture is that it can considerably impair fish welfare throughout the production cycle and, with it, the quality of fish protein. Welfare assessment in farmed fish is undertaken through the evaluation of fish stress responses. Primary and secondary stress responses include the release of cortisol and of glucose and lactate into the bloodstream, respectively, which are currently the most commonly used indicators of stress exposure. However, the reliability of these indicators is highly dubious, due to the high variability of fish responses to an acute stress and the adaptation of the animal to a repetitive chronic stress. Our objective is to use comparative proteomics to identify and validate a fingerprint of proteins that can present a more reliable alternative to the already established welfare indicators. In this way, culture conditions will improve and there will be a better understanding of the mechanisms and metabolic pathways involved in the welfare of the produced organisms. Due to its high economic importance in Portuguese aquaculture, gilthead seabream is the species selected for this study. Protein extracts from gilthead seabream muscle, liver and plasma, from fish reared for a 3-month period under optimized culture conditions (control) and induced stress conditions (handling, high densities, and hypoxia), are collected and used to identify a putative fingerprint of fish welfare protein markers using a proteomics approach. Three tanks per condition and 3 biological replicates per tank are used for each analysis. Briefly, proteins from the target tissue/fluid are extracted using standard established protocols. Protein extracts are then separated using 2D-DIGE (difference gel electrophoresis). Proteins differentially expressed between control and induced stress conditions will be identified by mass spectrometry (LC-MS/MS) using the NCBInr (taxonomic level: Actinopterygii) database and the Mascot search engine. The statistical analysis is performed using the R software environment, with a one-tailed Mann-Whitney U-test (p < 0.05) to assess which proteins were differentially expressed in a statistically significant way. Validation of these proteins will be done by comparing the RT-qPCR (quantitative reverse transcription polymerase chain reaction) gene expression patterns with the proteomic profile. Cortisol, glucose, and lactate are also measured in order to confirm or refute the reliability of these indicators. The liver proteins identified under handling- and high-density-induced stress conditions are involved in several metabolic pathways and functions, such as primary metabolism (i.e., glycolysis, gluconeogenesis), ammonia metabolism, cytoskeletal proteins, signaling proteins, and lipid transport. Validation of these proteins, as well as identical analyses in muscle and plasma, is underway. Proteomics is a promising high-throughput technique that can be successfully applied to identify putative welfare protein biomarkers in farmed fish. Keywords: aquaculture, fish welfare, proteomics, welfare biomarkers
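A minimal sketch of the statistical test described above (one-tailed Mann-Whitney U-test on per-replicate spot abundances) is given below in Python rather than R; the spot volumes are invented for illustration, not gel data from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Normalized spot volumes of one 2D-DIGE protein spot, control vs. stressed fish.
# Values are invented; real inputs are the per-replicate abundances exported
# from the gel analysis software (3 tanks x 3 biological replicates).
control  = np.array([1.02, 0.95, 1.10, 0.98, 1.05, 0.99, 1.08, 1.01, 0.97])
stressed = np.array([1.35, 1.22, 1.41, 1.18, 1.30, 1.27, 1.44, 1.25, 1.38])

# One-tailed test: is the spot more abundant under induced stress?
u_stat, p_value = mannwhitneyu(stressed, control, alternative="greater")
verdict = "differentially expressed" if p_value < 0.05 else "not significant"
print(f"U = {u_stat:.1f}, p = {p_value:.4f} -> {verdict}")
```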
Procedia PDF Downloads 157
36 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World
Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber
Abstract:
Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to be flatter. This is attributed to rapid globalization and the interdependence of humanity that engendered tremendous in-flow of human migration towards the urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high definition satellite images, high resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis; urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step of understanding urban space lies in useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show applicability of the methodology on a developing world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state-of-the-art is mostly dominated by classification of building structures, building types, etc. and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability and are slow to compute in real time. Our proposed method is divided into two steps-categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert and consequently a map was drawn. The categorization is based broadly on two dimensions-the state of urbanization and the architectural form of urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google’s DeeplabV3+ model was used. The model uses Atrous convolution operation to analyze different layers of texture and shape. This allows us to enlarge the field of view of the filters to incorporate larger context. Image encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% Mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing and developed world context. The method can be augmented for real-time socio-economic comparative analysis among cities. It can be an essential tool for the policy makers to plan future sustainable urban spaces.Keywords: semantic segmentation, urban environment, deep learning, urban building, classification
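The two reported metrics (pixel accuracy and mean Intersection over Union) can be computed as in the sketch below; the label maps are random stand-ins rather than the Dhaka annotations, and the four classes mirror the four urban categories only for illustration.

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes):
    """Mean Intersection over Union for a semantic segmentation map.
    y_true, y_pred: integer label arrays of identical shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

def pixel_accuracy(y_true, y_pred):
    return float((y_true == y_pred).mean())

# Toy 4-class example standing in for the four urban categories
# (highly/moderately informal, moderately/highly formal).
rng = np.random.default_rng(42)
gt = rng.integers(0, 4, size=(256, 256))
pred = gt.copy()
noise = rng.random(gt.shape) < 0.25        # corrupt 25% of pixels
pred[noise] = rng.integers(0, 4, size=noise.sum())

print(f"accuracy = {pixel_accuracy(gt, pred):.2f}, mIoU = {mean_iou(gt, pred, 4):.2f}")
```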
Procedia PDF Downloads 192
35 Multiphysics Coupling Between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Launchers
Authors: Margarita Dufresne
Abstract:
This study is devoted to the development of a thermal protection system (TPS) for small reusable space launchers. We have used the SIRIUS design for the S1 prototype. Multiphysics coupling between hypersonic reactive flow and thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ and COMSOL Multiphysics, and by FASTRAN and ACE+. Flow around hypersonic flight vehicles is characterized by the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real-gas treatment implies modeling the gas in equilibrium or non-equilibrium. Mach numbers range from 5 to 10 for first-stage flight. The goals of this effort are to provide validation of the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal-structural analysis to simulate conjugate heat transfer, with conduction, free convection and radiation driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O and O2. Seventeen chemical reactions, involving dissociation and recombination, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithm employed to solve the reactive equations uses a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with the coupled inviscid flux computed by AUSM+ flux-vector splitting. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order, and provides improved (i.e., reduced) dissipation compared to the second-order discretization scheme. The initial unstructured mesh is refined using a pressure-gradient technique for the shock/shock interaction test case. The turbulence models suggested by NASA are the K-Omega SST model with a1 = 0.355 and QCR (quadratic) as the constitutive option. k and omega are specified explicitly in the initial conditions and in regions: k = 1e-6 * Uinf^2 and omega = 5 * Uinf / (mean aerodynamic chord or characteristic length). We put into practice modeling guidelines for hypersonic flow such as an automatic coupled solver, adaptive mesh refinement to capture and refine the shock front, and use of the advancing layer mesher with a larger prism layer thickness to capture the shock front on blunt surfaces. Temperatures range from 300 K to 30,000 K and pressures from 1e-4 to 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for hot hypersonic reactive flow and conjugate heat transfer. The results of both approaches agree with the CIRCA wind tunnel results. Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, fastran, ace+, comsol multiphysics, star-ccm+, thermal protection system (tps), space launcher, wind tunnel
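The modified Arrhenius rate evaluation and the free-stream turbulence initialization mentioned above can be illustrated as follows; the reaction coefficients and the free-stream velocity/length are placeholder values, not the actual Dunn/Kang constants or SIRIUS flight conditions.

```python
import numpy as np

def forward_rate(T, A, n, Ta):
    """Modified Arrhenius forward rate coefficient k_f = A * T**n * exp(-Ta / T),
    with Ta the activation temperature (activation energy / gas constant)."""
    return A * T**n * np.exp(-Ta / T)

# Placeholder coefficients for one dissociation reaction (N2 + M -> 2N + M).
# Illustrative values only, NOT the published Dunn/Kang constants.
A, n, Ta = 3.0e22, -1.6, 113200.0   # assumed units: cm^3 mol^-1 s^-1, -, K

for T in (2000.0, 6000.0, 10000.0):
    print(f"T = {T:7.0f} K  ->  k_f = {forward_rate(T, A, n, Ta):.3e}")

# Free-stream-based turbulence initialization quoted in the abstract:
U_inf, L_ref = 3000.0, 1.0          # m/s, m (assumed characteristic values)
k_init = 1e-6 * U_inf**2
omega_init = 5.0 * U_inf / L_ref
print(f"k = {k_init:.3e} m^2/s^2, omega = {omega_init:.3e} 1/s")
```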
Procedia PDF Downloads 72
34 Numerical Modeling of Phase Change Materials Walls under Reunion Island's Tropical Weather
Authors: Lionel Trovalet, Lisa Liu, Dimitri Bigot, Nadia Hammami, Jean-Pierre Habas, Bruno Malet-Damour
Abstract:
The MCP-iBAT¹ project is carried out to study the behavior of Phase Change Materials (PCM) integrated in building envelopes in a tropical environment. Through the phase transitions (melting and freezing) of the material, thermal energy can be absorbed or released. This process enables the regulation of indoor temperatures and the improvement of thermal comfort for the occupants. Most of the commercially available PCMs are more suitable to temperate climates than to tropical climates. The case of Reunion Island is noteworthy as there are multiple micro-climates. This leads to our key question: should one or multiple bio-based PCMs be developed to cover the thermal needs of the different locations of the island? The present paper focuses on the numerical approach to select the PCM properties relevant to tropical areas. Numerical simulations have been carried out with two software tools: EnergyPlus and Isolab. The latter has been developed in the laboratory, with the implicit finite difference method, in order to evaluate different physical models. Both are thermal dynamic simulation (TDS) tools that predict the building's thermal behavior with one-dimensional heat transfers. The parameters used in this study are the construction's characteristics (dimensions and materials) and the environment's description (meteorological data and building surroundings). The building is modeled in accordance with the experimental setup. It is divided into two rooms, cells A and B, with the same dimensions. Cell A is the reference, while in cell B, a layer of commercial PCM (Thermo Confort from MCI Technologies) has been applied to the inner surface of the north wall. Sensors are installed in each room to retrieve temperatures, heat flows, and humidity rates. The collected data are used for the comparison with the numerical results. Our strategy is to set up two similar buildings at different altitudes (Saint-Pierre: 70 m and Le Tampon: 520 m) to measure different temperature ranges. Therefore, we are able to collect data for various seasons during a condensed time period. The following methodology is used to validate the numerical models: calibration of the thermal and PCM models in EnergyPlus and Isolab based on experimental measurements, then numerical testing with a sensitivity analysis of the parameters to reach the targeted indoor temperatures. The calibration relies on the past ten months of measurements (from September 2020 to June 2021), with a focus on a one-week study in November (the beginning of summer), when the effect of the PCM on inner surface temperatures is more visible. A first simulation with the PCM model of EnergyPlus gave results approaching the measurements with a mean error of 5%. The property studied in this paper is the melting temperature of the PCM. By determining the representative temperatures of winter, summer and the inter-seasons from past annual weather data, it is possible to build a numerical model of a multi-layered PCM. Hence, the combined properties of the materials will provide an optimal scenario for the application of PCM in tropical areas. Future works will focus on the development of bio-based PCMs with the selected properties, followed by experimental and numerical validation of the materials. ¹Matériaux à Changement de Phase, une innovation pour le Bâti Tropical (Phase Change Materials, an innovation for tropical buildings). Keywords: energyplus, multi-layer of PCM, phase changing materials, tropical area
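Below is a minimal sketch of the calibration indicator, assuming the reported 5% figure is a mean relative error between simulated and measured inner-surface temperatures (the abstract does not state the exact definition); the temperature series are invented for illustration.

```python
import numpy as np

def mean_relative_error(simulated, measured):
    """Mean relative error (%) between simulated and measured series, used here
    as one possible calibration indicator for the thermal/PCM models."""
    simulated, measured = np.asarray(simulated, float), np.asarray(measured, float)
    return 100.0 * np.mean(np.abs(simulated - measured) / np.abs(measured))

# Hourly inner-surface temperatures (deg C) over one illustrative day; values
# are made up -- real series come from cell B sensors and the EnergyPlus/Isolab runs.
measured  = [26.1, 25.8, 25.5, 25.3, 25.2, 25.4, 26.0, 27.1, 28.3, 29.2,
             29.9, 30.4, 30.6, 30.5, 30.1, 29.6, 29.0, 28.4, 27.9, 27.4,
             27.0, 26.7, 26.4, 26.2]
simulated = [25.7, 25.4, 25.2, 25.0, 25.0, 25.3, 26.2, 27.6, 28.9, 29.8,
             30.5, 31.0, 31.1, 30.9, 30.4, 29.8, 29.1, 28.4, 27.8, 27.2,
             26.8, 26.4, 26.1, 25.9]

print(f"mean relative error = {mean_relative_error(simulated, measured):.1f} %")
```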
Procedia PDF Downloads 95
33 Finite Element Simulation of Four Point Bending of Laminated Veneer Lumber (LVL) Arch
Authors: Eliska Smidova, Petr Kabele
Abstract:
This paper describes non-linear finite element simulation of laminated veneer lumber (LVL) under tensile and shear loads that induce cracking along fibers. For this purpose, we use 2D homogeneous orthotropic constitutive model of tensile and shear fracture in timber that has been recently developed and implemented into ATENA® finite element software by the authors. The model captures (i) material orthotropy for small deformations in both linear and non-linear range, (ii) elastic behavior until anisotropic failure criterion is fulfilled, (iii) inelastic behavior after failure criterion is satisfied, (iv) different post-failure response for cracks along and across the grain, (v) unloading/reloading behavior. The post-cracking response is treated by fixed smeared crack model where Reinhardt-Hordijk function is used. The model requires in total 14 input parameters that can be obtained from standard tests, off-axis test results and iterative numerical simulation of compact tension (CT) or compact tension-shear (CTS) test. New engineered timber composites, such as laminated veneer lumber (LVL), offer improved structural parameters compared to sawn timber. LVL is manufactured by laminating 3 mm thick wood veneers aligned in one direction using water-resistant adhesives (e.g. polyurethane). Thus, 3 main grain directions, namely longitudinal (L), tangential (T), and radial (R), are observed within the layered LVL product. The core of this work consists in 3 numerical simulations of experiments where Radiata Pine LVL and Yellow Poplar LVL were involved. The first analysis deals with calibration and validation of the proposed model through off-axis tensile test (at a load-grain angle of 0°, 10°, 45°, and 90°) and CTS test (at a load-grain angle of 30°, 60°, and 90°), both of which were conducted for Radiata Pine LVL. The second finite element simulation reproduces load-CMOD curve of compact tension (CT) test of Yellow Poplar with the aim of obtaining cohesive law parameters to be used as an input in the third finite element analysis. That is four point bending test of small-size arch of 780 mm span that is made of Yellow Poplar LVL. The arch is designed with a through crack between two middle layers in the crown. Curved laminated beams are exposed to high radial tensile stress compared to timber strength in radial tension in the crown area. Let us note that in this case the latter parameter stands for tensile strength in perpendicular direction with respect to the grain. Standard tests deliver most of the relevant input data whereas traction-separation law for crack along the grain can be obtained partly by inverse analysis of compact tension (CT) test or compact tension-shear test (CTS). The initial crack was modeled as a narrow gap separating two layers in the middle the arch crown. Calculated load-deflection curve is in good agreement with the experimental ones. Furthermore, crack pattern given by numerical simulation coincides with the most important observed crack paths.Keywords: compact tension (CT) test, compact tension shear (CTS) test, fixed smeared crack model, four point bending test, laminated arch, laminated veneer lumber LVL, off-axis test, orthotropic elasticity, orthotropic fracture criterion, Radiata Pine LVL, traction-separation law, yellow poplar LVL, 2D constitutive model
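The post-cracking response mentioned above uses the Reinhardt-Hordijk softening function; a sketch of the commonly used form of that traction-separation law is given below, with illustrative (assumed) strength and fracture-energy values rather than the calibrated LVL parameters.

```python
import math

def hordijk_softening(w, f_t, G_f, c1=3.0, c2=6.93):
    """Reinhardt-Hordijk exponential softening law: bridging stress sigma(w) for
    crack opening w, tensile strength f_t and fracture energy G_f.
    The critical opening w_c = 5.14 * G_f / f_t and the constants c1, c2 are the
    commonly used values; treat them as assumptions, not the paper's calibration."""
    w_c = 5.14 * G_f / f_t
    if w >= w_c:
        return 0.0
    x = w / w_c
    return f_t * ((1.0 + (c1 * x) ** 3) * math.exp(-c2 * x)
                  - x * (1.0 + c1 ** 3) * math.exp(-c2))

# Illustrative parameters for a crack along the grain in an LVL layer
# (assumed values, not the calibrated inputs of the study):
f_t = 4.0e6    # tensile strength normal to the crack plane [Pa]
G_f = 300.0    # fracture energy [J/m^2]
for w in (0.0, 5e-6, 5e-5, 2e-4, 3.5e-4):
    print(f"w = {w:8.1e} m  ->  sigma = {hordijk_softening(w, f_t, G_f) / 1e6:6.3f} MPa")
```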
Procedia PDF Downloads 290
32 De-convolution Based IVIVC Correlation for Tacrolimus ER Tablet (Narrow Therapeutic Index Drug) With Widening of Dissolution Prediction for Virtual Bioequivalence
Authors: Sajad Khaliq Dar, Dipanjan Goswami, Arshad H. Khuroo, Mohd. Akhtar, Pulak Kumar Metia, Sudershan Kumar
Abstract:
Background: Development of modified-release oral dosage formulations (OSD) like tacrolimus in narrow therapeutic categories, together with high levels of intra-individual variability, impose greater challenges. The risk assessment for bioequivalence studies requires developing a suitable design through pilot studies involving the comparison of multiple formulations of the same product with a marketed product to understand the in-vivo behaviour. These formulations could have varying coating levels and other minor quantitative differences to achieve the desired release rate for the final product. Although small-scale studies are critical before the conduct of full-scale Pharmacokinetic (PK) studies, regulatory agencies evaluate critical bioavailability attributes (CBA) before approving the submitted dossiers. Since Tacrolimus is a BCS Class II drug, therefore developing the extended-release formulation, in addition to associated challenges, provides an opportunity to present the In vitro-in vivo correlations (IVIVC) to regulatory agencies, not only to exhibit product quality but also to reduce the burden of additional human trials and cost involved to them for bringing the product to market. Objective: The objective of this study was to develop a Level-A In vitro - In vivo Correlation (IVIVC) model for Sun Pharma’s test formulation Tacrolimus ER tablet 4mg and extend its application to a widened dissolution window of 25% at 2.5 hours (critical release time) sampling time point. Experimental Procedure: Post the conduct of two in-vivo studies, a pilot study evaluating two test prototypes on 24 subjects (under fasting) and a pivotal study having 50 subjects (under fasting), the observed pharmacokinetic profile was used for IVIVC model development. The dissolution media used was 0.005% HPC + 0.25% SLS in Water 900 mL at pH 4.50 using USP II (Paddle) apparatus with alternative sinkers operated at 100 RPM. The sampling time points were chosen to mimic the drug absorption in vivo. The dissolution best fit to data was obtained using Makoid Banakar kinetics. Then deconvolution, anchoring to concepts of the single compartment by Wagner Nelson method was applied for tacrolimus slow-release formulation batch with film coating weight build-up of 5.4% (used in pilot bio study), medium release with Hypromellose (retard-release exhibit batch used in the pivotal study) and fast release formulation batch with film coating weight build-up of 5.05% (used in pilot bio study). Results and Conclusion: The results were deemed acceptable as prediction errors for internal and external validation were < 3% depicting in-vitro drug release mimics in-vivo absorption. Moreover, the prediction result for the Test/Reference ratio was <15% for all test formulations and widening dissolution (i.e., 39%-64% drug release at 2.5hrs) predictions were well within 80-125% when compared against Envarsus XR (reference drug). This IVIVC-validated model can be used in the futuristic exploration of dose titration with 1mg tacrolimus ER OSD as a surrogate for In-vivo bioequivalence trials.Keywords: pharmacokinetics, BCS, oral dosage form, Bioavailability, intra-individual variability
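A minimal sketch of the Wagner-Nelson deconvolution step under the stated one-compartment assumption is shown below; the concentration-time profile and elimination rate constant are invented for illustration and are not the pilot or pivotal study data.

```python
import numpy as np

def wagner_nelson(t, conc, ke):
    """Fraction absorbed vs. time by the Wagner-Nelson method (one-compartment
    assumption): Fa(t) = (C(t) + ke * AUC_0^t) / (ke * AUC_0^inf)."""
    t, conc = np.asarray(t, float), np.asarray(conc, float)
    # cumulative trapezoidal AUC from time zero to each sampling time
    auc_t = np.concatenate(([0.0], np.cumsum(np.diff(t) * (conc[1:] + conc[:-1]) / 2.0)))
    auc_inf = auc_t[-1] + conc[-1] / ke       # simple extrapolation of the tail
    return (conc + ke * auc_t) / (ke * auc_inf)

# Illustrative extended-release-like profile (ng/mL); numbers are made up.
t    = np.array([0, 1, 2, 3, 4, 6, 8, 12, 16, 24, 36, 48])                     # h
conc = np.array([0, 2.1, 5.8, 8.9, 10.2, 9.6, 8.1, 5.9, 4.3, 2.4, 1.1, 0.5])   # ng/mL
ke = 0.08                                                                      # 1/h, assumed

for ti, fa in zip(t, wagner_nelson(t, conc, ke)):
    print(f"t = {ti:4.0f} h   Fa = {fa:5.2f}")
```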
Procedia PDF Downloads 6
31 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition
Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan
Abstract:
Indian traffic can be considered as mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. The field data provides evidence that the trajectory of vehicles in Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway which is widely used for homogeneous traffic simulation is not well defined in conditions lacking lane discipline. From field data it is clear that following is not strict as in homogeneous and lane disciplined conditions and neighbouring vehicles ahead of a given vehicle and those adjacent to it could also influence the subject vehicles choice of position, speed and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and needs to be modified appropriately. To address these issues, this paper attempts to analyze the time gap distribution between consecutive vehicles (in a time-sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre types, and lateral placement characteristics on time gap distribution is quantified in this paper. The developed model is used for evaluating various performance measures such as link speed, midblock delay and intersection delay which further helps to characterise the vehicular fuel consumption and emission on urban roads of India. Identifying and analyzing exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models and quantify it by lead lag pair, manoeuvre type and lateral position characteristics in heterogeneous non-lane based traffic. Once the modelling scheme is developed, this can be used for estimating the vehicle kilometres travelled for the entire traffic system which helps to determine the vehicular fuel consumption and emission. The approach to this objective involves: data collection, statistical modelling and parameter estimation, simulation using calibrated time-gap distribution and its validation, empirical analysis of simulation result and associated traffic flow parameters, and application to analyze illustrative traffic policies. In particular, video graphic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprises of vehicle type, vehicle position (both longitudinal and lateral), speed and time gap. Statistical tests are carried out to compare the simulated data with the actual data and the model performance is evaluated. The effect of integration of above mentioned factors in vehicle generation is studied by comparing the performance measures like density, speed, flow, capacity, area occupancy etc under various traffic conditions and policies. The implications of the quantified distributions and simulation model for estimating the PCU (Passenger Car Units), capacity and level of service of the system are also discussed.Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models
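As a hedged illustration of the modeling scheme, the sketch below fits a lognormal time-gap distribution separately for a few lead-lag vehicle-class pairs and checks the fit with a Kolmogorov-Smirnov test; the gap samples are synthetic, and the study may well select other distribution families after its statistical tests.

```python
import numpy as np
from scipy import stats

# Time gaps (s) between successive vehicles crossing the section, grouped by the
# lead-lag vehicle-class pair extracted from video; the values below are synthetic.
rng = np.random.default_rng(7)
gaps = {
    ("car", "car"):          rng.lognormal(np.log(1.8), 0.60, 400),
    ("car", "two-wheeler"):  rng.lognormal(np.log(1.2), 0.70, 400),
    ("two-wheeler", "car"):  rng.lognormal(np.log(1.5), 0.65, 400),
}

for pair, sample in gaps.items():
    # Fit a lognormal time-gap model for this lead-lag pair (location fixed at 0),
    # then test goodness of fit before using it to generate vehicles in simulation.
    shape, loc, scale = stats.lognorm.fit(sample, floc=0)
    ks = stats.kstest(sample, "lognorm", args=(shape, loc, scale))
    print(f"{pair}: median gap = {scale:.2f} s, sigma = {shape:.2f}, "
          f"KS p-value = {ks.pvalue:.2f}")
```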
Procedia PDF Downloads 342
30 Development and Experimental Validation of Coupled Flow-Aerosol Microphysics Model for Hot Wire Generator
Authors: K. Ghosh, S. N. Tripathi, Manish Joshi, Y. S. Mayya, Arshad Khan, B. K. Sapra
Abstract:
We have developed a CFD coupled aerosol microphysics model in the context of aerosol generation from a glowing wire. The governing equations can be solved implicitly for mass, momentum, energy transfer along with aerosol dynamics. The computationally efficient framework can simulate temporal behavior of total number concentration and number size distribution. This formulation uniquely couples standard K-Epsilon scheme with boundary layer model with detailed aerosol dynamics through residence time. This model uses measured temperatures (wire surface and axial/radial surroundings) and wire compositional data apart from other usual inputs for simulations. The model predictions show that bulk fluid motion and local heat distribution can significantly affect the aerosol behavior when the buoyancy effect in momentum transfer is considered. Buoyancy generated turbulence was found to be affecting parameters related to aerosol dynamics and transport as well. The model was validated by comparing simulated predictions with results obtained from six controlled experiments performed with a laboratory-made hot wire nanoparticle generator. Condensation particle counter (CPC) and scanning mobility particle sizer (SMPS) were used for measurement of total number concentration and number size distribution at the outlet of reactor cell during these experiments. Our model-predicted results were found to be in reasonable agreement with observed values. The developed model is fast (fully implicit) and numerically stable. It can be used specifically for applications in the context of the behavior of aerosol particles generated from glowing wire technique and in general for other similar large scale domains. Incorporation of CFD in aerosol microphysics framework provides a realistic platform to study natural convection driven systems/ applications. Aerosol dynamics sub-modules (nucleation, coagulation, wall deposition) have been coupled with Navier Stokes equations modified to include buoyancy coupled K-Epsilon turbulence model. Coupled flow-aerosol dynamics equation was solved numerically and in the implicit scheme. Wire composition and temperature (wire surface and cell domain) were obtained/measured, to be used as input for the model simulations. Model simulations showed a significant effect of fluid properties on the dynamics of aerosol particles. The role of buoyancy was highlighted by observation and interpretation of nucleation zones in the planes above the wire axis. The model was validated against measured temporal evolution, total number concentration and size distribution at the outlet of hot wire generator cell. Experimentally averaged and simulated total number concentrations were found to match closely, barring values at initial times. Steady-state number size distribution matched very well for sub 10 nm particle diameters while reasonable differences were noticed for higher size ranges. Although tuned specifically for the present context (i.e., aerosol generation from hotwire generator), the model can also be used for diverse applications, e.g., emission of particles from hot zones (chimneys, exhaust), fires and atmospheric cloud dynamics.Keywords: nanoparticles, k-epsilon model, buoyancy, CFD, hot wire generator, aerosol dynamics
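Below is a greatly simplified, hedged sketch of the aerosol-dynamics side of such a model: a monodisperse coagulation equation with a flushing (residence-time) term, omitting nucleation, wall deposition, the size distribution and the CFD coupling; all parameter values are assumed orders of magnitude, not the study's inputs.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dN_dt(t, N, K, tau_res):
    """Monodisperse coagulation plus flushing by the carrier flow:
    dN/dt = -K*N^2 - N/tau_res."""
    return -K * N[0] ** 2 - N[0] / tau_res

# Assumed values for illustration: coagulation coefficient of order suitable for
# ~10 nm particles and a residence time set by cell volume / flow rate.
K = 1e-15          # m^3/s
tau_res = 30.0     # s
N0 = 1e14          # initial number concentration, 1/m^3

sol = solve_ivp(dN_dt, (0.0, 120.0), [N0], args=(K, tau_res),
                t_eval=np.linspace(0.0, 120.0, 7), method="LSODA")
for ti, Ni in zip(sol.t, sol.y[0]):
    print(f"t = {ti:6.1f} s   N = {Ni:.3e} 1/m^3")
```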
Procedia PDF Downloads 143
29 Horizontal Cooperative Game Theory in Hotel Revenue Management
Authors: Ririh Rahma Ratinghayu, Jayu Pramudya, Nur Aini Masruroh, Shi-Woei Lin
Abstract:
This research studies pricing strategy in cooperative setting of hotel duopoly selling perishable product under fixed capacity constraint by using the perspective of managers. In hotel revenue management, competitor’s average room rate and occupancy rate should be taken into manager’s consideration in determining pricing strategy to generate optimum revenue. This information is not provided by business intelligence or available in competitor’s website. Thus, Information Sharing (IS) among players might result in improved performance of pricing strategy. IS is widely adopted in the logistics industry, but IS within hospitality industry has not been well-studied. This research put IS as one of cooperative game schemes, besides Mutual Price Setting (MPS) scheme. In off-peak season, hotel manager arranges pricing strategy to offer promotion package and various kinds of discounts up to 60% of full-price to attract customers. Competitor selling homogenous product will react the same, then triggers a price war. Price war which generates lower revenue may be avoided by creating collaboration in pricing strategy to optimize payoff for both players. In MPS cooperative game, players collaborate to set a room rate applied for both players. Cooperative game may avoid unfavorable players’ payoff caused by price war. Researches on horizontal cooperative game in logistics show better performance and payoff for the players, however, horizontal cooperative game in hotel revenue management has not been demonstrated. This paper aims to develop hotel revenue management models under duopoly cooperative schemes (IS & MPS), which are compared to models under non-cooperative scheme too. Each scheme has five models, Capacity Allocation Model; Demand Model; Revenue Model; Optimal Price Model; and Equilibrium Price Model. Capacity Allocation Model and Demand Model employs self-hotel and competitor’s full and discount price as predictors under non-linear relation. Optimal price is obtained by assuming revenue maximization motive. Equilibrium price is observed by interacting self-hotel’s and competitor’s optimal price under reaction equation. Equilibrium is analyzed using game theory approach. The sequence applies for three schemes. MPS Scheme differently aims to optimize total players’ payoff. The case study in which theoretical models are applied observes two hotels offering homogenous product in Indonesia during a year. The Capacity Allocation, Demand, and Revenue Models are built using multiple regression and statistically tested for validation. Case study data confirms that price behaves within demand model in a non-linear manner. IS Models can represent the actual demand and revenue data better than Non-IS Models. Furthermore, IS enables hotels to earn significantly higher revenue. Thus, duopoly hotel players in general, might have reasonable incentives to share information horizontally. During off-peak season, MPS Models are able to predict the optimal equal price for both hotels. However, Nash equilibrium may not always exist depending on actual payoff of adhering or betraying mutual agreement. To optimize performance, horizontal cooperative game may be chosen over non-cooperative game. Mathematical models can be used to detect collusion among business players. 
Empirical testing can be used as policy input for market regulators in preventing unethical business practices that potentially harm societal welfare. Keywords: horizontal cooperative game theory, hotel revenue management, information sharing, mutual price setting
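As a hedged illustration of the equilibrium-price step, the sketch below iterates best-response (reaction) functions for a duopoly with a simplified linear demand and a fixed-capacity constraint; the study's actual demand models are non-linear and estimated from data, and all coefficients here are invented.

```python
def best_response(p_other, a, b, c, capacity):
    """Revenue = p * min(a - b*p + c*p_other, capacity). With an interior optimum
    the unconstrained best response is p* = (a + c*p_other) / (2*b); if demand at
    p* exceeds capacity, the revenue-maximizing price is the one that just fills
    the hotel."""
    p_star = (a + c * p_other) / (2.0 * b)
    if a - b * p_star + c * p_other > capacity:
        p_star = (a + c * p_other - capacity) / b
    return p_star

# Illustrative coefficients (not estimated from the paper's data).
params_A = dict(a=200.0, b=2.0, c=1.0, capacity=150.0)
params_B = dict(a=190.0, b=2.2, c=1.1, capacity=140.0)

pA, pB = 60.0, 60.0                          # starting prices (currency units/night)
for _ in range(200):                         # fixed-point iteration of the reaction equations
    pA_new = best_response(pB, **params_A)
    pB_new = best_response(pA, **params_B)
    if abs(pA_new - pA) < 1e-9 and abs(pB_new - pB) < 1e-9:
        pA, pB = pA_new, pB_new
        break
    pA, pB = pA_new, pB_new

print(f"equilibrium prices: hotel A = {pA:.2f}, hotel B = {pB:.2f}")
```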
Procedia PDF Downloads 289
28 Digital Adoption of Sales Support Tools for Farmers: A Technology Organization Environment Framework Analysis
Authors: Sylvie Michel, François Cocula
Abstract:
Digital agriculture is an approach that exploits information and communication technologies. These encompass data acquisition tools like mobile applications, satellites, sensors, connected devices, and smartphones. Additionally, it involves transfer and storage technologies such as 3G/4G coverage, low-bandwidth terrestrial or satellite networks, and cloud-based systems. Furthermore, embedded or remote processing technologies, including drones and robots for process automation, along with high-speed communication networks accessible through supercomputers, are integral components of this approach. While farm-level adoption studies regarding digital agricultural technologies have emerged in recent years, they remain relatively limited in comparison to other agricultural practices. To bridge this gap, this study delves into understanding farmers' intention to adopt digital tools, employing the technology, organization, environment framework. A qualitative research design encompassed semi-structured interviews, totaling fifteen in number, conducted with key stakeholders both prior to and following the 2020-2021 COVID-19 lockdowns in France. Subsequently, the interview transcripts underwent thorough thematic content analysis, and the data and verbatim were triangulated for validation. A coding process aimed to systematically organize the data, ensuring an orderly and structured classification. Our research extends its contribution by delineating sub-dimensions within each primary dimension. A total of nine sub-dimensions were identified, categorized as follows: perceived usefulness for communication, perceived usefulness for productivity, and perceived ease of use constitute the first dimension; technological resources, financial resources, and human capabilities constitute the second dimension, while market pressure, institutional pressure, and the COVID-19 situation constitute the third dimension. Furthermore, this analysis enriches the TOE framework by incorporating entrepreneurial orientation as a moderating variable. Managerial orientation emerges as a pivotal factor influencing adoption intention, with producers acknowledging the significance of utilizing digital sales support tools to combat "greenwashing" and elevate their overall brand image. Specifically, it illustrates that producers recognize the potential of digital tools in time-saving and streamlining sales processes, leading to heightened productivity. Moreover, it highlights that the intent to adopt digital sales support tools is influenced by a market mimicry effect. Additionally, it demonstrates a negative association between the intent to adopt these tools and the pressure exerted by institutional partners. Finally, this research establishes a positive link between the intent to adopt digital sales support tools and economic fluctuations, notably during the COVID-19 pandemic. The adoption of sales support tools in agriculture is a multifaceted challenge encompassing three dimensions and nine sub-dimensions. The research delves into the adoption of digital farming technologies at the farm level through the TOE framework. This analysis provides significant insights beneficial for policymakers, stakeholders, and farmers. These insights are instrumental in making informed decisions to facilitate a successful digital transition in agriculture, effectively addressing sector-specific challenges.Keywords: adoption, digital agriculture, e-commerce, TOE framework
Procedia PDF Downloads 61
27 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports
Authors: Carolyn Hatch, Laurent Simon
Abstract:
Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs – such as airports - are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schipol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airport Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform ecosystem model of innovation. The recent entrance of new actors in airports (Google, Amazon, Accor, Vinci, Airbnb and others) have forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to force traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and forced their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts. 
This research has implications for: 1 - innovation theory, 2 - urban and transport policy, and 3 - organizational practice - within the mobility industry and across the economy. Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs
Procedia PDF Downloads 182