Search results for: traveling speed
36 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as lack of adequate upper air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and for different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best model among the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble based on training with similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point to allow for spatial error or uncertainty have also been tested with the above-mentioned approaches. The comparison of these schemes with respect to the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
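As an illustration of the training and forecast phases described above, the sketch below fits superensemble weights with a least-squares multiple regression on synthetic member forecasts and then combines new forecasts with those weights; all numbers (biases, noise levels, rainfall values) are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical illustration of the superensemble idea: during a training
# window, regress observed rainfall on the member-model forecasts at one
# grid point to obtain weights, then apply those weights in the forecast phase.

rng = np.random.default_rng(0)
n_train, n_models = 120, 5                      # training days, member GCMs

obs_train = rng.gamma(2.0, 3.0, n_train)        # observed rainfall (mm/day), synthetic
# Member forecasts = observation plus model-specific bias and noise (synthetic)
biases = np.array([1.5, -0.8, 0.5, 2.0, -1.2])
forecasts_train = obs_train[:, None] + biases + rng.normal(0, 2.0, (n_train, n_models))

# Least-squares multiple regression: obs ~ intercept + sum_i w_i * forecast_i
X = np.column_stack([np.ones(n_train), forecasts_train])
coeffs, *_ = np.linalg.lstsq(X, obs_train, rcond=None)
intercept, weights = coeffs[0], coeffs[1:]

# Forecast phase: combine new member forecasts with the trained weights
new_member_forecasts = obs_train[-1] + biases + rng.normal(0, 2.0, n_models)
superensemble = intercept + new_member_forecasts @ weights
print(f"weights: {np.round(weights, 3)}, superensemble forecast: {superensemble:.2f} mm/day")
```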
Procedia PDF Downloads 139
35 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification
Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos
Abstract:
Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancer in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist’s visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining definite diagnoses. Machine learning is defined as the study of computational algorithms that can learn mathematical relationships and patterns, simple or complex, from empirical and scientific data in order to make reliable decisions. In this context, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used for every single patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features for investigation were chosen, which included age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieve the highest accuracy using the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support-Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA platform and the MATLAB platform, respectively. Results-Conclusions: The results and the classification accuracy of the images for each type of glioma by the four different algorithms are still in process.
Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology
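For readers who want a concrete picture of the classification step, the following sketch trains and cross-validates an SVM and a k-NN classifier on a placeholder feature table with scikit-learn; the study itself used WEKA and MATLAB, so this is only an analogous illustration with synthetic features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: 80 patients x 20 features (age, shape, intensity,
# texture). The real study extracted features in MATLAB and classified in WEKA.
rng = np.random.default_rng(42)
X = rng.normal(size=(80, 20))                       # synthetic features
y = np.repeat([0, 1, 2, 3], 20)                     # ependymoma/astrocytoma/medulloblastoma/healthy

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=5)     # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```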
Procedia PDF Downloads 149
34 Blood Lipid Management: Combined Treatment with Hydrotherapy and Ozone Bubbles Bursting in Water
Authors: M. M. Wickramasinghe
Abstract:
Cholesterol and triglycerides are lipids, mainly essential to maintain the cellular structure of the human body. Cholesterol is also important for hormone production, vitamin D production, proper digestive functions, and strengthening the immune system. Excess fats in the blood circulation, known as hyperlipidemia, become harmful, leading to arterial clogging and causing atherosclerosis. The aim of this research is to develop a treatment protocol to efficiently break down and maintain circulatory lipids by improving blood circulation, without strenuous physical exercise, while immersed in a tub of water. To achieve the target of a strong exercise effect, this method involves generating powerful ozone bubbles that spin, collide, and burst in the water. Powerful emission of air into water is capable of transferring the locked energy of the water molecules and releasing energy. This method involves a water- and air-based impact generated by pumping ozone at a rate of 46 L/sec with a concentration of 0.03-0.05 ppt, according to the safety standards of the Federal Institute for Drugs and Medical Devices, BfArM, Germany. The direct impact of the ozone bubbles on the muscular system and skin becomes the main target and is capable of increasing the heart rate while immersed in water. A total time duration of 20 minutes is adequate to exert a strong exercise effect, improve blood circulation, and stimulate the nervous and endocrine systems. Unstable ozone breaks down into oxygen, which is released at the surface of the water, giving additional benefits and supplying high-quality air rich in the oxygen required to maintain efficient metabolic functions. A breathing technique was introduced to improve the efficiency of lung function and benefit the air exchange mechanism. The temperature of the water is maintained at 39 °C to 40 °C to support arterial dilation and enzyme functions and efficiently improve blood circulation to the vital organs. The buoyancy of the water and the natural hydrostatic pressure release the tension of the body weight and relax the mind and body. Sufficient hydration (3 L of water per day) is an essential requirement to transport nutrients and remove waste byproducts for processing by the liver, kidneys, and skin. Proper nutritional intake is an added advantage to optimize the efficiency of this method, which aids in a fast recovery process. Within 20-30 days of daily treatment, reductions in triglycerides, low-density lipoprotein (LDL), and total cholesterol were observed in patients with abnormal lipid profiles. Borderline patients were cleared within 10–15 days of treatment. This is a highly efficient system that provides many benefits and is able to achieve a successful reduction of triglycerides, LDL, and total cholesterol within a short period of time. Supported by proper hydration and nutritional balance, this system of natural treatment maintains healthy levels of lipids in the blood and avoids the risk of cerebral stroke, high blood pressure, and heart attacks.
Keywords: atherosclerosis, cholesterol, hydrotherapy, hyperlipidemia, lipid management, ozone therapy, triglycerides
Procedia PDF Downloads 91
33 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool
Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad
Abstract:
Present-day aero-engine fan blades, turboprop propellers, and gas turbine or steam turbine low-pressure blades are getting bigger and lighter and thus become more flexible. Therefore, flutter, forced blade response and vibration-related failure of high aspect ratio blades are of main concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations is needed to achieve a flutter-free, safe design. Most of the numerical methods used for aeroelastic analysis are field-based methods such as the finite difference method, the finite element method, the finite volume method, or their combinations. These numerical schemes are used to solve the coupled fluid-flow/structural equations based on the full Navier-Stokes (NS) equations along with structural mechanics equations. Such schemes provide very accurate results if modeled properly; however, they are computationally very expensive and need large computing resources along with considerable personal expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase. A reduced order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more in demand at this stage. Similar ROAMs are being used by other researchers for aeroelastic and forced response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as in flexible wings. In the present work, a hybrid flow solver based on a viscous-inviscid coupled 3D panel method (PM) and a 3D discrete vortex particle method (DVM) is developed; viscous parameters are estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD. In the second phase of the research work, the flow solver (PM) will be coupled with a reduced-order non-linear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform the complete aeroelastic simulation of a steam turbine bladed disk, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for the fluid-structure interaction (FSI). The numerical results are compared with experimental data for different test cases, and for the blade cascade test case, experimental data are obtained from in-house lab experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas turbine or steam turbine blades, propellers or fan blades will provide researchers and engineers a fast, cost-effective and efficient tool for aeroelastic (classical flutter) analysis of different designs at the preliminary design stage, where large numbers of design iterations are required in a short time frame.
Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduced order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling
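The partitioned fluid-structure coupling mentioned above can be pictured with the following schematic loop, in which a flow solver and a structural solver exchange loads and deflections with under-relaxation until the interface converges; both solvers here are crude placeholder models, not the PM/DVM or beam-element FEM developed in the paper.

```python
import numpy as np

# Schematic partitioned FSI loop: the flow solver returns aerodynamic loads for
# a given blade deflection, and the structural solver returns the deflection for
# a given load. Both "solvers" below are crude placeholders for illustration.

def aero_loads(deflection, q_dyn=5.0e3, dcl_dalpha=2 * np.pi):
    # Load grows with the incidence induced by the deflection (placeholder model)
    return q_dyn * dcl_dalpha * 0.01 * deflection + 100.0

def structural_response(load, stiffness=2.0e4):
    # Static beam-like response: deflection = load / stiffness (placeholder model)
    return load / stiffness

deflection = 0.0
for it in range(50):                       # sub-iterations within one time step
    load = aero_loads(deflection)
    new_deflection = structural_response(load)
    # Under-relaxation stabilizes the partitioned data exchange
    relaxed = deflection + 0.5 * (new_deflection - deflection)
    if abs(relaxed - deflection) < 1e-8:   # interface convergence check
        deflection = relaxed
        break
    deflection = relaxed

print(f"converged tip deflection = {deflection:.4e} m after {it + 1} iterations")
```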
Procedia PDF Downloads 268
32 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors
Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang
Abstract:
Under the background of global climate change and the rapid development of modern urbanization, the frequency of climate disasters such as extreme precipitation in cities around the world is gradually increasing. In this paper, the Hi-PIMS model is used to simulate the "720" flood in Zhengzhou, the continuous stages of flood resilience are determined, and the urban flood stages are divided. Flood resilience curves under the influence of multiple factors were determined, and urban flood resilience was evaluated by combining the results of the resilience curves. The flood resilience of each urban grid unit was evaluated based on economy, population, road network, hospital distribution and land use type. Firstly, the rainfall data of meteorological stations near Zhengzhou and the remote sensing rainfall data from July 17 to 22, 2021 were collected. The Kriging interpolation method was used to expand the rainfall data of Zhengzhou. According to the rainfall data, the flood process generated by four rainfall events in Zhengzhou was reproduced. Based on the results of the inundation range and inundation depth in different areas, the flood process was divided into four stages: absorption, resistance, overload and recovery, based on the once-in-50-years rainfall standard. At the same time, based on the levels of slope, GDP, population, hospital affected area, land use type, road network density and other aspects, the resilience curve was applied to evaluate the urban flood resilience of different regional units, and the differences in the flood process under the different precipitation amounts of the "720" rainstorm in Zhengzhou were analyzed. Faced with a rainstorm exceeding the 1,000-year return period, most areas quickly enter the overload stage. The influence of each factor differs among areas; some areas with slopes or higher terrain have better resilience and restore normal social order faster, that is, their recovery stage requires a shorter time. Some low-lying areas or special terrain, such as tunnels, will enter the overload stage faster in the case of heavy rainfall. As a result, high levels of flood protection, water level warning systems and faster emergency response are needed in areas with low resilience and high risk. The building density of built-up areas, the population of densely populated areas and the road network density all have a certain negative impact on urban flood resistance, while the positive impact of slope on flood resilience is also very obvious. While hospitals can have positive effects on medical treatment, they also have negative effects, such as high population density and asset density, when they encounter floods. A separate comparison of the grid units containing hospitals shows that the resilience within the hospital distribution range is low when floods occur. Therefore, in addition to improving the flood resistance capacity of cities, reasonable planning can also increase the flood response capacity of cities. Changes in these influencing factors can further improve urban flood resilience, for example, raising design standards, providing temporary water storage areas when floods occur, training emergency personnel to respond faster, and adjusting emergency support equipment.
Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve
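One common way to turn a simulated flood response into a resilience measure, consistent with the absorption/resistance/overload/recovery staging described above, is to integrate a normalized system-performance curve over the event; the short sketch below does this with an entirely synthetic performance series (it is not Hi-PIMS output).

```python
import numpy as np

# Sketch: quantify resilience as the normalized area under a system-performance
# curve P(t) over a flood event (1 = full function, 0 = total loss of function).
# The performance series and the 0.5 threshold are synthetic assumptions.

t = np.arange(0.0, 72.0, 1.0)                      # hours since rainfall onset
performance = np.ones_like(t)                      # absorption: full function
resist = (t >= 6) & (t < 18)
performance[resist] = 1.0 - 0.05 * (t[resist] - 6) # resistance: gradual degradation
performance[(t >= 18) & (t < 36)] = 0.4            # overload: degraded plateau
recover = t >= 36
performance[recover] = np.minimum(1.0, 0.4 + 0.025 * (t[recover] - 36))  # recovery

# Trapezoidal integration of P(t), normalized by the event duration
area = np.sum(0.5 * (performance[1:] + performance[:-1]) * np.diff(t))
resilience_index = area / (t[-1] - t[0])
overload_hours = int(np.sum(performance < 0.5))    # time spent below the threshold
print(f"resilience index = {resilience_index:.2f}, hours below threshold = {overload_hours}")
```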
Procedia PDF Downloads 40
31 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study
Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang
Abstract:
Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. Nowadays, more people than ever before are recognising and reporting adverse reactions from medications, treatments, and vaccines. In the modern era, with over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions and so provides an opportunity to engage with more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety and improve patient outcomes. This will be achieved by firstly uncovering and categorising the barriers facing the widespread adoption of social media in pharmacovigilance. Following this, the potential opportunities of social media will be explored. We will then propose realistic, practical recommendations to make social media a more effective tool for pharmacovigilance. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by conducting 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings whilst also exploring the unpublished and real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results gave rise to several key analysis points. Firstly, inadequacies of current Natural Language Processing algorithms hinder effective pharmacovigilance data extraction from social media, and where data extraction is possible, there are significant questions over its quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with new, modern General Data Protection Regulations (GDPR), creating ethical ambiguity about data privacy and level of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should be used to complement traditional sources of pharmacovigilance rather than as a sole source of pharmacovigilance data. However, this project adds further value by proposing five practical recommendations that improve the effectiveness of social media pharmacovigilance. These include: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines using multi-stakeholder processes; creating an adverse drug reaction reporting interface inbuilt into social media platforms; and, finally, developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. 
Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media
Procedia PDF Downloads 83
30 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and business-logic time to market. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, there are other issues that remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod — a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs on the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure since linuxkit installs the minimum set of dependencies required to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
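The deployment flow described above can be sketched schematically as follows; this is not the actual i2kit input format, CLI, or API, and every name in it (service fields, image identifiers, endpoints) is hypothetical, but it conveys the idea of a declarative pod-of-containers definition turned into machine images behind a load balancer whose endpoint is injected via environment variables.

```python
# Schematic illustration of the workflow described above, NOT the actual i2kit
# input format or API: each microservice is a pod of containers, built into a
# machine image and placed behind a load balancer whose endpoint is injected
# into dependent services as an environment variable.

service_definitions = {
    "api": {"containers": ["api:1.4.0", "sidecar-logger:0.9"], "replicas": 2,
            "depends_on": ["db"]},
    "db": {"containers": ["postgres:13"], "replicas": 1, "depends_on": []},
}

def build_machine_image(name, containers):
    # Placeholder for building a minimal VM image (e.g., with linuxkit)
    return f"ami-{name}-{'-'.join(c.split(':')[0] for c in containers)}"

def deploy(name, image, replicas):
    # Placeholder for launching VMs behind a load balancer, returning its endpoint
    return f"https://{name}.internal.example.com"

endpoints = {}
for name in sorted(service_definitions, key=lambda s: len(service_definitions[s]["depends_on"])):
    spec = service_definitions[name]
    image = build_machine_image(name, spec["containers"])
    endpoints[name] = deploy(name, image, spec["replicas"])
    # Service discovery: dependency endpoints exposed as environment variables
    env = {f"{dep.upper()}_ENDPOINT": endpoints[dep] for dep in spec["depends_on"]}
    print(name, image, env)
```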
Procedia PDF Downloads 180
29 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World
Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber
Abstract:
Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to be flatter. This is attributed to rapid globalization and the interdependence of humanity that engendered a tremendous in-flow of human migration towards urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step in understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh, was visually annotated and categorized by an expert, and consequently a map was drawn. The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google’s DeepLabV3+ model was used. The model uses the atrous convolution operation to analyze different layers of texture and shape. This allows us to enlarge the field of view of the filters to incorporate larger context. Imagery encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% Mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing and developed world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities. It can be an essential tool for policy makers to plan future sustainable urban spaces.
Keywords: semantic segmentation, urban environment, deep learning, urban building, classification
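As a minimal illustration of the automation step, the sketch below runs a DeepLabV3 model from torchvision (used here as a stand-in for the DeepLabV3+ model in the study) on a placeholder image tensor and computes a mean IoU against a placeholder mask; a real pipeline would fine-tune the classifier head on the sixteen annotated urban sub-categories.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Minimal semantic-segmentation sketch using torchvision's DeepLabV3 as a
# stand-in for the DeepLabV3+ model used in the study. A random tensor stands
# in for a normalized satellite tile; the 16 classes mirror the sub-categories.

model = deeplabv3_resnet50(weights=None, num_classes=16).eval()
tile = torch.rand(1, 3, 512, 512)                # placeholder for one image tile

with torch.no_grad():
    logits = model(tile)["out"]                  # shape: (1, 16, 512, 512)
pred = logits.argmax(dim=1).squeeze(0)           # per-pixel class map

def mean_iou(pred, target, num_classes=16):
    """Mean Intersection over Union against a ground-truth mask."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0

target = torch.randint(0, 16, pred.shape)        # placeholder ground truth
print(f"mIoU on placeholder data: {mean_iou(pred, target):.3f}")
```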
Procedia PDF Downloads 192
28 Multiphysic Coupling Between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Launchers
Authors: Margarita Dufresne
Abstract:
This study is devoted to the development of a TPS for small reusable space launchers. We have used the SIRIUS design for the S1 prototype. Multiphysics coupling of hypersonic reactive flow and thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ and COMSOL Multiphysics, and by FASTRAN and ACE+. Flow around hypersonic flight vehicles involves the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real gas implies the existence of a gas in equilibrium or non-equilibrium. The Mach number ranges from 5 to 10 for first-stage flight. The goals of this effort are to provide validation of the iterative coupling of hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal-structural analysis to simulate conjugate heat transfer, with conduction, free convection and radiation, to represent the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O and O2. Seventeen chemical reactions, involving the calculation of dissociation and recombination probabilities, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. In the algorithms employed to solve the reactive equations, the second-order numerical scheme is obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case. The coupled inviscid flux uses AUSM+ flux-vector splitting. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order, and provides improved (i.e., reduced) dissipation compared to the second-order discretization scheme. The initial unstructured mesh is refined using an initial pressure gradient technique for the shock/shock interaction test case. The turbulence models suggested by NASA are the k-omega SST with a1 = 0.355 and QCR (quadratic) as the constitutive option. k and omega are specified explicitly in the initial conditions and in regions: k = 1E-6 * Uinf^2 and omega = 5 * Uinf / (mean aerodynamic chord or characteristic length). We put into practice modelling tips for hypersonic flow, such as an automatic coupled solver, adaptive mesh refinement to capture and refine the shock front, and use of the advancing layer mesher and a larger prism layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure from 1e-4 to 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for hot hypersonic reactive flow and conjugate heat transfer. The results of both approaches meet the CIRCA wind tunnel results.
Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, FASTRAN, ACE+, COMSOL Multiphysics, STAR-CCM+, thermal protection system (TPS), space launcher, wind tunnel
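The forward rate coefficients mentioned above follow the modified Arrhenius form k_f(T) = A * T^n * exp(-Ta / T); the short sketch below evaluates this form for a few dissociation reactions, with placeholder coefficients that are not the actual Dunn/Kang values.

```python
import math

# Modified Arrhenius form used for the forward rate coefficients:
#   k_f(T) = A * T**n * exp(-Ta / T)
# The coefficients below are placeholders for illustration only; they are NOT
# the actual Dunn/Kang mechanism values.

reactions = {
    "N2 + M -> N + N + M": {"A": 1.0e15, "n": -0.5, "Ta": 113200.0},
    "O2 + M -> O + O + M": {"A": 2.0e15, "n": -1.0, "Ta": 59500.0},
    "NO + M -> N + O + M": {"A": 5.0e9,  "n":  0.0, "Ta": 75500.0},
}

def forward_rate(A, n, Ta, T):
    """Forward rate coefficient at temperature T [K]."""
    return A * T**n * math.exp(-Ta / T)

for T in (3000.0, 10000.0, 30000.0):
    rates = {name: forward_rate(**coeffs, T=T) for name, coeffs in reactions.items()}
    print(T, {k: f"{v:.3e}" for k, v in rates.items()})
```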
Procedia PDF Downloads 72
27 High Purity Lignin for Asphalt Applications: Using the Dawn Technology™ Wood Fractionation Process
Authors: Ed de Jong
Abstract:
Avantium is a leading technology development company and a frontrunner in renewable chemistry. Avantium develops disruptive technologies that enable the production of sustainable, high-value products from renewable materials and actively seeks out collaborations and partnerships with like-minded companies and academic institutions globally to speed up the introduction of chemical innovations in the marketplace. In addition, Avantium helps companies to accelerate their catalysis R&D to improve efficiencies and deliver increased sustainability, growth, and profits by providing proprietary systems and services to this end. Many chemical building blocks and materials can be produced from biomass, nowadays mainly from first-generation carbohydrates, but the potential for competition with the human food chain leads brand owners to look for strategies to transition from first- to second-generation feedstocks. The use of non-edible lignocellulosic feedstock is an equally attractive source for producing chemical intermediates and an important part of the solution to these global issues (Paris targets). Avantium’s Dawn Technology™ separates the glucose, mixed sugars, and lignin available in non-food agricultural and forestry residues such as wood chips, wheat straw, bagasse, empty fruit bunches or corn stover. The resulting very pure lignin is dense in energy and can be used for energy generation. However, such a material might preferably be deployed in higher added-value applications. Bitumen, which is fossil-based, is mostly used for paving applications. Traditional hot mix asphalt emits large quantities of the GHGs CO₂, CH₄, and N₂O, which is unfavorable for obvious environmental reasons. Another challenge for the bitumen industry is that the petrochemical industry is becoming more and more efficient in breaking down higher-chain hydrocarbons into lower-chain hydrocarbons with higher added value than bitumen. This has a negative effect on the availability of bitumen. The asphalt market, as well as governments, are looking for alternatives with higher sustainability in terms of GHG emissions. The usage of alternative sustainable binders, which can (partly) replace bitumen, contributes to reducing GHG emissions and at the same time broadens the availability of binders. As lignin is a major component (around 25-30%) of lignocellulosic material, which includes terrestrial plants (e.g., trees, bushes, and grass) and agricultural residues (e.g., empty fruit bunches, corn stover, sugarcane bagasse, straw, etc.), it is highly available globally. Its chemical structure shows a resemblance to the structure of bitumen and could, therefore, be used as an alternative to bitumen in applications like roofing or asphalt. Applications such as the use of lignin in asphalt need both fundamental research and practical proof under relevant use conditions. From a fundamental point of view, rheological aspects, as well as mixing, are key criteria. From a practical point of view, behavior in real road conditions is key (how easily can the asphalt be prepared, how easily can it be applied on the road, what is its durability, etc.).
The paper will discuss the fundamentals of the use of lignin as a bitumen replacement as well as the status of the different demonstration projects in Europe using lignin as a partial bitumen replacement in asphalts, and will especially present the results of using Dawn Technology™ lignin as a partial replacement of bitumen.
Keywords: biorefinery, wood fractionation, lignin, asphalt, bitumen, sustainability
Procedia PDF Downloads 155
26 Quality in Healthcare: An Autism-Friendly Hospital Emergency Waiting Room
Authors: Elena Bellini, Daniele Mugnaini, Michele Boschetto
Abstract:
People with an Autistic Spectrum Disorder and an Intellectual Disability who need to attend a Hospital Emergency Waiting Room frequently present high levels of discomfort and challenging behaviors due to stress-related hyperarousal, sensory sensitivity, novelty-anxiety, communication and self-regulation difficulties. Increased agitation and acting out also disturb the diagnostic and therapeutic processes, and the emergency room climate. Architectural design disciplines aimed at reducing distress in hospitals or creating autism-friendly environments are called for to find effective answers to this particular need. A growing number of researchers are considering the physical environment as an important point of intervention for people with autism. It has been shown that providing the right setting can help enhance confidence and self-esteem and can have a profound impact on their health and wellbeing. Environmental psychology has evaluated the perceived quality of care, looking at the design of hospital rooms, paths and circulation, waiting rooms, services and devices. Furthermore, many studies have investigated the influence of the hospital environment on patients, in terms of stress-reduction and therapeutic intervention’ speed, but also on health professionals and their work. Several services around the world are organizing autism-friendly hospital environments which involve the architecture and the specific staff training. In Italy, the association Spes contra spem has promoted and published, in 2013, the ‘Chart of disabled people in the hospital’. It stipulates that disabled people should have equal rights to accessible and high-quality care. There are a few Italian examples of therapeutic programmes for autistic people as the Dama project in Milan and the recent experience of Children and Autism Foundation in Pordenone. Careggi’s Emergency Waiting Room in Florence has been built to satisfy this challenge. This project of research comes from a collaboration between the technical staff of Careggi Hospital, the Center for autism PAMAPI and some architects expert in the sensory environment. The methodology of focus group involved architects, psychologists and professionals through a transdisciplinary research, centered on the links between the spatial characteristics and clinical state of people with ASD. The relationship between architectural space and quality of life is studied to pay maximum attention to users’ needs and to support the medical staff in their work by a specific program of training. The result of this research is a sum of criteria used to design the emergency waiting room, that will be illustrated. A protected room, with a clear space design, maximizes comprehension and predictability. The multisensory environment is thought to help sensory integration and relaxation. Visual communication through Ipad allows an anticipated understanding of medical procedures, and a specific technological system supports requests, choices and self-determination in order to fit sensory stimulation to personal preferences, especially for hypo and hypersensitive people. All these characteristics should ensure a better regulation of the arousal, less behavior problems, improving treatment accessibility, safety, and effectiveness. First results about patient-satisfaction levels will be presented.Keywords: accessibility of care, autism-friendly architecture, personalized therapeutic process, sensory environment
Procedia PDF Downloads 268
25 Development of Advanced Virtual Radiation Detection and Measurement Laboratory (AVR-DML) for Nuclear Science and Engineering Students
Authors: Lily Ranjbar, Haori Yang
Abstract:
Online education has been around for several decades, but the importance of online education became evident after the COVID-19 pandemic. Even though the online delivery approach works well for knowledge building through delivering content and oversight processes, it has limitations in developing hands-on laboratory skills, especially in the STEM field. During the pandemic, many educational institutions faced numerous challenges in delivering lab-based courses, especially in the STEM field. Also, many students worldwide were unable to practice working with lab equipment due to social distancing or the significant cost of highly specialized equipment. The laboratory plays a crucial role in nuclear science and engineering education. It can engage students and improve their learning outcomes. In addition, online education and virtual labs have gained substantial popularity in engineering and science education. Therefore, developing virtual labs is vital for institutions to deliver high-class education to their students, including their online students. The School of Nuclear Science and Engineering (NSE) at Oregon State University, in partnership with the SpectralLabs company, has developed an Advanced Virtual Radiation Detection and Measurement Lab (AVR-DML) to offer a fully online Master of Health Physics program. It was essential for us to use a system that could simulate nuclear modules that accurately replicate the underlying physics, the nature of radiation and radiation transport, and the mechanics of the instrumentation used in a real radiation detection lab. It was all accomplished using a Realistic, Adaptive, Interactive Learning System (RAILS). RAILS is a comprehensive software simulation-based learning system for use in training. It comprises a web-based learning management system located on a central server, as well as a 3D simulation package that is downloaded locally to user machines. Users will find that the graphics, animations, and sounds in RAILS create a realistic, immersive environment in which to practice detecting different radiation sources. These features allow students to coexist, interact and engage with a real STEM lab in all its dimensions. It enables them to feel like they are in a real lab environment and to see the same systems they would in a lab. Unique interactive interfaces were designed and developed by integrating all the tools and equipment needed to run each lab. These interfaces provide students with full functionality for data collection, changing the experimental setup, and live data collection with real-time updates for each experiment. Students can manually perform all experimental setups and parameter changes in this lab. Experimental results can then be tracked and analyzed with an oscilloscope, a multi-channel analyzer, or a single-channel analyzer (SCA). The advanced virtual radiation detection and measurement laboratory developed in this study enabled the NSE school to offer a fully online MHP program. This flexibility of course modality helped us to attract more non-traditional students, including international students. It is a valuable educational tool, as students can walk around the virtual lab, make mistakes, and learn from them. They have an unlimited amount of time to repeat and engage in experiments. This lab will also help us speed up training in nuclear science and engineering.
Keywords: advanced radiation detection and measurement, virtual laboratory, realistic adaptive interactive learning system (RAILS), online education in STEM fields, student engagement, STEM online education, STEM laboratory, online engineering education
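To give a flavor of the detector-response simulation behind such a virtual lab, the sketch below generates a simple gamma-ray spectrum (exponential background plus a Gaussian photopeak with Poisson counting noise) of the kind a virtual multi-channel analyzer might display; the energies, resolution, and count rates are placeholder assumptions, and this is not the RAILS implementation.

```python
import numpy as np

# Sketch of a simulated gamma-ray spectrum such as a virtual multi-channel
# analyzer could display: an exponential Compton-like background plus a
# Gaussian photopeak, with Poisson counting noise. All values are placeholders.

channels = np.arange(1024)
energy = channels * 1.0                      # assume 1 keV per channel
peak_energy, fwhm = 662.0, 30.0              # e.g., a Cs-137-like photopeak
sigma = fwhm / 2.355

background = 200.0 * np.exp(-energy / 400.0)
photopeak = 500.0 * np.exp(-0.5 * ((energy - peak_energy) / sigma) ** 2)
expected_counts = background + photopeak

rng = np.random.default_rng(1)
measured = rng.poisson(expected_counts)      # counting statistics

# Simple net-area estimate in a window around the photopeak
window = (energy > peak_energy - 3 * sigma) & (energy < peak_energy + 3 * sigma)
net_area = measured[window].sum() - background[window].sum()
print(f"net photopeak area = {net_area:.0f} counts")
```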
Procedia PDF Downloads 91
24 Essential Oils of Polygonum L. Plants Growing in Kazakhstan and Their Antibacterial and Antifungal Activity
Authors: Dmitry Yu. Korulkin, Raissa A. Muzychkina
Abstract:
Bioactive substances of plant origin can be one of the advanced means of addressing the issue of combined therapy for inflammation. The main advantages of medicinal plants are the mildness and breadth of their therapeutic effect on an organism, the absence of side effects and complications even if used continuously, and high tolerability by patients. Moreover, medicinal plants are often the only and (or) most cost-effective sources of natural biologically active substances and medicines. Along with other biologically active groups of chemical compounds, essential oils, with their wide range of pharmacological effects, have become well established in medical practice. Essential oil was obtained by hydrodistillation of the air-dry aerial parts of Polygonum L. plants using a Clevenger apparatus. The qualitative composition of the essential oils was analyzed by the chromatography–mass spectrometry method using an Agilent 6890N apparatus. The qualitative analysis is based on the comparison of retention times and full mass spectra with the respective data on components of reference oils and pure compounds, where available, and with the data of the mass spectra libraries Wiley 7th edition and NIST 02. The main components of the essential oils are, for: Polygonum amphibium L. - γ-terpinene, borneol, piperitol, 1,8-cineole, α-pinene, linalool, terpinolene and sabinene; Polygonum minus Huds. Fl. Angl. – linalool, terpinolene, camphene, borneol, 1,8-cineole, α-pinene, 4-terpineol and 1-octen-3-ol; Polygonum alpinum All. – camphene, sabinene, 1-octen-3-ol, 4-carene, p- and o-cymol, γ-terpinene, borneol, -terpineol; Polygonum persicaria L. - α-pinene, sabinene, -terpinene, 4-carene, 1,8-cineole, borneol, 4-terpineol. Antibacterial activity was investigated against strains of the gram-positive bacteria Staphylococcus aureus, Bacillus subtilis and Streptococcus agalactiae, against the gram-negative strain Escherichia coli, and against the yeast fungus Candida albicans using the agar diffusion method. The reference medicines were gentamicin for bacteria and nystatin for the yeast fungus Candida albicans. It has been shown that Polygonum L. essential oils have a moderate antibacterial effect on gram-positive microorganisms and weak antifungal activity against the Candida albicans yeast fungus. At the second stage of our research, the wound-healing properties of an ointment form of 3% essential oil were studied on a model of flat dermal wounds. A model of flat dermal wounds was used to assess the influence of the essential oil on healing processes. The speed of wound healing in rats of the different groups was judged by assessing the wound area over time. During the study of wound-healing properties, no disturbance of the integral indicators was observed in either group: general condition and behavior of the animals, food intake, and excretion. The wound-healing action of the 3% ointment based on Polygonum L. essential oil and polyethylene glycol is comparable with the action of the reference substances. As more favorable healing dynamics were observed in the experimental group than in the control group, the tested ointment can be deemed more promising for further detailed study as a wound-healing agent.
Keywords: antibacterial, antifungal, bioactive substances, essential oils, isolation, Polygonum L.
Procedia PDF Downloads 535
23 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental to modern society and the economy. However, they need modernizing, maintaining, and reinforcing interventions, which require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructures (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) the yearly investment peak, by evenly spreading investment throughout multiple years; (ii) the total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research which is now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow for an accumulation of works on the same line, which may cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest one in the world), so it is not expected that considerably larger problems appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational time cost did not increase very significantly. It is thus expected that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective i), solutions improve considerably in the remaining two objectives.
Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
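A toy version of the weighted-sum scalarization of such a renewal-scheduling model can be written in a few lines with PuLP, as sketched below; the sections, costs, priorities, and weights are illustrative only, and the formulation is a simplified stand-in for the paper's actual model.

```python
import pulp

# Toy sketch of the renewal-scheduling problem: decide in which year each
# section is renewed, minimizing a weighted sum of (i) the yearly investment
# peak, (ii) extra maintenance cost from postponement, and (iii) priority-
# weighted start delays. All data and weights below are illustrative only.

sections = {"S1": {"cost": 10, "priority": 3}, "S2": {"cost": 6, "priority": 1},
            "S3": {"cost": 8, "priority": 2}, "S4": {"cost": 12, "priority": 3}}
years = range(3)
w_peak, w_cost, w_delay = 1.0, 0.2, 0.5
extra_maintenance = 1.0            # extra cost per year a section stays unrenewed

prob = pulp.LpProblem("renewal_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (sections, years), cat="Binary")  # x[s][t] = renew s in year t
peak = pulp.LpVariable("peak", lowBound=0)

for s in sections:                                   # every section renewed exactly once
    prob += pulp.lpSum(x[s][t] for t in years) == 1
for t in years:                                      # peak bounds each year's spending
    prob += pulp.lpSum(sections[s]["cost"] * x[s][t] for s in sections) <= peak

delay = pulp.lpSum(sections[s]["priority"] * t * x[s][t] for s in sections for t in years)
backlog_cost = pulp.lpSum(extra_maintenance * t * x[s][t] for s in sections for t in years)
prob += w_peak * peak + w_cost * backlog_cost + w_delay * delay  # weighted-sum objective

prob.solve(pulp.PULP_CBC_CMD(msg=False))
schedule = {s: next(t for t in years if x[s][t].value() > 0.5) for s in sections}
print("schedule:", schedule, "| peak investment:", peak.value())
```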
Procedia PDF Downloads 146
22 Digital Adoption of Sales Support Tools for Farmers: A Technology Organization Environment Framework Analysis
Authors: Sylvie Michel, François Cocula
Abstract:
Digital agriculture is an approach that exploits information and communication technologies. These encompass data acquisition tools like mobile applications, satellites, sensors, connected devices, and smartphones. Additionally, it involves transfer and storage technologies such as 3G/4G coverage, low-bandwidth terrestrial or satellite networks, and cloud-based systems. Furthermore, embedded or remote processing technologies, including drones and robots for process automation, along with high-speed communication networks accessible through supercomputers, are integral components of this approach. While farm-level adoption studies regarding digital agricultural technologies have emerged in recent years, they remain relatively limited in comparison to other agricultural practices. To bridge this gap, this study delves into understanding farmers' intention to adopt digital tools, employing the technology, organization, environment framework. A qualitative research design encompassed semi-structured interviews, totaling fifteen in number, conducted with key stakeholders both prior to and following the 2020-2021 COVID-19 lockdowns in France. Subsequently, the interview transcripts underwent thorough thematic content analysis, and the data and verbatim were triangulated for validation. A coding process aimed to systematically organize the data, ensuring an orderly and structured classification. Our research extends its contribution by delineating sub-dimensions within each primary dimension. A total of nine sub-dimensions were identified, categorized as follows: perceived usefulness for communication, perceived usefulness for productivity, and perceived ease of use constitute the first dimension; technological resources, financial resources, and human capabilities constitute the second dimension, while market pressure, institutional pressure, and the COVID-19 situation constitute the third dimension. Furthermore, this analysis enriches the TOE framework by incorporating entrepreneurial orientation as a moderating variable. Managerial orientation emerges as a pivotal factor influencing adoption intention, with producers acknowledging the significance of utilizing digital sales support tools to combat "greenwashing" and elevate their overall brand image. Specifically, it illustrates that producers recognize the potential of digital tools in time-saving and streamlining sales processes, leading to heightened productivity. Moreover, it highlights that the intent to adopt digital sales support tools is influenced by a market mimicry effect. Additionally, it demonstrates a negative association between the intent to adopt these tools and the pressure exerted by institutional partners. Finally, this research establishes a positive link between the intent to adopt digital sales support tools and economic fluctuations, notably during the COVID-19 pandemic. The adoption of sales support tools in agriculture is a multifaceted challenge encompassing three dimensions and nine sub-dimensions. The research delves into the adoption of digital farming technologies at the farm level through the TOE framework. This analysis provides significant insights beneficial for policymakers, stakeholders, and farmers. These insights are instrumental in making informed decisions to facilitate a successful digital transition in agriculture, effectively addressing sector-specific challenges.Keywords: adoption, digital agriculture, e-commerce, TOE framework
Procedia PDF Downloads 61
21 Cluster Randomized Trial of 'Ready to Learn': An After-School Literacy Program for Children Starting School
Authors: Geraldine Macdonald, Oliver Perra, Nina O’Neill, Laura Neeson, Kathryn Higgins
Abstract:
Background: Despite improvements in recent years, almost one in six children in Northern Ireland (NI) leaves primary school without achieving the expected level in English and Maths. By early adolescence, this ratio is one in five. In 2010-11, around 9000 pupils in NI had failed to achieve the required standard in literacy and numeracy by the time they left full-time education. This paper reports the findings of an experimental evaluation of a programme designed to improve educational outcomes of a cohort of children starting primary school in areas of high social disadvantage in Northern Ireland. The intervention: ‘Ready to Learn’ comprised two key components: a literacy-rich After School programme (one hour after school, three days per week), and a range of activities and support to promote the engagement of parents with their children’s learning, in school and at home. The intervention was delivered between September 2010 and August 2013. Study aims and objectives: The primary aim was to assess whether, and to what extent, ‘Ready to Learn’ improved the literacy of socially disadvantaged children entering primary schools compared with children in schools without access to the programme. Secondary aims included assessing the programme’s impact on children’s social, emotional and behavioural regulation, and parents’ engagement with their children’s learning. In total, 505 children (almost all) participated in the baseline assessment for the study, with good retention over seven sweeps of data collection. Study design: The intervention was evaluated by means of a cluster randomized trial, with schools as the unit of randomization and analysis. It included a qualitative component designed to examine process and implementation, and to explore the concept of parental engagement. Sixteen schools participated, with nine randomized to the experimental group. As well as outcome data relating to children, 134 semi-structured interviews were conducted with parents over the three years of the study, together with 88 interviews with school staff. Results: Given the children’s ages, not all measures used were direct measures of reading. Findings point to a positive impact of “Ready to Learn” on children’s reading achievement (comprehension and fluency), as assessed by the York Assessment of Reading Comprehension (YARC), and on decoding, assessed using the Word Recognition and Phonic Skills test (WRaPS3). Effects were not large, but evidence suggests that it is unusual for an after-school programme to clearly demonstrate effects on reading skills. No differences were found on three other measures of literacy-related skills: the British Picture Vocabulary Scale (BPVS-II), the Naming Speed and Non-word Reading Tests from the Phonological Assessment Battery (PhAB), or Concepts about Print (CAP), the last due to an age-related ceiling effect. No differences were found between the two groups on measures of social, emotional and behavioural regulation, and, due to low levels of participation, it was not possible directly to assess the contribution of the parent component to children’s outcomes. The qualitative data highlighted conflicting concepts of engagement between parents and school staff. Ready to Learn is a promising intervention that merits further support and evaluation.
Keywords: after-school, education, literacy, parental engagement
Procedia PDF Downloads 38120 A Basic Concept for Installing Cooling and Heating System Using Seawater Thermal Energy from the West Coast of Korea
Authors: Jun Byung Joon, Seo Seok Hyun, Lee Seo Young
Abstract:
As carbon dioxide emissions increase due to rapid industrialization and reckless development, abnormal climates such as floods and droughts are occurring. In order to respond to such climate change, the use of existing fossil fuels is being reduced, and the proportion of eco-friendly renewable energy is gradually increasing. Korea is an energy resource-poor country that depends on imports for 93% of its total energy. As the instability of the global energy supply chain experienced during the Russia-Ukraine crisis increases, countries around the world are resetting energy policies to minimize energy dependence and strengthen security. Seawater thermal energy is a renewable energy source that can replace conventional air-source heat energy. Because seawater has a higher specific heat than air, it can cool and heat the main spaces of buildings with higher heat-transfer efficiency, minimizing the electricity that must be generated from fossil fuels and thereby minimizing carbon dioxide emissions. In addition, because only the temperature of the seawater is used, and only to a limited degree, the effect on the marine environment is very small. K-water carried out a demonstration project that supplied cooling and heating energy to spaces such as the central control room and presentation room of the management building, using as a heat source the seawater circulated through the waterway of its tidal power plant. Compared with the East Sea and the South Sea, the west coast has a large tidal range, a small temperature difference, and lower temperatures; the main system was designed with these characteristics in mind, and its performance was verified through operation during the demonstration period. In addition, facility improvements were made for major deficiencies to strengthen monitoring functions, provide user convenience, and improve facility soundness. To build on these achievements, a basic concept was developed to expand the seawater heating and cooling system to a scale of 200 USRT at the Tidal Culture Center. With the operational experience of the demonstration system, it will be possible to establish an optimal seawater-source cooling and heating system suited to the characteristics of the west coast. Through this, operating costs can be reduced by KRW 33.31 million per year compared with an air-source system, and joint industry-university-research work can localize major equipment and materials and develop key element technologies to revitalize the seawater thermal energy business and support expansion into overseas markets. Government effort is needed to expand the seawater heating and cooling system. Seawater thermal energy draws only on the thermal energy of an effectively unlimited resource. It has less impact on the environment than river-water thermal energy, apart from localized factors such as bottom dredging, excavation, and sand or stone extraction. Therefore, project implementation should be accelerated by simplifying unnecessary licensing and permitting procedures. In addition, support should be provided to secure business feasibility by substantially reducing or exempting usage fees for public waters, to actively encourage development by the private sector.Keywords: seawater thermal energy, marine energy, tidal power plant, energy consumption
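To make the specific-heat argument concrete, the sketch below compares the thermal power that the same volume flow of seawater and of air can exchange across a modest temperature difference, using Q = ṁ·c_p·ΔT; the flow rate and ΔT are illustrative assumptions, not values from the K-water demonstration. For reference, the proposed 200 USRT scale corresponds to roughly 700 kW of cooling capacity (1 USRT ≈ 3.52 kW).

```python
# A minimal sketch of why the higher specific heat (and density) of seawater helps:
# the thermal power available from a given flow is Q = m_dot * c_p * dT. The flow
# rate, temperature drop, and fluid properties below are illustrative assumptions.
def thermal_power_kw(volume_flow_m3_s, density_kg_m3, cp_kj_kg_k, delta_t_k):
    m_dot = volume_flow_m3_s * density_kg_m3        # mass flow rate [kg/s]
    return m_dot * cp_kj_kg_k * delta_t_k           # heat exchanged [kW]

flow = 0.05          # m³/s drawn through the heat exchanger (assumed)
delta_t = 3.0        # K extracted/rejected across the exchanger (assumed)

seawater = thermal_power_kw(flow, 1025.0, 3.99, delta_t)   # typical seawater properties
air      = thermal_power_kw(flow, 1.2, 1.005, delta_t)     # same volume flow of air

print(f"seawater: {seawater:,.0f} kW, air: {air:.2f} kW for the same volume flow")
```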
Procedia PDF Downloads 10319 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan
Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad
Abstract:
Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data with a conventional scintillator-based, fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively high, and commonly specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also an electron counting detector, optimized for diffraction applications with high speed and high dynamic range. Lastly, the data collection workflow, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be another challenge of the 3DED technique. Traditionally, this has all been done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher quality 3DED data enable structure determination with higher confidence, while automated workflows allow acquisitions to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 Å to better than 1 Å) for protein and small molecule structure determination. We will also show how the Latitude D software facilitates collecting such data in an integrated and fully automated user interface.Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, Digitalmicrograph, proteins, small molecules
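As a back-of-the-envelope illustration of why low-dose counting detectors matter here, the sketch below totals the electron dose accumulated over a continuous-rotation tilt series and checks it against a damage budget; the dose rates, tilt range, and budget are illustrative assumptions, not Gatan specifications or values from this work.

```python
# A minimal sketch of the dose-budgeting reasoning behind choosing low-dose,
# counting detectors for 3DED: the accumulated electron dose over a continuous
# tilt series must stay below a damage budget. All numbers are assumed.
def accumulated_dose(dose_rate_e_per_A2_s, tilt_range_deg, tilt_speed_deg_s):
    exposure_s = tilt_range_deg / tilt_speed_deg_s      # total acquisition time
    return dose_rate_e_per_A2_s * exposure_s            # e-/Å² deposited on the crystal

BUDGET = 3.0            # e-/Å², assumed tolerable dose for a beam-sensitive protein crystal
tilt_range = 120.0      # deg, e.g. -60° to +60° continuous rotation
tilt_speed = 1.0        # deg/s

for dose_rate in (0.01, 0.03, 0.1):                     # e-/Å²/s at the specimen
    total = accumulated_dose(dose_rate, tilt_range, tilt_speed)
    verdict = "within" if total <= BUDGET else "exceeds"
    print(f"dose rate {dose_rate:>5} e-/Å²/s -> total {total:.1f} e-/Å² ({verdict} budget)")
```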
Procedia PDF Downloads 10718 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information that the DSEM code requires to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from a file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculation of the solution in every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
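The two startup-read strategies described above can be sketched in a few lines with mpi4py; this is an illustrative outline, not the DSEM reader itself, and the file names, record sizes, and node-communicator split are assumptions.

```python
# A minimal sketch (not the authors' DSEM code) of the two startup-file read
# strategies described above, using mpi4py. The pre-files are assumed to exist
# and to contain fixed-size blocks of 64-bit integers, one block per rank.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

RECORDS_PER_RANK = 1024                       # assumed block size per rank
record = np.empty(RECORDS_PER_RANK, dtype=np.int64)

# Strategy 1 (suited to Lustre): every rank reads its own slice with MPI I/O.
fh = MPI.File.Open(comm, "prefile.bin", MPI.MODE_RDONLY)
fh.Read_at_all(rank * record.nbytes, record)  # collective read, one slice per rank
fh.Close()

# Strategy 2 (suited to GPFS): one rank per node reads a node-specific file and
# forwards slices to the other ranks on that node with non-blocking sends, so
# the traffic stays inside the node.
node_comm = comm.Split_type(MPI.COMM_TYPE_SHARED)        # ranks sharing a node
if node_comm.Get_rank() == 0:
    data = np.fromfile("prefile_node.bin", dtype=np.int64)   # hypothetical per-node file
    reqs = [node_comm.Isend(data[i * RECORDS_PER_RANK:(i + 1) * RECORDS_PER_RANK], dest=i)
            for i in range(1, node_comm.Get_size())]
    record[:] = data[:RECORDS_PER_RANK]
    MPI.Request.Waitall(reqs)
else:
    node_comm.Recv(record, source=0)
```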
Procedia PDF Downloads 13517 AI-Enhanced Self-Regulated Learning: Proposing a Comprehensive Model with 'Studium' to Meet a Student-Centric Perspective
Authors: Smita Singh
Abstract:
Objective: The Faculty of Chemistry Education at Humboldt University has developed ‘Studium’, a web application designed to enhance long-term self-regulated learning (SRL) and academic achievement. Leveraging advanced generative AI, ‘Studium’ offers a dynamic and adaptive educational experience tailored to individual learning preferences and languages. The application includes evolving tools for personalized note-taking from preferred sources, customizable presentation capabilities, and AI-assisted guidance from academic documents or textbooks. It also features workflow automation and seamless integration with collaborative platforms like Miro, powered by AI. This study aims to propose a model that combines generative AI with traditional features and customization options, empowering students to create personalized learning environments that effectively address the challenges of SRL. Method: The study included graduate and undergraduate students from diverse subject streams, with 15 participants each from Germany and India, ensuring a diverse educational background. An exploratory design was employed using a speed dating method with enactment, in which different scenario sessions were created to allow participants to experience various features of ‘Studium’. Each session lasted 50 minutes, providing an in-depth exploration of the platform's capabilities. Participants interacted with Studium’s features via Zoom conferencing and then took part in semi-structured interviews lasting 10-15 minutes to provide deeper insights into the effectiveness of ‘Studium’. Additionally, online questionnaire surveys were conducted before and after the session to gather feedback and evaluate satisfaction with self-regulated learning (SRL) after using ‘Studium’. The response rate of this survey was 100%. Results: The findings of this study indicate that students widely acknowledged the positive impact of ‘Studium’ on their learning experience, particularly its adaptability and intuitive design. They expressed a desire for more tools like ‘Studium’ to support self-regulated learning in the future. The application significantly fostered students' independence in organizing information and planning study workflows, which in turn enhanced their confidence in mastering complex concepts. Additionally, ‘Studium’ promoted strategic decision-making and helped students overcome various learning challenges, reinforcing their self-regulation, organization, and motivation skills. Conclusion: The proposed model emphasizes the need for effective integration of personalized AI tools into active learning and SRL environments. By addressing key research questions, our framework aims to demonstrate how AI-assisted platforms like ‘Studium’ can facilitate deeper understanding, maintain student motivation, and support the achievement of academic goals.
Thus, our ideal model for AI-assisted educational platforms provides a strategic approach to enhancing students' learning experiences and promoting their development as self-regulated learners.Keywords: self-regulated learning (SRL), generative AI, AI-assisted educational platforms
Procedia PDF Downloads 3116 Railway Composite Flooring Design: Numerical Simulation and Experimental Studies
Authors: O. Lopez, F. Pedro, A. Tadeu, J. Antonio, A. Coelho
Abstract:
The future of the railway industry lies in the innovation of lighter, more efficient and more sustainable trains. Weight optimization in railway vehicles reduces power consumption and CO₂ emissions, increases the efficiency of the engines and the maximum speed reached, reduces wear of wheels and rails, increases the space available for passengers, etc. Among the various systems that make up railway interiors, the flooring system is one with the greatest impact on passenger safety and comfort, as well as on the weight of the interior systems. Due to their high weight-saving potential, relatively high mechanical resistance, good acoustic and thermal performance, ease of modular design, cost-effectiveness and long life, new sustainable composite materials and panels provide the latest innovations for competitive solutions in the development of flooring systems. However, one of the main drawbacks of these flooring systems is their relatively poor resistance to point loads. Point loads in railway interiors can be caused by passengers or by components fixed to the flooring system, such as seats and restraint systems, handrails, etc. They can therefore give rise to higher fatigue loading under service conditions, or to zones of high stress concentration under exceptional loads (higher longitudinal, transverse and vertical accelerations), thus reducing the system's useful life. To verify all the mechanical and functional requirements of a flooring system, many physical prototypes would traditionally be created during the design phase, with all the high costs associated with them. Nowadays, virtual prototyping with computer-aided design (CAD) and computer-aided engineering (CAE) software allows a product to be validated before committing to physical test prototypes. The scope of this work was to use current computer tools and integrate the processes of innovation, development, and manufacturing, reducing the time from design to finished product and optimising the product for higher levels of performance and reliability. In this case, the mechanical response of several sandwich panels with different cores, polystyrene foams and composite corks, was assessed to optimise the weight and the mechanical performance of a flooring solution for railways. Sandwich panels with aluminum face sheets were tested to characterise their mechanical performance and determine the polystyrene foam and cork properties when used as inner cores. Then, a railway flooring solution was fully modelled (including the elastomer pads that provide the required vibration isolation from the car body), and structural simulations were performed using FEM analysis to verify compliance with all the technical product specifications for the supply of a flooring system. Zones with high stress concentrations were studied and tested. The influence of vibration modes on the comfort level and stability is discussed. The information obtained with the computer tools was then complemented by several mechanical tests performed on selected solutions and on specific components. The results of the numerical simulations and of the experimental campaign are presented in this paper. This research work was performed as part of the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020.Keywords: cork agglomerate core, mechanical performance, numerical simulation, railway flooring system
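For the point-load behaviour discussed above, a first-order check can be made with classical sandwich-beam theory before any FEM model is built. The sketch below estimates bending and shear deflection under a central point load for two candidate cores; all material properties and dimensions are illustrative assumptions, not the panels tested in this work.

```python
# A minimal sketch, not the authors' model: classical sandwich-beam theory for a
# first estimate of panel bending stiffness and midspan deflection under a point
# load. All material values and dimensions below are illustrative assumptions.

def sandwich_beam_deflection(P, L, b, t_f, c, E_f, E_c, G_c):
    """Simply supported sandwich beam, central point load P (three-point bending)."""
    d = c + t_f                                   # distance between face-sheet centroids
    D = (E_f * b * t_f**3 / 6                     # faces bending about their own axes
         + E_f * b * t_f * d**2 / 2               # faces bending about the panel mid-plane
         + E_c * b * c**3 / 12)                   # core contribution (often negligible)
    S = G_c * b * d**2 / c                        # shear stiffness of the core
    return P * L**3 / (48 * D) + P * L / (4 * S)  # bending + shear deflection [m]

# Aluminum faces with two candidate cores (assumed stiffness values).
common = dict(P=1_000.0, L=0.6, b=0.4, t_f=1.0e-3, c=20.0e-3, E_f=70e9)
print("core A:", sandwich_beam_deflection(E_c=10e6, G_c=5e6, **common))
print("core B:", sandwich_beam_deflection(E_c=20e6, G_c=10e6, **common))
```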
Procedia PDF Downloads 18015 Optimizing the Residential Design Process Using Automated Technologies and AI
Authors: Milena Nanova, Martin Georgiev, Radul Shishkov, Damyan Damov
Abstract:
Modern residential architecture is increasingly influenced by rapid urbanization, technological advancements, and growing investor expectations. The integration of AI and digital tools such as CAD and BIM (Building Information Modelling) is transforming the design process by improving efficiency, accuracy, and speed. However, urban development faces challenges, including high competition for viable sites and the time-consuming nature of traditional investment feasibility studies and architectural planning. Finding and analysing suitable sites for residential development is complicated by intense competition and rising investor demands. Investors require quick assessments of property potential to avoid missing opportunities, while traditional architectural design processes rely on the experience of the team and can be time-consuming, adding pressure to make fast, effective decisions. The widespread use of CAD tools has sped up the drafting process, enhancing both accuracy and efficiency. Digital tools allow designers to manipulate drawings quickly, reducing the time spent on revisions. BIM advances this further by enabling native 3D modelling, where changes to a design in one view are automatically reflected in all others, minimizing errors and saving time. AI is becoming an integral part of architectural design software. While AI is currently being incorporated into existing programs like AutoCAD, Revit, and ArchiCAD, its full potential is reached in parametric modelling. In this process, designers define parameters (e.g., building size, layout, and materials), and the software generates multiple design variations based on those inputs. This method accelerates the design process by automating decisions and enabling the quick generation of alternative solutions. The study utilizes generative design, a specific application of parametric modelling that uses AI to explore a wide range of design possibilities based on predefined criteria. It optimizes designs through iteration, testing many variations to find the best solutions. This process is particularly beneficial in the early stages of design, where multiple options are explored before the best ones are refined. AI’s ability to handle complex mathematical tasks allows it to generate unconventional yet effective designs that a human designer might overlook. Residential architecture, with its predictable, typical layouts and modular nature, is especially suitable for generative design. The relationships between rooms and the overall organization of apartment units follow logical patterns, making residential design an ideal candidate for parametric modelling. Using these tools, architects can quickly explore various apartment configurations, considering factors like apartment sizes, types, and circulation patterns, and identify the most efficient layout for a given site. Parametric modelling and generative design offer significant benefits to residential architecture by streamlining the design process, enabling faster decision-making, and optimizing building layouts. These technologies allow architects and developers to analyse numerous design possibilities, improving outcomes while responding to the challenges of urban development. By integrating AI-driven generative design, the architecture industry can enhance creativity, efficiency, and adaptability in residential projects.Keywords: architectural design, residential buildings, generative design, parametric models, workflow optimization
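The parametric-exploration loop described above can be reduced to a toy example: enumerate unit-mix variants for a fixed floor plate and rank them by a simple efficiency score. The sketch below is illustrative only; the unit areas, circulation allowance, and scoring rule are assumptions, not the study's criteria.

```python
# A minimal sketch of the generative-design idea described above, not a real
# CAD/BIM integration: enumerate parametric floor-plate variants and rank them
# by a simple efficiency score. All parameters and the scoring rule are assumed.
from itertools import product

FLOOR_PLATE_AREA = 800.0            # m², assumed site-driven constraint
UNIT_AREAS = {"1-bed": 50.0, "2-bed": 70.0, "3-bed": 90.0}

def score(n1, n2, n3, corridor_frac):
    sellable = n1 * UNIT_AREAS["1-bed"] + n2 * UNIT_AREAS["2-bed"] + n3 * UNIT_AREAS["3-bed"]
    gross = sellable * (1.0 + corridor_frac)             # add circulation/core allowance
    if gross > FLOOR_PLATE_AREA or sellable == 0:
        return None                                      # infeasible variant
    return sellable / FLOOR_PLATE_AREA                   # net-to-gross efficiency

variants = []
for n1, n2, n3, corr in product(range(9), range(7), range(5), (0.15, 0.20, 0.25)):
    s = score(n1, n2, n3, corr)
    if s is not None:
        variants.append((s, n1, n2, n3, corr))

best = sorted(variants, reverse=True)[:5]                # shortlist for the designer
for s, n1, n2, n3, corr in best:
    print(f"efficiency={s:.2f}  1-bed={n1} 2-bed={n2} 3-bed={n3} circulation={corr:.0%}")
```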
Procedia PDF Downloads 214 Electromyographic Analysis of Biceps Brachii during Golf Swing and Review of Its Impact on Return to Play Following Tendon Surgery
Authors: Amin Masoumiganjgah, Luke Salmon, Julianne Burnton, Fahimeh Bagheri, Gavin Lenton, S. L. Ezekial Tan
Abstract:
Introduction: The incidence of proximal biceps tenodesis and acute distal biceps repair is increasing, and rehabilitation protocols following both are variable. Golf is a popular sport within Australia, and the Gold Coast has become a mecca for golfers, with more courses per capita than anywhere else in the world. Currently, there are no clear guidelines regarding return to golf following biceps procedures. The aim of this study was to determine biceps brachii activation during the golf swing through electromyographic analysis and, subsequently, to inform rehabilitation guidelines and return to golf following tenodesis and repair. Methods: Subjects were amateur golfers with no previous upper limb surgery. Surface electromyography (EMG) and high-speed video recording were used to analyse activation of the left and right biceps brachii and the anterior deltoid during the golf swing. Each participant’s maximum voluntary contraction (MVC) was recorded, and they were then required to hit a golf ball aiming for specific distances of 2, 50, 100 and 150 metres at a driving range. Noraxon myoResearch and Matlab were used for data analysis. Mean %MVC was calculated for the leading and trailing arms during the full swing and its four phases: back-swing, acceleration, early follow-through and late follow-through. Results: 12 golfers (2 female and 10 male) participated in the study. Median age was 27 (25 – 38), and all were right-handed. Over all distances, the mean activation of the short and long heads of biceps brachii was < 10% through the full swing. When breaking down the 50, 100 and 150 m swings into phases, mean %MVC activation was lowest in backswing (5.1%), compared with acceleration (9.7%), early follow-through (9.2%), and late follow-through (21.4%). There was more variation and slightly higher activation in the right biceps (trailing arm) in backswing, acceleration, and early follow-through, with higher activation in the leading arm in late follow-through (25.4% leading, 17.3% trailing). 2 m putts resulted in low %MVC values (3.1%) with little variation across swing phases. There was considerable individual variation in results – one tense subject averaged 11.0% biceps MVC through the 2 m putting stroke, and others recorded peak mean %MVC biceps activations of 68.9% at 50 m, 101.3% at 100 m, and 111.3% at 150 m. Discussion: Previous studies have investigated the role of the rotator cuff, spine, and hip muscles during the golf swing; however, to our knowledge, this is the first study to investigate the activation of biceps brachii. Many rehabilitation programs following a biceps tenodesis or repair allow active range against gravity and restrict strengthening exercises until 6 weeks, and this does not appear to be associated with any adverse outcome. Previous studies demonstrate that a range of < 10% MVC is similar to the unloaded biceps brachii during walking (1), that active elbow flexion with the hand positioned in either pronation or supination produces < 20% MVC throughout range (2), and that elbow flexion with a 4 kg dumbbell can produce mean MVCs of around 40% (3). Our study demonstrates that increasing activation is associated with the leading arm, increasing shot distance and the late follow-through phase. Although the cohort mean MVC of the biceps brachii is < 10% through the full swing, variability is high, and biceps activation reaches peak mean MVCs of over 100% in different swing phases for some individuals.
Given these EMG values, caution is recommended when advising patients to return to long-distance golf shots after biceps procedures, particularly for the leading arm. Even though putting would appear to be as safe as having an unloaded hand out of a sling following biceps procedures, the variability of activation patterns across golfers would lead us to caution against accelerated golf rehabilitation in particularly tense golfers. The 50 m short iron shot was too long to be considered a chip shot, and more work is needed in this area to determine the safety of chipping.Keywords: electromyographic analysis, biceps brachii rupture, golf swing, tendon surgery
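As an illustration of the %MVC normalisation and phase-wise averaging used in this analysis, a minimal Python sketch follows; the sampling rate, envelope window, phase boundaries, and signals are hypothetical stand-ins, not the Noraxon/Matlab pipeline or the study data.

```python
# A minimal sketch of normalising surface EMG to %MVC and averaging it per swing
# phase. The sampling rate, envelope window, and phase boundaries are assumed.
import numpy as np

FS = 1000                                   # Hz, assumed sampling rate
def envelope(emg, win_ms=50):
    """Full-wave rectify and smooth with a moving average to get a linear envelope."""
    win = int(FS * win_ms / 1000)
    kernel = np.ones(win) / win
    return np.convolve(np.abs(emg - emg.mean()), kernel, mode="same")

def percent_mvc_by_phase(emg, mvc_emg, phase_bounds):
    """phase_bounds: dict of phase name -> (start_sample, end_sample)."""
    mvc_level = envelope(mvc_emg).max()     # reference amplitude from the MVC trial
    env = envelope(emg) / mvc_level * 100.0 # express the swing trial as %MVC
    return {name: env[s:e].mean() for name, (s, e) in phase_bounds.items()}

# Hypothetical trial: 2 s swing, phase boundaries marked from the high-speed video.
rng = np.random.default_rng(0)
mvc_trial = rng.normal(0, 1.0, 3 * FS)      # stand-in for a recorded MVC trial
swing = rng.normal(0, 0.1, 2 * FS)          # stand-in for a recorded swing trial
phases = {"backswing": (0, 800), "acceleration": (800, 1100),
          "early follow-through": (1100, 1500), "late follow-through": (1500, 2000)}
print(percent_mvc_by_phase(swing, mvc_trial, phases))
```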
Procedia PDF Downloads 8113 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and to provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model that can segment vertebral bone metastatic lesions of prostate adenocarcinoma on CT scan images. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques, because a readily available functionality for this is absent. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years of experience in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet’s success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either above 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involves balancing dataset size against similarity to the target data. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, the Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models, in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures in PyTorch, will be compared. In the future, molecular correlations will be tracked against radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
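Since the DSC is the central performance metric above, a minimal sketch of its computation on binary masks follows; the toy masks are illustrative, not the study's CT segmentations.

```python
# A minimal sketch of the Dice Similarity Coefficient (DSC) used to score
# automated segmentations against the radiologist's ground truth.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: a predicted lesion mask that partially overlaps the ground truth.
truth = np.zeros((64, 64), dtype=np.uint8); truth[20:40, 20:40] = 1
pred  = np.zeros((64, 64), dtype=np.uint8); pred[25:45, 25:45]  = 1
print(f"DSC = {dice(pred, truth):.2f}")    # ~0.56 for this toy overlap
```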
Procedia PDF Downloads 9712 Internet of Assets: A Blockchain-Inspired Academic Program
Authors: Benjamin Arazi
Abstract:
Blockchain is the technology behind cryptocurrencies like Bitcoin. It revolutionizes the meaning of trust by offering total reliability without relying on any central entity that controls or supervises the system. The Wall Street Journal states: “Blockchain Marks the Next Step in the Internet’s Evolution”. Blockchain was listed as #1 in LinkedIn’s The Learning Blog list of the “most in-demand hard skills needed in 2020”. As stated there: “Blockchain’s novel way to store, validate, authorize, and move data across the internet has evolved to securely store and send any digital asset”. GSMA, a leading organization of mobile communications operators, declared that “Blockchain has the potential to be for value what the Internet has been for information”. Motivated by these seminal observations, this paper presents the foundations of a Blockchain-based “Internet of Assets” academic program that joins under one roof leading application areas characterized by the transfer of assets over communication lines. Two such areas, which are pillars of our economy, are Fintech – Financial Technology – and mobile communications services. The next application in line is healthcare. These challenges are addressed on the basis of extensive professional literature. Blockchain-based asset communication extends the principle of Bitcoin, starting with a basic question: if digital money that travels across the universe can ‘prove its own validity’, can this principle be applied to other digital content? A groundbreaking positive answer led to the concept of the “smart contract” and consequently to DLT – Distributed Ledger Technology – where ‘distributed’ refers to the absence of central entities or trusted third parties. The terms Blockchain and DLT are frequently used interchangeably in various application areas. The World Bank Group has compiled comprehensive reports analyzing the contribution of DLT/Blockchain to Fintech. The European Central Bank and the Bank of Japan are engaged in Project Stella, “Balancing confidentiality and auditability in a distributed ledger environment”. 130 DLT/Blockchain-focused Fintech startups are now operating in Switzerland. Blockchain’s impact on mobile communications services is treated in detail by leading organizations. The TM Forum is a global industry association in the telecom industry, with over 850 member companies, mainly mobile operators, that generate US$2 trillion in revenue and serve five billion customers across 180 countries. From their perspective: “Blockchain is considered one of the digital economy’s most disruptive technologies”. Samples of Blockchain contributions to Fintech (taken from a World Bank document): decentralization and disintermediation; greater transparency and easier auditability; automation and programmability; immutability and verifiability; gains in speed and efficiency; cost reductions; enhanced cyber security resilience. Samples of Blockchain contributions to the Telco industry: establishing identity verification; records of transactions for easy cost settlement; automatic triggering of roaming contracts, enabling near-instantaneous charging and a reduction in roaming fraud; decentralized roaming agreements; settling accounts for costs incurred in accordance with agreed tariffs. This clearly calls for an academic education structure in which the fundamental technologies are studied in classes together with these two application areas. Advanced courses treating specific implementations then follow separately.
All are under the roof of “Internet of Assets”.Keywords: blockchain, education, financial technology, mobile telecommunications services
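The ‘prove its own validity’ principle referred to above can be illustrated with a toy chained-hash ledger: each block commits to the previous block's hash, so tampering is detectable by anyone holding a copy, without a central authority. The sketch below is a teaching simplification (no consensus, mining, or signatures), not Bitcoin or any production DLT.

```python
# A minimal sketch of the chained-hash idea behind blockchain's self-validating
# ledgers; block fields and the validation rule are simplified assumptions.
import hashlib, json

def block_hash(block: dict) -> str:
    payload = json.dumps({k: block[k] for k in ("index", "prev_hash", "data")}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    block = {"index": len(chain), "prev_hash": chain[-1]["hash"] if chain else "0" * 64, "data": data}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                     # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                     # link to the previous block is broken
    return True

ledger = []
add_block(ledger, "asset A -> Alice")
add_block(ledger, "asset A -> Bob")
print(is_valid(ledger))                      # True
ledger[0]["data"] = "asset A -> Mallory"     # tampering is detectable without a central authority
print(is_valid(ledger))                      # False
```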
Procedia PDF Downloads 18011 Development of Portable Hybrid Renewable Energy System for Sustainable Electricity Supply to Rural Communities in Nigeria
Authors: Abdulkarim Nasir, Alhassan T. Yahaya, Hauwa T. Abdulkarim, Abdussalam El-Suleiman, Yakubu K. Abubakar
Abstract:
The need for a sustainable and reliable electricity supply in the rural communities of Nigeria remains a pressing issue, given the country's vast energy deficit and the significant number of inhabitants lacking access to electricity. This research focuses on the development of a portable hybrid renewable energy system designed to provide a sustainable and efficient electricity supply to these underserved regions. The proposed system integrates multiple renewable energy sources, specifically solar and wind, to harness the abundant natural resources available in Nigeria. The design and development process involves the selection and optimization of components such as photovoltaic panels, wind turbines, energy storage units (batteries), and power management systems. These components are chosen based on their suitability for rural environments, cost-effectiveness, and ease of maintenance. The hybrid system is designed to be portable, allowing easy transportation and deployment in remote locations with limited infrastructure. Key to the system's effectiveness is its hybrid nature, which ensures a continuous power supply by compensating for the intermittent nature of the individual renewable sources. Solar energy is harnessed during the day, while wind energy is captured whenever wind conditions are favourable, ensuring a more stable and reliable energy output. Energy storage units are critical in this setup, storing excess energy generated during peak production times and supplying power during periods of low renewable generation. The design is informed by site studies assessing the solar irradiance, wind speed patterns, and energy consumption needs of rural communities, and simulation results are used to optimize the system's design for maximum energy efficiency and reliability. This paper presents the development and evaluation of a 4 kW standalone hybrid system combining wind and solar power. The portable device measures approximately 8 feet 5 inches in width, 8 inches 4 inches in depth, and around 38 feet in height. It includes four solar panels with a capacity of 120 watts each, a 1.5 kW wind turbine, a solar charge controller, remote power storage, batteries, and battery control mechanisms. Designed to operate independently of the grid, this hybrid device offers versatility for use on highways and in various other applications. The paper also presents a summary and characterization of the device, along with photovoltaic data collected in Nigeria during the month of April. The construction plan for the hybrid energy tower is outlined, which involves combining a vertical-axis wind turbine with solar panels to harness both wind and solar energy. Positioned between the roadway divider and passing automobiles, the tower takes advantage of the air velocity generated by passing vehicles. The solar panels are strategically mounted to deflect air toward the turbine while generating energy. Generators and gear systems attached to the turbine shaft enable power generation, offering a portable solution to energy challenges in Nigerian communities. The study also addresses the economic feasibility of the system, considering the initial investment costs, maintenance, and potential savings from reduced fossil fuel use. A comparative analysis with traditional energy supply methods highlights the long-term benefits and sustainability of the hybrid system.Keywords: renewable energy, solar panel, wind turbine, hybrid system, generator
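A first-pass sizing check for such a system can be made with a simple hourly energy balance over a typical day, as sketched below; the generation profiles, battery capacity, efficiency, and load are illustrative assumptions, not measurements from the Nigerian deployment.

```python
# A minimal sketch of an hourly energy-balance check for a small PV + wind +
# battery system. Profiles, ratings, and efficiencies are illustrative assumptions.
PV_RATED_KW = 4 * 0.12          # four 120 W panels
WIND_RATED_KW = 1.5
BATTERY_KWH = 5.0               # assumed usable storage
ROUND_TRIP_EFF = 0.9

def simulate_day(solar_frac, wind_frac, load_kw, soc_kwh=2.5):
    """solar_frac/wind_frac: hourly fractions of rated output; load_kw: hourly demand."""
    unmet = 0.0
    for s, w, load in zip(solar_frac, wind_frac, load_kw):
        gen = PV_RATED_KW * s + WIND_RATED_KW * w
        surplus = gen - load
        if surplus >= 0:
            soc_kwh = min(BATTERY_KWH, soc_kwh + surplus * ROUND_TRIP_EFF)
        else:
            draw = min(-surplus, soc_kwh)
            soc_kwh -= draw
            unmet += (-surplus - draw)      # demand the system could not serve
    return soc_kwh, unmet

solar = [0]*6 + [0.2, 0.5, 0.7, 0.9, 1.0, 1.0, 0.9, 0.7, 0.5, 0.2] + [0]*8
wind  = [0.3]*24                            # steady light wind, e.g. roadside air movement
load  = [0.15]*24                           # a few lights, phone charging, a fan
print(simulate_day(solar, wind, load))      # (final battery state [kWh], unserved energy [kWh])
```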
Procedia PDF Downloads 4410 Computational, Human, and Material Modalities: An Augmented Reality Workflow for Building Form-Found Textile Structures
Authors: James Forren
Abstract:
This research paper details a recent demonstrator project in which digitally form-found textile structures were built by human craftspersons wearing augmented reality (AR) head-worn displays (HWDs). The project utilized a wet-state natural fiber / cementitious matrix composite to generate minimal-bending shapes in tension which, when cured and rotated, performed as minimal-bending compression members. The significance of the project is that it synthesizes computational structural simulations with visually guided handcraft production. Computational and physical form-finding methods with textiles are well characterized in the development of architectural form. One difficulty, however, is physically building the computer simulations, which often requires complicated digital fabrication workflows. AR HWDs, by contrast, have been used to build complex digital forms from bricks, wood, plastic, and steel without digital fabrication devices. These projects utilize, instead, the tacit-knowledge motor schemas of the human craftsperson. Computational simulations offer unprecedented speed and performance in solving complex structural problems. Human craftspersons possess highly efficient, complex spatial reasoning motor schemas. And textiles offer efficient form-generating possibilities for individual structural members and overall structural forms. This project proposes that the synthesis of these three modalities of structural problem-solving – computational, human, and material – may not only develop efficient structural form but offer further creative potentialities when the respective intelligence of each modality is productively leveraged. The project methodology pertains to its three modalities of production: 1) computational, 2) human, and 3) material. A proprietary three-dimensional graphic statics simulator generated a three-legged arch as a wireframe model. This wireframe was discretized into nine modules, three modules per leg. Each module was modeled as a woven matrix of one-inch diameter cords. And each woven matrix was transmitted to a holographic engine running on the HWDs. Craftspersons wearing the HWDs then wove wet cementitious cords within a simple falsework frame to match the minimal-bending form displayed in front of them. Once the woven components cured, they were demounted from the frame. The components were then assembled into a full structure using the holographically displayed computational model as a guide. The assembled structure was approximately eighteen feet in diameter and ten feet in height and matched the holographic model to under an inch of tolerance. The construction validated the computational simulation of the minimal-bending form, as it was dimensionally stable for a ten-day period, after which it was disassembled. The demonstrator illustrated the facility with which a computationally derived, structurally stable form could be achieved through the holographically guided, complex three-dimensional motor schema of the human craftsperson. However, the workflow traveled unidirectionally from computer to human to material, failing to fully leverage the intelligence of each modality. Subsequent research – a workshop testing human interaction with a physics-engine simulation of string networks, and work on the use of HWDs to capture hand gestures in weaving – seeks to develop further interactivity with rope and cord towards a bi-directional workflow within full-scale building environments.Keywords: augmented reality, cementitious composites, computational form finding, textile structures
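The form-finding principle behind the project – a network relaxed in pure tension yields a shape that works in pure compression when inverted or, as here, cured and rotated – can be sketched with a toy dynamic-relaxation routine. The sketch below is illustrative only; it is not the proprietary graphic statics simulator, and the node count, stiffness, damping, and time step are assumptions.

```python
# A minimal sketch of hanging-chain form finding by dynamic relaxation: a slack
# cable settles under gravity into a pure-tension shape, which is then inverted
# to obtain a compression form. All parameters are illustrative assumptions.
import numpy as np

N = 21                                   # nodes along one cord
rest_len = 0.06                          # m, slightly slack relative to the 1 m span
k, mass, g, damping, dt = 500.0, 0.05, 9.81, 0.98, 0.005

pos = np.column_stack([np.linspace(0.0, 1.0, N), np.zeros(N)])   # start as a straight line
vel = np.zeros_like(pos)

for _ in range(20000):
    force = np.zeros_like(pos)
    force[:, 1] -= mass * g                              # gravity on every node
    seg = pos[1:] - pos[:-1]
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    spring = k * (length - rest_len) * seg / length      # axial forces along each segment
    force[:-1] += spring
    force[1:] -= spring
    vel = (vel + force / mass * dt) * damping
    vel[0] = vel[-1] = 0.0                               # supports are pinned
    pos += vel * dt

hanging_form = pos.copy()
compression_form = hanging_form * np.array([1.0, -1.0])  # invert to get the arch shape
print("mid-span sag:", abs(hanging_form[N // 2, 1]))
```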
Procedia PDF Downloads 1769 Optical Coherence Tomography in Differentiation of Acute and Non-Healing Wounds
Authors: Ananya Barui, Provas Banerjee, Jyotirmoy Chatterjee
Abstract:
The application of optical technology in medicine and biology has a long track record. OCT has attracted both engineers and biologists to work together in the field of photonics, establishing a striking non-invasive imaging technology. In contrast to other in vivo imaging modalities such as Raman imaging, confocal imaging, and two-photon microscopy, which are limited to imaging depths of 100-200 microns by numerical aperture and scattering, OCT can achieve high-resolution imaging up to a few millimetres into tissue structures, depending on their refractive index and anatomical location. This tomographic system depends on the interference of two light waves in an interferometer to produce a depth profile of the specimen. In wound healing, frequent collection of biopsies for follow-up of the repair process could be avoided by such an imaging technique. Real-time skin OCT (the 'optical biopsy') can illuminate cutaneous tissue deeper and faster to acquire high-resolution cross-sectional images of its internal micro-structure. Swept-Source OCT (SS-OCT), a novel imaging technique, can generate high-speed depth profiles (~2 mm) of a wound by sweeping the laser, with micron-level resolution and an optimal coherence length of 5-6 mm. Multi-layered skin tissue normally shows different optical properties, along with variations in thickness, refractive index and composition (i.e., keratin layer, water, fat, etc.), according to anatomical location. For instance, the stratum corneum, the uppermost and relatively dehydrated layer of the epidermis, reflects more light and produces a sharp, lucid demarcation line with the rest of the hydrated epidermal region. During wound healing or regeneration, the optical properties of cutaneous tissue are continuously altered as the wound bed matures. More mature and less hydrated tissue components reflect more light and appear as brighter areas, whereas immature regions containing more water or fat appear as darker areas in OCT images. Non-healing wounds show prolonged inflammation and an inhibited nascent proliferative stage. Accumulation of necrotic tissue also prevents the repair of non-healing wounds. Owing to its high resolution and its potential to reflect the compositional aspects of tissues through their optical properties, this tomographic method may help differentiate non-healing from acute wounds, complementing clinical observations. Non-invasive OCT offers better insight into the specific biological status of tissue in health and pathological conditions; OCT images can be correlated with the histo-pathological ‘gold standard’. This correlated SS-OCT and microscopic evaluation of the wound edges can provide information regarding progressive healing and maturation of the epithelial components. In searching for an analogy between the two imaging modalities, their relative performance in imaging the healing bed was assessed to probe an alternative approach. The present study validated the utility of SS-OCT in revealing micro-anatomic structure in the healing bed and provided new information. Exploring the precise correspondence of OCT image features with histo-chemical findings related to the epithelial integrity of regenerated tissue could have great implications: it could establish the ‘optical biopsy’ as a potent non-invasive diagnostic tool for cutaneous pathology.Keywords: histo-pathology, non invasive imaging, OCT, wound healing
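The way an SS-OCT depth profile (A-scan) emerges from interference can be illustrated with a short simulation: the fringes recorded as the source sweeps in wavenumber are Fourier transformed, and reflectors appear as peaks at their depths. The reflector depths, sweep range, and noise level below are simulated assumptions, not data from this study.

```python
# A minimal sketch of swept-source OCT processing: the spectral interferogram
# (fringes vs. wavenumber k) is Fourier transformed to recover reflector depths.
# The sweep range, reflector depths, and noise are simulated assumptions.
import numpy as np

n_samples = 2048
k = np.linspace(7.5e6, 8.1e6, n_samples)        # swept wavenumber k = 2*pi/lambda [1/m], assumed
depths_m = [0.2e-3, 0.6e-3, 1.1e-3]             # hypothetical reflectors (e.g. tissue layers)
reflectivity = [1.0, 0.6, 0.3]

# Spectral interferogram: one cosine per reflector, with fringe phase 2*k*z.
fringes = sum(r * np.cos(2 * k * z) for r, z in zip(reflectivity, depths_m))
fringes = fringes + np.random.default_rng(1).normal(0, 0.05, n_samples)   # detector noise

a_scan = np.abs(np.fft.rfft(fringes * np.hanning(n_samples)))             # depth profile
dk = k[1] - k[0]
depth_axis = np.fft.rfftfreq(n_samples, d=dk) * np.pi                     # z = pi * (cycles per unit k)

peaks = [i for i in range(1, len(a_scan) - 1)
         if a_scan[i] > max(a_scan[i - 1], a_scan[i + 1]) and a_scan[i] > 50]
for i in peaks:
    print(f"reflector near {depth_axis[i] * 1e3:.2f} mm")
```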
Procedia PDF Downloads 2798 Machine Learning Based Digitalization of Validated Traditional Cognitive Tests and Their Integration to Multi-User Digital Support System for Alzheimer’s Patients
Authors: Ramazan Bakir, Gizem Kayar
Abstract:
It is known that Alzheimer's disease and dementia are the two most common types of neurodegenerative disease, and their visibility has been increasing over the last couple of years. As populations age all over the world, researchers expect this trend to accelerate further. Unfortunately, there is no known pharmacological cure for either, although some drugs help to slow the rate of cognitive decline. This is why non-pharmacological treatment and tracking methods have received more attention over the last five years. As dementia symptoms related to cognition, learning, memory, speaking, problem-solving, social abilities and daily activities gradually worsen over the years, many researchers agree that cognitive support should start from the very beginning of the symptoms in order to slow down the decline. At this point, the lives of patients and caregivers can be improved with daily activities and applications. These activities include, but are not limited to, basic word puzzles, daily cleaning activities, and note-taking. These activities and their results should then be observed carefully, which is currently only possible during in-person meetings between patients/caregivers and medical doctors in hospitals. These meetings can be quite time-consuming, exhausting and financially ineffective for hospitals, medical doctors, caregivers and, especially, patients. On the other hand, digital support systems are showing positive results for all stakeholders of healthcare systems. This can be observed in countries that have introduced telemedicine systems. The biggest potential of our system lies in setting up inter-user communication in the best possible way. In our project, we propose machine learning based digitalization of validated traditional cognitive tests (e.g., MoCA, Afazi, left-right hemisphere tests), their analysis for high-quality follow-up, and communication systems for all stakeholders. This platform has high potential not only for patient tracking but also for making all stakeholders feel safe through all stages. As registered hospitals assign corresponding medical doctors to the system, these MDs are able to register their own patients and assign specific tasks to each patient. With our integrated machine learning support, MDs are able to track the failure and success rates of each patient and also see general averages among similarly progressed patients. In addition, our platform supports multi-player technology, which allows patients to play together with their caregivers so that they feel much safer whenever they are uncomfortable. By also gamifying daily household activities, patients will be able to repeat their social tasks, and we will provide non-pharmacological reminiscence therapy (RT – life review therapy). All collected data will be mined by our data scientists and analyzed meaningfully. In addition, we will add gamification modules for caregivers based on Naomi Feil's Validation Therapy. Both behaving positively toward the patient and staying mentally healthy themselves are important for caregivers, and we aim to provide a gamification-based therapy system for them, too.
When this project accomplishes all of the above tasks, patients will have the chance to complete many tasks remotely at home, and MDs will be able to follow them up very effectively. We propose a complete platform, and the whole project is both time- and cost-effective in supporting all stakeholders.Keywords: alzheimer’s, dementia, cognitive functionality, cognitive tests, serious games, machine learning, artificial intelligence, digitalization, non-pharmacological, data analysis, telemedicine, e-health, health-tech, gamification
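A toy example of the follow-up analytics described above – per-patient score trends compared against cohort averages for the digitalized tests – is sketched below; the record schema, test names, and scores are hypothetical, not the platform's actual data model or API.

```python
# A minimal sketch of aggregating digitalized cognitive-test results per patient
# and comparing them with cohort averages. All records and names are hypothetical.
from collections import defaultdict
from statistics import mean

# (patient_id, test_name, score out of 100) -- toy records a clinician might review
records = [
    ("p01", "word_puzzle", 82), ("p01", "word_puzzle", 78), ("p01", "left_right", 64),
    ("p02", "word_puzzle", 55), ("p02", "left_right", 70), ("p02", "word_puzzle", 49),
]

per_patient = defaultdict(lambda: defaultdict(list))
for pid, test, score in records:
    per_patient[pid][test].append(score)

cohort = defaultdict(list)
for pid, tests in per_patient.items():
    for test, scores in tests.items():
        cohort[test].extend(scores)

for pid, tests in sorted(per_patient.items()):
    for test, scores in sorted(tests.items()):
        trend = scores[-1] - scores[0]                 # crude decline/improvement indicator
        print(f"{pid} {test}: mean={mean(scores):.1f} trend={trend:+d} "
              f"cohort mean={mean(cohort[test]):.1f}")
```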
Procedia PDF Downloads 1387 Restoring Total Form and Function in Patients with Lower Limb Bony Defects Utilizing Patient-Specific Fused Deposition Modelling- A Neoteric Multidisciplinary Reconstructive Approach
Authors: Divya SY. Ang, Mark B. Tan, Nicholas EM. Yeo, Siti RB. Sudirman, Khong Yik Chew
Abstract:
Introduction: The importance of amalgamating technological and engineering advances with the surgical principles of reconstruction cannot be overemphasized. Earlier detection of cancer, together with the consequences of high-speed living and neglect, such as traumatic injuries and infection, is resulting in increasingly younger patients with bone defects. These may result in malformations and suboptimal function that are more noticeable and palpable in a younger, active demographic. Our team proposes a technique that encapsulates a mesh of multidisciplinary effort, tissue engineering, and reconstructive principles. Methods/Materials: Our patient was a young competitive footballer in his early 30s who was diagnosed with submandibular adenoid cystic carcinoma with bony involvement. He was counselled for a right hemimandibulectomy, floor-of-mouth resection, right selective neck dissection, tracheostomy, and free fibular flap reconstruction of his mandible, and required post-operative radiotherapy. Being young and in his prime sporting years, he was unable to accept the morbidities associated with using his fibula to reconstruct his mandible, despite it being the gold-standard reconstructive option. The fibula is an ideal vascularized bone flap because it is reliable and easily shaped, with relatively minimal impact on functional outcomes. The fibula contributes around 30% of weight-bearing and is the attachment for the lateral compartment muscles; in footballers, it is stronger with respect to lateral bending. When harvesting the fibula, the distal 6-8 cm, up to 10% of the total length, is preserved to maintain the ankle's stability, thus minimizing the impact on daily activities. Studies have nevertheless noted gait variability post-operatively; therefore, a return to the premorbid competitive level may be doubtful. To improve his functional outcomes, the decision was made to try to restore the fibula's form and function. Using the concept of Fused Deposition Modelling (FDM), our team, comprising Plastics, Otolaryngology, Orthopaedics and Radiology, worked with Osteopore to design a 3D-printed bioresorbable implant to regenerate the fibula defect (14.5 cm). Bone marrow was harvested by reaming the contralateral hip prior to the wide resection, and 30 ml of the patient's blood was obtained to extract platelet-rich plasma. These were packed into the Osteopore 3D-printed bone scaffold, which was then secured into the fibula defect with titanium plates and screws. The flexor hallucis longus and soleus were anchored along the construct and the interosseous membrane, all in a single setting. Results: He was reviewed closely as an outpatient over 10 months post-operatively. He reported no discernible loss or difference in ankle function. He is satisfied and back in training, and our team has video and photographs that substantiate his progress. Conclusion: FDM allows regeneration of long bone defects. However, we aimed also to restore his eversion and inversion, which are imperative for footballers, and hence reattached the previously dissected muscles along the length of the Osteopore implant. We believe that reattaching the muscles not only stabilizes the construct but also allows optimal muscle tensioning when moving the ankle. This is a simple but effective technique for restoring complete form and function in a young patient whose fine muscle control is imperative to his livelihood.Keywords: fused deposition modelling, functional reconstruction, lower limb bony defects, regenerative surgery, 3D printing, tissue engineering
Procedia PDF Downloads 73