Search results for: product characteristics
367 Collaborative Program Student Community Service as a New Approach for Development in Rural Area in Case of Western Java
Authors: Brian Yulianto, Syachrial, Saeful Aziz, Anggita Clara Shinta
Abstract:
Indonesia, with a population of about two hundred and fifty million people, possesses outstanding wealth in human resources. Hundreds of millions of people are scattered across communities in the various regions of Indonesia, each with distinct economic and social characteristics and a unique culture. Broadly speaking, communities in Indonesia are divided into two classes, namely urban communities and rural communities. Rural communities are characterized by low potential and poor management of natural and human resources, limited access to development, a lack of social and economic infrastructure, and scattered, isolated populations. West Java is one of the most populous provinces in Indonesia: based on data from the Central Bureau of Statistics, in 2015 the population of West Java reached 46.7096 million people spread over 18 districts and 9 cities. Large differences in the geographical and social conditions of people in West Java from one region to another, especially between the south and the north, have caused a high degree of inequality, which is closely related to the flow of investment that promotes each area. Poverty and underdevelopment are classic problems occurring on a massive scale in the region as effects of inequity in development. South Cianjur and southern Tasikmalaya are among the areas whose existing potential has not been capable of prospering their societies. The Tri Dharma of Higher Education not only defines colleges as pioneers in implementing education and research to improve the quality of human resources but also demands that they pioneer development through the concept of public service. Bandung Institute of Technology, as one such institution of higher education, implements its community service system through the collaborative community work program "one university, one community" as an approach to developing villages. 
The program is based on community service: students are not only required to take part in community service but must also be able to develop a comprehensive and integrated community development strategy in cooperation with related government and non-government agencies, as a concrete effort to align the potential, positions, and roles of the various parties. Areas of western Java in particular have high poverty rates and disparities. There are three fundamental pillars in the development of rural communities, namely economic development, community development, and integrated infrastructure development. These pillars require the commitment of all components of the community, including students and colleges, to succeed. A college community program is one approach to the development of rural communities. ITB is committed to implementing student community service as a community-college program that integrates all elements of the community, called Kuliah Kerja Nyata-Thematic.
Keywords: development in rural area, collaborative, student community service, Kuliah Kerja Nyata-Thematic ITB
Procedia PDF Downloads 222
366 Current Applications of Artificial Intelligence (AI) in Chest Radiology
Authors: Angelis P. Barlampas
Abstract:
Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging findings or procedure details: AI in chest radiology detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from nontension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis, and pneumoperitoneum. It automatically localises vertebral segments, labels ribs, and detects rib fractures. It measures the distance from the tube tip to the carina and localises both endotracheal tubes and central vascular lines. It detects consolidation and progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping, and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region, and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation. 
It provides percentages of tissues within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, as well as hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is assumed that the continuing progress of computerized systems and improvements in software algorithms will render AI the radiologist's second pair of hands.
Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses
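The density-based evaluation mentioned in this abstract (percentages of tissue within defined HU ranges) reduces to counting voxels per range. The sketch below is illustrative only: the function name, the toy voxel values, and the use of -950 HU as an emphysema-like threshold are assumptions for demonstration, not details taken from the abstract.

```python
def attenuation_percentages(hu_values, ranges):
    """Percentage of voxels falling inside each half-open (low, high) HU range."""
    total = len(hu_values)
    return {r: 100.0 * sum(1 for v in hu_values if r[0] <= v < r[1]) / total
            for r in ranges}

# Toy 'lung' volume: half normal parenchyma (-850 HU), half very low attenuation (-970 HU).
volume = [-850] * 50 + [-970] * 50
result = attenuation_percentages(volume, [(-1000, -950), (-950, -700)])
```

A real system would run this over a segmented 3D CT volume rather than a flat list, but the counting logic is the same.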
Procedia PDF Downloads 72
365 Concentrations of Leptin, C-Peptide and Insulin in Cord Blood as Fetal Origins of Insulin Resistance and Their Effect on the Birth Weight of the Newborn
Authors: R. P. Hewawasam, M. H. A. D. de Silva, M. A. G. Iresha
Abstract:
Obesity is associated with an increased risk of developing insulin resistance. Insulin resistance often progresses to type 2 diabetes mellitus and is linked to a wide variety of other pathophysiological features, including hypertension, hyperlipidemia, atherosclerosis (metabolic syndrome), and polycystic ovarian syndrome. Macrosomia is common in infants born not only to women with gestational diabetes mellitus but also to non-diabetic obese women. During the past two decades, obesity in children and adolescents has risen significantly in Asian populations, including Sri Lanka. There is increasing evidence that infants who are born large for gestational age (LGA) are more likely to be obese in childhood. It is also established from previous studies that Asian populations have a higher percentage of body fat at a lower body mass index than Caucasians. High leptin levels in cord blood have been reported to correlate with fetal adiposity at birth. Previous studies have also shown that cord blood C-peptide and insulin levels are significantly and positively correlated with birth weight. Therefore, the objective of this preliminary study was to determine the relationship between parameters of fetal insulin resistance, such as leptin, C-peptide, and insulin, and the birth weight of the newborn in a study population in Southern Sri Lanka. Umbilical cord blood was collected from 90 newborns, and the concentrations of insulin, leptin, and C-peptide were measured by ELISA. Birth weight, length, and occipital-frontal, chest, hip, and calf circumferences of the newborns were measured, and characteristics of the mother, such as age, height, weight before pregnancy, and weight gain, were collected. The relationships between insulin, leptin, C-peptide, and anthropometrics were assessed by Pearson's correlation, while the Mann-Whitney U test was used to assess differences in cord blood leptin, C-peptide, and insulin levels between groups. 
A significant difference (p < 0.001) was observed between the insulin levels of infants born LGA (18.73 ± 0.64 µIU/ml) and AGA (13.08 ± 0.43 µIU/ml). Consistently, a significant increase in concentration (p < 0.001) was observed in the C-peptide levels of infants born LGA (9.32 ± 0.77 ng/ml) compared to AGA (5.44 ± 0.19 ng/ml). The cord blood leptin concentration of LGA infants (12.67 ± 1.62 ng/ml) was significantly higher (p < 0.001) than that of AGA infants (7.10 ± 0.97 ng/ml). Significant positive correlations (p < 0.05) were observed between cord leptin levels and birth weight, pre-pregnancy maternal weight, and BMI in both AGA and LGA infants. Consistently, a significant positive correlation (p < 0.05) was observed between birth weight and C-peptide concentration. The significantly high concentrations of leptin, C-peptide, and insulin in the cord blood of LGA infants suggest that they may be involved in regulating fetal growth. Although previous studies suggest comparatively high levels of body fat in Asian populations, the values obtained in this study are not significantly different from values previously reported in Caucasian populations. According to this preliminary study, maternal pre-pregnancy BMI and weight may serve as significant indicators of cord blood parameters of insulin resistance and possibly of the birth weight of the newborn.
Keywords: large for gestational age, leptin, C-peptide, insulin
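The correlation analysis described in this abstract amounts to computing Pearson's r between cord-blood parameters and anthropometric measures. A minimal sketch with invented toy values (not study data) follows; the study itself used statistical software, so this only illustrates the computation.

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data (illustrative only): birth weight (kg) vs cord C-peptide (ng/ml),
# constructed so the two move together, as the abstract reports.
weight = [2.8, 3.1, 3.4, 3.9, 4.2]
cpeptide = [4.9, 5.2, 5.6, 8.1, 9.4]
r = pearson_r(weight, cpeptide)
```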
Procedia PDF Downloads 157
364 Methodology for the Determination of Triterpenic Compounds in Apple Extracts
Authors: Mindaugas Liaudanskas, Darius Kviklys, Kristina Zymonė, Raimondas Raudonis, Jonas Viškelis, Norbertas Uselis, Pranas Viškelis, Valdimaras Janulis
Abstract:
Apples are among the most commonly consumed fruits in the world. Based on data from the year 2014, approximately 84.63 million tons of apples are grown per annum. Apples are widely used in the food industry to produce various products and drinks (juice, wine, and cider); they are also consumed unprocessed. Apples in the human diet are an important source of different groups of biologically active compounds that can positively contribute to the prevention of various diseases. They are a source of various biologically active substances, especially vitamins, organic acids, micro- and macro-elements, pectins, and phenolic, triterpenic, and other compounds. Triterpenic compounds, which are characterized by versatile biological activity, are among the most promising and most significant for human health of the biologically active compounds found in apples. A specific analytical procedure, including sample preparation and High Performance Liquid Chromatography (HPLC) analysis, was developed, optimized, and validated for the detection of triterpenic compounds in samples of whole apples, their peels, and flesh from the widespread apple cultivars 'Aldas', 'Auksis', 'Connel Red', 'Ligol', 'Lodel', and 'Rajka' grown under Lithuanian climatic conditions. The conditions for triterpenic compound extraction were optimized: the extraction solvent was 100% (v/v) acetone, and the extraction was performed in an ultrasound bath for 10 min. Isocratic elution (with an eluent ratio of 88% solvent A and 12% solvent B) was performed for rapid separation of the triterpenic compounds. Validation of the methodology was performed on the basis of the ICH recommendations. The following validation characteristics were evaluated: the selectivity (specificity) of the method, precision, the detection and quantitation limits of the analytes, and linearity. The obtained parameter values confirm the suitability of the methodology for the analysis of triterpenic compounds. 
Using the optimized and validated HPLC technique, four triterpenic compounds were separated and identified, and their specificity was confirmed: corosolic acid, betulinic acid, oleanolic acid, and ursolic acid. Ursolic acid was the dominant compound in all the tested apple samples, while the detected amount of betulinic acid was the lowest of all the identified triterpenic compounds. The greatest amounts of triterpenic compounds were detected in the whole-apple and apple-peel samples of the 'Lodel' cultivar; thus, apples and apple extracts of this cultivar are potentially valuable for use in medical practice, for the prevention of various diseases, for adjunct therapy, for the isolation of individual compounds with a specific biological effect, and for the development and production of dietary supplements and functional food enriched in biologically active compounds. Acknowledgements: This work was supported by a grant from the Research Council of Lithuania, project No. MIP-17-8.
Keywords: apples, HPLC, triterpenic compounds, validation
Procedia PDF Downloads 173
363 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others; it is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate a human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories (animals, sports, food, landscapes, and vehicles) along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered for quantifying the reconstruction error. The results indicate strong correlations between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
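The two computational measures in this abstract (reconstruction error and nearest-neighbor distinctiveness in latent space) reduce to simple formulas. The schematic sketch below uses plain vectors in place of real images and autoencoder outputs; all names and values are illustrative assumptions, not the study's pipeline.

```python
import math

def reconstruction_error(original, reconstructed):
    """Mean squared error between an image and its autoencoder reconstruction."""
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

def distinctiveness(latent, latent_set):
    """Euclidean distance from one latent representation to its nearest neighbor."""
    return min(math.dist(latent, other) for other in latent_set if other != latent)

# Toy latent space: two nearby 'typical' images and one isolated 'distinctive' one.
latents = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0)]
d_outlier = distinctiveness((5.0, 5.0), latents)
d_typical = distinctiveness((0.0, 0.0), latents)
```

Per the abstract's finding, the isolated point (larger nearest-neighbor distance) would be the more memorable image.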
Procedia PDF Downloads 90
362 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, many geometries remain to be tried that cannot be accommodated by predefined computational methods. Antenna design and optimization lend themselves to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but antenna parameters and geometries are too varied to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc. are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS is chosen for simulations and results. 
MATLAB is used to generate the computations, combinations, and data logging; it is also used to apply machine learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to test manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and MATLAB's parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric parameters; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying an evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization of the Vivaldi antenna.
Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
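The selection-recombination-mutation loop described in this abstract can be sketched generically. The sketch below is a minimal stand-in: a one-dimensional quality function with a known maximum replaces the HFSS-derived antenna fitness, and all names and numeric settings are illustrative assumptions, not the authors' MATLAB/HFSS implementation.

```python
import random

random.seed(0)  # reproducible demo run

def fitness(candidate):
    # Stand-in quality function with a known maximum at x = 3.0; in the real
    # workflow this value would come from an HFSS simulation of the antenna.
    return -(candidate - 3.0) ** 2

def evolve(generations=50, pop_size=20):
    # Randomly create a set of candidate solutions from the function's domain.
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population seeds the next generation.
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        # Recombination (averaging crossover) plus Gaussian mutation.
        population = [
            (random.choice(parents) + random.choice(parents)) / 2 + random.gauss(0, 0.1)
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

best = evolve()
```

In the paper's setting, each fitness evaluation is an expensive simulation, which is exactly why the authors parallelize runs and log results for reuse.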
Procedia PDF Downloads 110
361 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model
Authors: Danjuma Bawa
Abstract:
This paper aimed at exploring the capabilities of the location-allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. The paper was designed to provide a blueprint for the Nigerian government and other donor agencies, especially the federal government's Fertilizer Distribution Initiative (FDI), for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcased how the location-allocation model (L-AM), alongside central place theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, exploit their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research that largely used secondary data such as the spatial location and distribution of settlements, their population figures, the network of roads linking them, and other landform features; these were sourced from government ministries and an open-source consortium. GIS was used as a tool for processing and analyzing such spatial features within the dicta of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could serve as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap. The ArcGIS Network Analyst was used to conduct location-allocation analysis for apportioning groups of settlements around such service centers within a given threshold distance. 
Most of the techniques and models ever used by utility planners have centered on straight-line (Euclidean) distances to settlements. Such models neglect impedance cutoffs and the routing capabilities of networks; CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four (4) existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the giant strides of the federal government of Nigeria by providing a blueprint for ensuring the proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. It will at the same time help boost agricultural activities, thereby lowering food shortages and raising per capita income, as espoused by the government.
Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics
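The allocation step described in this abstract can be sketched in simplified form: each settlement is assigned to its nearest service center, but only within an impedance cutoff. The sketch below uses straight-line distances and invented coordinates purely for illustration, whereas the study used road-network distances in ArcGIS Network Analyst.

```python
import math

def allocate(settlements, centers, cutoff_km=2.0):
    """Assign each settlement to its nearest center, or None if beyond the cutoff."""
    allocation = {}
    for name, loc in settlements.items():
        center, dist = min(
            ((c, math.dist(loc, cloc)) for c, cloc in centers.items()),
            key=lambda pair: pair[1],
        )
        allocation[name] = center if dist <= cutoff_km else None  # unserved
    return allocation

# Invented coordinates in km (not from the study area).
centers = {"depot_a": (0.0, 0.0), "depot_b": (5.0, 0.0)}
settlements = {"v1": (0.5, 0.5), "v2": (4.8, 0.3), "v3": (2.5, 3.0)}
result = allocate(settlements, centers)
```

Settlements left as None correspond to the roughly 20 of 300 settlements in the study that fell outside every center's 2 km cutoff.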
Procedia PDF Downloads 147
360 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer
Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu
Abstract:
Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer, with a poor prognosis, and radiotherapy is one of its main treatments. However, due to the marked resistance of tumor cells to radiotherapy, high doses of ionizing radiation are required, which cause serious damage to normal tissues near the tumor. Therefore, how to overcome radiotherapy resistance and enhance the specific killing of tumor cells by radiation is a pressing clinical issue. Recent studies have shown that silver-based nanoparticles have strong radiosensitizing activity, and silver nanoclusters (AgNCs) also offer broad prospects for tumor-targeted radiosensitization therapy due to their ultra-small size, low or absent toxicity, self-fluorescence, and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that can specifically bind to mucin-1, which is highly expressed on the membrane surface of TNBC 4T1 cells, and can be used as a highly efficient tumor-targeting molecule. In this study, AgNCs were synthesized on a DNA template based on the 5TR1 aptamer (NC-T5-5TR1), and their role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy, and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell activity was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and to verify its self-fluorescence. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was determined by inductively coupled plasma mass spectrometry. The radiosensitizing effect of NC-T5-5TR1 was evaluated through cell cloning and in vivo anti-tumor experiments. 
Annexin V-FITC/PI double-staining flow cytometry was utilized to detect the impact of the nanomaterials combined with radiotherapy on apoptosis. The results demonstrated that the particle size of NC-T5-5TR1 is about 2 nm; ultraviolet-visible absorption spectroscopy verified its successful construction, and it showed good dispersion. NC-T5-5TR1 significantly inhibited the activity of 4T1 cells and effectively targeted, and fluoresced within, 4T1 cells. Uptake of NC-T5-5TR1 in the tumor area reached its peak at 3 h. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiosensitization, and combined with radiotherapy it significantly inhibited the activity of 4T1 cells and tumor growth in 4T1 tumor-bearing mice. The level of apoptosis under NC-T5-5TR1 combined with radiation was significantly increased. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiosensitizer for TNBC.
Keywords: 5TR1 aptamer, silver nanoclusters, radiosensitization, triple-negative breast cancer
Procedia PDF Downloads 60
359 Characterization of Extra Virgin Olive Oil from Olive Cultivars Grown in Pothwar, Pakistan
Authors: Abida Mariam, Anwaar Ahmed, Asif Ahmad, Muhammad Sheeraz Ahmad, Muhammad Akram Khan, Muhammad Mazahir
Abstract:
The olive plant (Olea europaea L.) is known for its commercial significance due to its nutritional and health benefits. Pakistan ranks 4th among countries that import olive oil, and 70% of its edible oil is imported to fulfil the country's needs. There exists great potential for Olea europaea cultivation in Pakistan. The popularity and cultivation of the olive fruit have increased in the recent past due to its high socio-economic and health significance. Almost no data exist on the chemical composition of extra virgin olive oil extracted from cultivars grown in Pothwar, an area with an arid climate conducive to the growth of olive trees. Keeping these factors in view, a study was conducted to characterize the olive oil extracted from olive cultivars collected from the Pothwar region of Pakistan for their nutritional potential and value addition. Ten olive cultivars (Gemlik, Coratina, Sevillano, Manzanilla, Leccino, Koroneiki, Frantoio, Arbiquina, Earlik, and Ottobratica) were collected from the Barani Agriculture Research Institute, Chakwal. Extra virgin olive oil (EVOO) was extracted by cold pressing and centrifuging of olive fruits. The highest oil yield was obtained from Coratina (23.9%), followed by Frantoio (23.7%), Koroneiki (22.8%), Sevillano (22%), Ottobratica (22%), Leccino (20.5%), Arbiquina (19.2%), Manzanilla (17.2%), Earlik (14.4%), and Gemlik (13.1%). The extracted virgin olive oil was studied for various physico-chemical properties and its fatty acid profile. 
The physical and chemical properties, i.e., characteristic odor and taste, light yellow color with no foreign matter, insoluble impurities (≤ 0.08), free fatty acid (0.1 to 0.8), acidity (0.5 to 1.6 mg/g acid), peroxide value (1.5 to 5.2 meq O2/kg), iodine value (82 to 90), saponification value (186 to 192 mg/g), unsaponifiable matter (4 to 8 g/kg), and ultraviolet spectrophotometric analysis (K232 and K270), showed values within the acceptable ranges established by PSQCA and IOOC for extra virgin olive oil. The olive oil was analyzed by near-infrared (NIR) spectrophotometry for fatty acids, which were found to be palmitic, palmitoleic, stearic, oleic, linoleic, and alpha-linolenic acids. Oleic acid was the major fatty acid, present in the highest percentage (55 to 66.1%), followed by linoleic (10.4 to 20.4%), palmitic (13.8 to 19.5%), stearic (3.9 to 4.4%), palmitoleic (0.3 to 1.7%), and alpha-linolenic (0.9 to 1.7%) acids. Significant differences were found in the parameters analyzed across all ten cultivars, confirming that genetic factors are important contributors to the physico-chemical characteristics of the oil. The olive oil showed superior physical and chemical properties and is recommended as one of the healthiest forms of edible oil. This study will help consumers be more aware of, and make better choices among, the healthy oils available locally, thus contributing to their better health.
Keywords: characterization, extra virgin olive oil, oil yield, fatty acids
Procedia PDF Downloads 97
358 Risk Assessment Tools Applied to Deep Vein Thrombosis Patients Treated with Warfarin
Authors: Kylie Mueller, Nijole Bernaitis, Shailendra Anoopkumar-Dukie
Abstract:
Background: Vitamin K antagonists, particularly warfarin, are the most frequently used oral medications for deep vein thrombosis (DVT) treatment and prophylaxis. Time in therapeutic range (TITR) of the international normalised ratio (INR) is widely accepted as a measure of the quality of warfarin therapy. Multiple factors can affect warfarin control and subsequent adverse outcomes, including thromboembolic and bleeding events. Predictor models have been developed to assess potential contributing factors and measure an individual's risk of these adverse events. These predictive models have been validated in atrial fibrillation (AF) patients; however, there is a lack of literature on whether they can be successfully applied to other warfarin users, including DVT patients. Therefore, the aim of the study was to assess the ability of these risk models (HAS-BLED and CHADS2) to predict haemorrhagic and ischaemic incidences in DVT patients treated with warfarin. Methods: A retrospective analysis of DVT patients receiving warfarin management by a private pathology clinic was conducted. Data were collected from November 2007 to September 2014 and included demographics, medical and drug history, INR targets, and test results. Patients receiving continuous warfarin therapy with an INR reference range between 2.0 and 3.0 were included in the study, with mean TITR calculated using the Rosendaal method. Bleeding and thromboembolic events were recorded and reported as incidences per patient. The haemorrhagic risk model HAS-BLED and the ischaemic risk model CHADS2 were applied to the data, and patients were stratified into low, moderate, or high-risk categories. The analysis was conducted to determine whether a correlation existed between each risk assessment tool and patient outcomes. Data were analysed using GraphPad InStat version 3, with a p value of < 0.05 considered statistically significant. 
Patient characteristics were reported as mean and standard deviation for continuous data; categorical data were reported as number and percentage. Results: Of the 533 patients included in the study, there were 268 (50.2%) female and 265 (49.8%) male patients, with a mean age of 62.5 years (±16.4). The overall mean TITR was 78.3% (±12.7), with an overall haemorrhagic incidence of 0.41 events per patient. For the HAS-BLED model, there was a haemorrhagic incidence of 0.08, 0.53, and 0.54 per patient in the low, moderate, and high-risk categories respectively, showing a statistically significant increase in incidence with increasing risk category. The CHADS2 model showed an increase in ischaemic events according to risk category, with no ischaemic events in the low category, an ischaemic incidence of 0.03 in the moderate category, and 0.47 in the high-risk category. Conclusion: An increasing haemorrhagic incidence correlated with an increasing HAS-BLED risk score in DVT patients treated with warfarin. Furthermore, a greater incidence of ischaemic events occurred in patients with a higher CHADS2 category. In an Australian population of DVT patients, HAS-BLED and CHADS2 accurately predict incidences of haemorrhagic and ischaemic events respectively.
Keywords: anticoagulant agent, deep vein thrombosis, risk assessment, warfarin
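The Rosendaal method mentioned above assigns each day between two consecutive INR tests a linearly interpolated INR value and counts the fraction of days inside the reference range. A minimal sketch of that idea follows; the test dates and INR values are hypothetical, and the per-day discretisation is only one common way to implement the interpolation, not necessarily the clinic's exact procedure:

```python
from datetime import date

def rosendaal_ttr(tests, low=2.0, high=3.0):
    """Percent time in therapeutic range by Rosendaal linear interpolation.
    `tests` is a list of (date, INR) tuples sorted by date."""
    in_range_days = 0.0
    total_days = 0.0
    for (d0, inr0), (d1, inr1) in zip(tests, tests[1:]):
        span = (d1 - d0).days
        if span <= 0:
            continue
        total_days += span
        for day in range(span):
            # Interpolate the INR linearly across each day of the interval.
            inr = inr0 + (inr1 - inr0) * day / span
            if low <= inr <= high:
                in_range_days += 1
    return 100.0 * in_range_days / total_days if total_days else 0.0

# Hypothetical INR test series for one patient.
tests = [(date(2014, 1, 1), 1.75),
         (date(2014, 1, 11), 2.75),
         (date(2014, 1, 31), 3.35)]
print(round(rosendaal_ttr(tests), 1))  # → 53.3
```

A patient's mean TITR over the study period is then just this percentage computed over their full test history.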
Procedia PDF Downloads 263
357 Impact of Traffic Restrictions Due to COVID-19 on Emissions from Freight Transport in Mexico City
Authors: Oscar Nieto-Garzón, Angélica Lozano
Abstract:
In urban areas, on-road freight transportation creates several social and environmental externalities. It is therefore crucial that freight transport considers not only economic aspects, such as retailer distribution cost reduction and service improvement, but also environmental effects such as global CO2 and local emissions (e.g., particulate matter, NOX, CO) and noise. Inadequate infrastructure development, a high rate of urbanization, increasing motorization, and the lack of transportation planning are characteristics that urban areas in developing countries share. The Metropolitan Area of Mexico City (MAMC), the Metropolitan Area of São Paulo (MASP), and Bogota are three of the largest urban areas in Latin America where air pollution is often a problem associated with emissions from mobile sources. The effect of the lockdown due to COVID-19 was analyzed for these urban areas, comparing the same period (January to August) of the years 2016–2019 with 2020. A strong reduction in the concentration of primary criteria pollutants emitted by road traffic was observed at the beginning of 2020 and after the lockdown measures. The daily mean concentration of NOX decreased 40% in the MAMC, 34% in the MASP, and 62% in Bogota. Daily mean ozone levels increased after the lockdown measures in all three urban areas: 25% in the MAMC, 30% in the MASP, and 60% in Bogota. These changes in emission patterns from mobile sources drastically changed the ambient atmospheric concentrations of CO and NOX. The CO/NOX ratio in the morning hours is often used as an indicator of mobile-source emissions. In 2020, traffic from cars and light vehicles was significantly reduced due to the first lockdown, but buses and trucks had no restrictions. In theory, this implies a decrease in CO and NOX from cars and light vehicles, while NOX levels from trucks were maintained (or lowered due to the congestion reduction).
At rush hours, traffic was reduced between 50% and 75%, so trucks could reach higher speeds, which would reduce their emissions. By means of an emission model, it was found that an increase in average speed (75%) would reduce the emissions (CO, NOX, and PM) from diesel trucks by up to 30%. It was expected that the value of the CO/NOX ratio would change due to the lockdown restrictions. However, although there was a significant reduction in traffic, CO/NOX kept its trend, decreasing to 8-9 in 2020. Hence, traffic restrictions had no impact on the CO/NOX ratio, although they did reduce vehicle emissions of CO and NOX. Therefore, these emissions may not adequately represent the change in vehicle emission patterns, or this ratio may not be a good indicator of emissions generated by vehicles. A comparison of the theoretical data with those observed during the lockdown shows that the real NOX reduction was lower than the theoretical reduction. The reasons could be that there are other sources of NOX emissions, so NOX emissions attributed to diesel vehicles would be over-represented, or that CO emissions are underestimated. Further analysis needs to consider this ratio to evaluate the emission inventories and then to extend these results to the determination of emission control policies for non-mobile sources.
Keywords: COVID-19, emissions, freight transport, Latin American metropolises
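The morning CO/NOX indicator discussed above is straightforward to compute from monitored concentrations. A minimal sketch, with entirely hypothetical hourly values chosen only so the result lands in the 8-9 range the abstract reports for 2020:

```python
# Hypothetical hourly concentrations (ppb) over a morning rush window.
co_ppb  = {6: 900, 7: 1100, 8: 1300, 9: 1000}
nox_ppb = {6: 100, 7: 130,  8: 150,  9: 110}

def morning_co_nox_ratio(co, nox, hours=(6, 7, 8, 9)):
    """Mean CO/NOx ratio over the morning rush hours, the indicator
    the study uses for mobile-source emissions."""
    ratios = [co[h] / nox[h] for h in hours]
    return sum(ratios) / len(ratios)

print(round(morning_co_nox_ratio(co_ppb, nox_ppb), 1))  # → 8.8
```

Note that, as the abstract observes, this ratio can stay stable even when both CO and NOX fall, because it measures only their relative mix.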
Procedia PDF Downloads 136
356 Prosodic Transfer in Foreign Language Learning: A Phonetic Crosscheck of Intonation and F₀ Range between Italian and German Native and Non-Native Speakers
Authors: Violetta Cataldo, Renata Savy, Simona Sbranna
Abstract:
Background: Foreign Language Learning (FLL) is characterised by prosodic transfer phenomena regarding pitch accent placement, intonation patterns, and pitch range excursion from the learners' mother tongue to their Foreign Language (FL), which suggests that the gradual development of general linguistic competence in the FL does not imply an equally corresponding improvement of prosodic competence. Topic: The present study aims to monitor the development of the prosodic competence of learners of Italian and German throughout the FLL process. The primary object of this study is to investigate the intonational features and the f₀ range excursion of Italian and German from a cross-linguistic perspective; analyses of native speakers' productions point out the differences between this pair of languages and provide models for the Target Language (TL). A subsequent crosscheck compares the L2 productions in Italian and German by non-native speakers to the Target Language models, in order to verify the occurrence of prosodic interference phenomena, i.e., their type, degree, and modalities. Methodology: The subjects of the research are university students belonging to two groups: Italian native speakers learning German as an FL and German native speakers learning Italian as an FL. Both groups have been divided into three subgroups according to FL proficiency level (beginner, intermediate, advanced). The dataset consists of wh-questions placed in situational contexts and uttered in both the speakers' L1 and FL. Using a phonetic approach, the analyses considered three domains of the intonational contour (Initial Profile, Nuclear Accent, and Terminal Contour) and two dimensions of the f₀ range parameter (span and level), which provide a basis for comparison between L1 and L2 productions.
Findings: Results highlight a strong presence of prosodic transfer phenomena affecting the L2 productions of the majority of both Italian and German learners, irrespective of their FL proficiency level; the transfer concerns all three domains of the contour taken into account, although with different modalities and characteristics. The L2 productions of German learners show a pitch span compression in the domain of the Terminal Contour, relative to their L1, in the direction of the TL; furthermore, German learners tend to use lower pitch range values, in deviation from their L1, as their general linguistic competence in Italian improves. Results regarding pitch range span and level in the L2 productions of Italian learners are still in progress. At present, they show a tendency to expand the pitch span and to raise the pitch level, which also reveals a deviation from the L1, possibly in the direction of the German TL. Conclusion: Intonational features seem to be 'resistant' parameters to which learners appear not to be particularly sensitive. By contrast, learners show a certain sensitivity to FL pitch range dimensions. Clarifying which parameters are the most resistant and which the most sensitive when learning FL prosody could lay the groundwork for the development of prosodic training through which learners could finally acquire a clear and natural pronunciation and intonation.
Keywords: foreign language learning, German, Italian, L2 prosody, pitch range, transfer
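The two f₀ range dimensions analysed above, span and level, are commonly operationalised as a log-scaled excursion (in semitones) and a central tendency of the contour. A minimal sketch under that assumption; the f₀ contour below is hypothetical, and the study may define level differently (e.g. as a mean rather than a median):

```python
import math

def pitch_span_semitones(f0_values):
    """Pitch span: the max/min f0 excursion expressed in semitones (12*log2)."""
    lo, hi = min(f0_values), max(f0_values)
    return 12 * math.log2(hi / lo)

def pitch_level_hz(f0_values):
    """Pitch level: here taken as the median f0 of the utterance."""
    s = sorted(f0_values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Hypothetical f0 contour (Hz) of a wh-question.
f0 = [210, 260, 240, 300, 180, 150]
print(round(pitch_span_semitones(f0), 1), pitch_level_hz(f0))  # → 12.0 225.0
```

A span compression towards the TL would then show up as a smaller semitone value for the relevant contour domain in L2 than in L1.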
Procedia PDF Downloads 286
355 Platform Virtual for Joint Amplitude Measurement Based in MEMS
Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez
Abstract:
Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform, intended for joint amplitude monitoring and telerehabilitation processes, with an efficient compromise between cost and technical capability. The particularities of our platform offer high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic sectors, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or non-existent. This platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to deliver a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with adequate interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system presented a good fit to their respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; in fact, error appears only during direction inversion, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of direction in motion.
A damped response of the virtual platform can be seen in other images, where error analysis shows that at maximum amplitude an underestimation of amplitude is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility relations optimized. These particular characteristics, achieved by efficiently using the state of the art of accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis, and visualization, provide the capacity to offer good telerehabilitation services, reaching large and often marginalized populations where such technologies and specialists are not available but basic communication networks are accessible.
Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation
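The estimation errors reported above (RMSE 1 = 2.12° and RMSE 2 = 2.28°) follow the standard root-mean-square-error formula comparing the sensor estimate against the robotic-arm reference. A minimal sketch with hypothetical joint-angle samples:

```python
import math

def rmse(estimates, reference):
    """Root-mean-square error between estimated and reference joint angles."""
    assert len(estimates) == len(reference)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference))
                     / len(estimates))

# Hypothetical joint-angle samples (degrees): robotic-arm reference
# versus the inertial/magnetic sensor estimate.
reference = [0.0, 15.0, 30.0, 45.0, 60.0]
estimate  = [1.0, 14.0, 32.0, 44.0, 63.0]
print(round(rmse(estimate, reference), 2))  # → 1.79
```

Computed separately per joint or per trial, this yields the RMSE 1 and RMSE 2 figures quoted in the abstract.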
Procedia PDF Downloads 259
354 Cement Matrix Obtained with Recycled Aggregates and Micro/Nanosilica Admixtures
Authors: C. Mazilu, D. P. Georgescu, A. Apostu, R. Deju
Abstract:
Cement mortars and concretes are among the most used construction materials in the world, with global cement production expected to grow to approximately 5 billion tons by 2030. But cement is an energy-intensive material, the cement industry being responsible for about 7% of the world's CO2 emissions. Natural aggregates also represent exhaustible, non-renewable resources, which must be used efficiently. A way to reduce the negative impact on the environment is the use of additional hydraulically active materials as a partial substitute for cement in mortars and concretes, and/or the use of recycled concrete aggregates (RCA) for the recovery of construction waste, in accordance with EU Directive 2018/851. One of the most effective hydraulically active admixtures is microsilica and, more recently, with technological development on a nanometric scale, nanosilica. Studies carried out in recent years have shown that the introduction of SiO2 nanoparticles into a cement matrix improves its properties, even compared to microsilica. This is due to the very small size of the nanosilica particles (<100 nm) and their very large specific surface, which helps to accelerate cement hydration and acts as a nucleating agent to generate even more calcium silicate hydrate, which densifies and compacts the structure. Cementitious compositions containing recycled concrete aggregates generally present inferior properties compared to those obtained with natural aggregates. Depending on the degree of replacement of natural aggregate, the workability of mortars and concretes with RCA decreases, mechanical strengths decrease, and drying shrinkage increases; all of this is determined, in particular, by the old mortar attached to the original aggregate in the RCA, which makes its porosity high and means the mixture of components requires more water for preparation.
The present study aims to use micro- and nanosilica to increase the performance of mortars and concretes obtained with RCA. The research focused on two types of cementitious systems: a special mortar composition used for encapsulating Low Level radioactive Waste (LLW), and a structural concrete composition, class C30/37, with the combination of exposure classes XC4+XF1 and slump class S4. The mortar was made with 100% recycled aggregate, 0-5 mm sort, and in the case of the concrete, 30% recycled aggregate was used for the 4-8 mm and 8-16 mm sorts, according to EN 206, Annex E. The recycled aggregate was obtained from a concrete made specially for this study, which after 28 days was crushed with a Retsch jaw crusher and further separated by sieving into granulometric sorts. The partial replacement of cement was done progressively, in the case of the mortar composition, with microsilica (3, 6, 9, 12, 15% wt.), nanosilica (0.75, 1.5, 2.25% wt.), and mixtures of micro- and nanosilica. The optimal combination of silica, from the point of view of mechanical strength, was later also used in the concrete composition. For the chosen cementitious compositions, the influence of micro- and/or nanosilica on the properties in the fresh state (workability, rheological characteristics) and in the hardened state (mechanical strength, water absorption, freeze-thaw resistance, etc.) is highlighted.
Keywords: cement, recycled concrete aggregates, micro/nanosilica, durability
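The progressive cement replacement described above is plain mass arithmetic on the binder: each wt.% of micro- or nanosilica displaces the same mass of cement. A minimal sketch; the 400 kg/m³ binder content below is a hypothetical figure for illustration, not a value taken from the study:

```python
def blended_binder(total_binder_kg, micro_pct=0.0, nano_pct=0.0):
    """Split a binder mass into cement, microsilica, and nanosilica
    for a given wt.% replacement of cement (as in the study's series)."""
    micro = total_binder_kg * micro_pct / 100
    nano = total_binder_kg * nano_pct / 100
    cement = total_binder_kg - micro - nano
    return cement, micro, nano

# Hypothetical 400 kg/m3 binder with 6% microsilica + 1.5% nanosilica.
print(blended_binder(400, micro_pct=6, nano_pct=1.5))  # → (370.0, 24.0, 6.0)
```

Sweeping `micro_pct` and `nano_pct` over the study's series (3-15% and 0.75-2.25% wt.) generates each mix's constituent masses.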
Procedia PDF Downloads 68
353 The City Narrated from the Hill, Evaluation of Natural Fabric in Urban Plans: A Case Study of Santiago de Chile
Authors: Monica Sanchez
Abstract:
What responsibility does urban planning bear for climate change? How does the territory give us answers of resilience? Historically, urban plans have civilized territories: waters are channeled, grounds are sealed, foreign species are incorporated, native ones are extinguished, and/or enclosed spaces are heated or cooled. Socially this facilitates coexistence, but in turn it brings negative environmental consequences. For the past fifty years, mankind has tried to redirect these consequences through different strategies, and research studies have produced strategies designed to alleviate climate change. Exploring the nature of territories has been incorporated into urban planning to discover nature's response. The case to be studied is Santiago, Chile, chosen for the combined impacts of climate change there and for the city's significant response on climate governance in recent decades. The warmer areas in Santiago are found in areas of high-density buildings, such as the commune of Recoleta, while the coldest are characterized by the predominance of low residential densities, as in the commune of Providencia. These two communes are separated and complemented by an undulating body that comes from the Andes mountains, called San Cristobal Hill. What if the hill had been taken into account when laying out roads, zoning, and buildings? Was it difficult to extend the hill's characteristics into the city's urban plans and resolve the intersection with other natural areas? Apparently it was, because the projected profile shows that the planned strategies correspond to the same operations used in the flat areas of Santiago. This research focuses on: explaining the geographic relationships between city and hill; explaining the planning process around the hill with a morphological analysis; evaluating how the hill has been considered in the city plans that intended to cushion environmental impacts; and studying what is missing on the hill and in the city to strengthen their integration.
Therefore, the research works at different scales of understanding: a territorial scale, understanding the vegetation, topography, and hydrology; a city scale, analyzing how Santiago's urban plans have dealt with the environment and the city; and a local scale, studying the integration, public spaces, and coverage norms of the adjacent communes. The expected outcome is to decipher the possible deficits and capabilities of the current urban plans with respect to climate change. It is anticipated that the hill and the valley are now trying to reconcile after such a long separation. Yet it seems that the rules of nature will never fully prevail, only urban rules will. The plans will require pruning, irrigation, control of invasive alien species, and public safety standards, but they will reintroduce a dose of nature into the built environment, which will protect us better than in the times when we feared nature and knew little about it. Today we know a little more, enough to adapt to the process. Although nature often goes unperceived and ignored, it has a remarkable ability to respond.
Keywords: resilience, climate change, urban plans, land use, hills and cities, heat islands, morphology
Procedia PDF Downloads 367
352 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models
Authors: Haya Salah, Srinivas Sharan
Abstract:
Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study has been obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. In addition, publicly available information on doctors' characteristics, such as gender and experience, has been extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, and gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance.
The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current experience-based appointment duration estimation adopted by the clinics resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration: patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time
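The MAPE figures compared above follow the usual mean-absolute-percentage-error definition, averaging the relative prediction error over all appointments. A minimal sketch with hypothetical consultation durations:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, the metric used to compare models."""
    assert len(actual) == len(predicted) and all(a != 0 for a in actual)
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical consultation durations (minutes): actual vs. model prediction.
actual    = [20, 30, 25, 40]
predicted = [22, 27, 25, 36]
print(round(mape(actual, predicted), 2))  # → 7.5
```

Evaluated on a held-out test set, this is the quantity for which the deep learning model scored 12.24% against the clinics' 25.8% baseline.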
Procedia PDF Downloads 121
351 Storage of Organic Carbon in Chemical Fractions in Acid Soil as Influenced by Different Liming
Authors: Ieva Jokubauskaite, Alvyra Slepetiene, Danute Karcauskiene, Inga Liaudanskiene, Kristina Amaleviciute
Abstract:
Soil organic carbon (SOC) is a key indicator of soil quality and ecological stability; carbon accumulation in stable forms not only supports and increases the organic matter content of the soil but also has a positive effect on the quality of the soil and the whole ecosystem. Soil liming is one of the most common ways to improve carbon sequestration in the soil. Determining the optimum intensity and combinations of liming in order to ensure optimal quantitative and qualitative carbon parameters is one of the most important tasks of this work. The field experiments were carried out at the Vezaiciai Branch of the Lithuanian Research Centre for Agriculture and Forestry (LRCAF) during the 2011–2013 period. The effect of liming at different intensities (at a rate of 0.5 every 7 years and 2.0 every 3–4 years) was investigated in the topsoil of an acid moraine loam Bathygleyic Dystric Glossic Retisol. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LRCAF. Soil samples for chemical analyses were taken from the topsoil after harvesting. SOC was determined by the Tyurin method modified by Nikitin, measuring with a Cary 50 (VARIAN) spectrometer at 590 nm wavelength using glucose standards. SOC fractional composition was determined by the Ponomareva and Plotnikova version of the classical Tyurin method. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph in water extract at a soil-water ratio of 1:5. Spectral properties (E4/E6 ratio) of humic acids were determined by measuring the absorbance of humic and fulvic acid solutions at 465 and 665 nm. Our study showed a statistically significant negative effect of periodic liming (at the 0.5 and 2.0 liming rates) on the SOC content of the soil. The SOC content was 1.45% in the unlimed treatment, while in the treatment limed periodically at the 2.0 rate every 3–4 years it was approximately 0.18 percentage points lower.
It was revealed that liming significantly decreased the DOC concentration in the soil. The lowest concentration of DOC (0.156 g kg-1) was established in the most intensively limed treatment (2.0 liming rate every 3–4 years). Soil liming increased the content of all humic acid fractions and of the fulvic acid fraction bound with calcium in the topsoil, and resulted in the accumulation of valuable humic acids. Due to the applied liming, the HR/FR ratio, which indicates the quality of humus, increased to 1.08, compared with 0.81 in unlimed soil. Intensive soil liming promoted the formation of humic acids in which groups of carboxylic and phenolic compounds predominated. These humic acids are characterized by a higher degree of condensation of aromatic compounds and in this way determine the intensive organic matter humification processes in the soil. The results of this research provide clear information on the characteristics of SOC change, which could be very useful for guiding climate policy and sustainable soil management.
Keywords: acid soil, carbon sequestration, long-term liming, soil organic carbon
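The E4/E6 ratio used above is simply the absorbance at 465 nm divided by the absorbance at 665 nm; lower values are commonly read as a higher degree of condensation of aromatic structures. A minimal sketch with hypothetical absorbance readings:

```python
def e4_e6_ratio(abs_465_nm, abs_665_nm):
    """E4/E6 ratio of a humic-acid solution: absorbance at 465 nm
    divided by absorbance at 665 nm."""
    return abs_465_nm / abs_665_nm

# Hypothetical absorbance readings for a limed-soil humic acid extract.
print(round(e4_e6_ratio(0.48, 0.12), 2))  # → 4.0
```

Comparing this ratio across liming treatments is what lets the study relate liming intensity to the aromatic condensation of the humic acids formed.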
Procedia PDF Downloads 229
350 Open Science Philosophy, Research and Innovation
Authors: C. Ardil
Abstract:
Open Science translates the understanding and application of various theories and practices in open science philosophy, systems, paradigms, and epistemology. Open Science originates with the premise that universal scientific knowledge is a product of collective scholarly and social collaboration involving all stakeholders, and that knowledge belongs to global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact, and benefits of science and to accelerate the advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society, and responsive to societal challenges; it can also enable growth and innovation through the reuse of scientific results by all stakeholders at all levels of society, and ultimately contribute to the growth and competitiveness of global society. Open Science is a global movement to improve the accessibility and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes, and other research processes are freely available, under terms that enable the reuse, redistribution, and reproduction of the research and its underlying data and methods.
Open Science represents a novel systematic approach to the scientific process, shifting from the standard practice of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage of the research process, based on cooperative work and on diffusing scholarly knowledge without barriers and restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research (publications and research data) publicly accessible in digital format with no limitations. It is about extending the principles of openness to the whole research cycle, fostering sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research are done. Open Science is the ongoing transition in how research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative, and closer to society. It involves various movements aiming to remove the barriers to sharing any kind of output, resource, method, or tool at any stage of the research process. Open Science embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, and research crowdfunding. The recognition and adoption of open science practices, including open science policies that increase open access to scientific literature and encourage data and code sharing, is increasing. Revolutionary open science policies are motivated by ethical, moral, or utilitarian arguments, such as the right to access digital research literature, open source research, science data accumulation, research indicators, transparency in academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices.
Researchers use open science applications to their own advantage in order to gain more offers, increase citations, and attract media attention, potential collaborators, career opportunities, donations, and funding. In open science philosophy, open data findings are evidence that open science practices provide significant benefits to researchers in scientific research creation, collaboration, communication, and evaluation compared to more traditional closed science practices. Open science also raises concerns, such as the rigor of peer review, practical matters such as financing and career development, and the sacrifice of author rights. Therefore, researchers are recommended to implement open science research within the framework of existing academic evaluation and incentives. As a result, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions.
Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data
Procedia PDF Downloads 131
349 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets
Authors: Basiru Amuneni
Abstract:
Astronomy is one domain with a rising volume of data. Traditional tools for data management have been employed in the quest for knowledge discovery. However, these traditional tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is the use of scientific visualisation. The aim of this work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualize multiwavelength datasets. Game engines are primarily used for developing video games; however, their advanced graphics could be exploited for scientific visualization, which provides a means to graphically illustrate scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by the telescope several times at different electromagnetic wavelengths to give a more comprehensive picture of its physical characteristics. Visualising this in an immersive environment would be more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity game engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails the act of 'using design as a research method or technique'. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation, and VR display. The FITS data format cannot be read by the Unity game engine directly, so a DLL (CSHARPFITS) that provides native support for reading and writing FITS files was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity 3D game engine.
The cleaned FITS images are then input to the galaxy modeller pipeline phase, which has a pre-processing script that extracts pixel values, computes galaxy world positions, and colour-maps the FITS image pixels. The user can visualise galaxy images in different light bands, control the blend of an image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools to realise complex workflows for public outreach, and possibly scientific work, with increased scalability, near-real-time interactivity, and ease of access. The application is presented in an immersive environment and can use any commercially available headset built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in/out, and change colour gradients of the galaxy. The findings and design lessons learnt in the implementation of different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.
Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality
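Before FITS pixels can drive a game-engine texture or vertex colour, their raw values must be scaled into [0, 1]. The abstract does not spell out its mapping, so the linear min-max scaling below is only an illustrative assumption, and Python is used here for brevity even though the pipeline itself is C# inside Unity:

```python
def normalize_pixels(pixels):
    """Linearly scale raw FITS pixel values into [0, 1], the form a
    game-engine texture or vertex colour channel expects."""
    lo, hi = min(pixels), max(pixels)
    span = (hi - lo) or 1.0   # guard against a constant image
    return [(p - lo) / span for p in pixels]

# Hypothetical raw pixel values from one FITS image row.
raw = [10.0, 20.0, 15.0, 30.0]
print(normalize_pixels(raw))  # → [0.0, 0.5, 0.25, 1.0]
```

A production pipeline would typically also clip outliers (e.g. hot pixels) and apply a non-linear stretch before mapping to a colour gradient, but the normalisation step is the core of the colour-mapping stage described above.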
Procedia PDF Downloads 91
348 Social Economic Factors Associated with the Nutritional Status of Children in Western Uganda
Authors: Baguma Daniel Kajura
Abstract:
The study explores socio-economic, health-related, and individual factors that influence the breastfeeding habits of mothers and their effect on the nutritional status of their infants in the Rwenzori region of Western Uganda. A cross-sectional research design was adopted, involving self-administered questionnaires, interview guides, and focused group discussion guides to assess the extent to which socio-demographic factors associated with breastfeeding practices influence child malnutrition. Using this design, data were collected from 276 of the selected 318 mother-infant pairs over a period of ten days. The sample size was estimated using the Kish Leslie formula for cross-sectional studies, N = Zα² × P(1 − P) / δ², where N is the sample size estimate of mother-infant pairs; P is the assumed true population prevalence of malnutrition among mother-infant pairs, P = 29.3%; 1 − P is the probability of a mother-infant pair not having malnutrition, 1 − P = 70.7%; Zα is the standard normal deviate at the 95% confidence interval, corresponding to 1.96; and δ is the absolute error between the estimated and true population prevalence of malnutrition, set at 5%. The calculated sample size was N = 1.96 × 1.96 × (0.293 × 0.707) / 0.05² = 318 mother-infant pairs. Demographic and socio-economic data for all mothers were entered into Microsoft Excel and then exported to STATA 14 (StataCorp, 2015). Anthropometric measurements were taken for all children by the researcher and trained assistants, who physically weighed the children. Immunization cards were used to determine each child's age. Bivariate logistic regression analysis was used to assess the relationship between socio-demographic factors associated with breastfeeding practices and child malnutrition.
Multivariable regression analysis was used to draw a conclusion on whether there are any true relationships between the socio-demographic factors associated with breastfeeding practices, as independent variables, and child stunting and underweight, as dependent variables. Descriptive statistics on the background characteristics of the mothers were generated and presented in frequency distribution tables. Frequencies and means were computed and presented using tables; the distribution of stunting and underweight among infants was then determined by socio-economic and demographic factors. Findings reveal that children of mothers who used milk substitutes besides breastfeeding are over two times more likely to be stunted compared to those whose mothers exclusively breastfed them. Feeding children with milk substitutes instead of breastmilk predisposes them to both stunting and underweight. Children of mothers between 18 and 34 years of age are less likely to be underweight, as were those who were breastfed over ten times a day. The study further reveals that 55% of the children were underweight, and 49% were stunted. Of the underweight children, an equal number (58/151, or 38%) were either mildly or moderately underweight, and 23% (35/151) were severely underweight. Empowering community outreach programs through increased knowledge and access to services on the integrated management of child malnutrition is crucial to curbing child malnutrition in rural areas.
Keywords: infant and young child feeding, breastfeeding, child malnutrition, maternal health
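The Kish Leslie sample-size calculation quoted in the abstract is easy to verify numerically; the sketch below simply evaluates N = Zα² × P(1 − P) / δ² with the study's inputs (P = 29.3%, Zα = 1.96, δ = 5%).

```python
def kish_leslie_sample_size(p, z_alpha=1.96, delta=0.05):
    """Sample size for estimating a prevalence p within absolute error delta,
    at the confidence level implied by z_alpha (1.96 for 95%)."""
    return (z_alpha ** 2) * p * (1.0 - p) / (delta ** 2)

n = kish_leslie_sample_size(0.293)  # prevalence of malnutrition, P = 29.3%
print(round(n))  # 318 mother-infant pairs, matching the study
```

Note that the variance term p(1 − p) is largest at p = 0.5, so a less certain prevalence assumption would have required a larger sample.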
Procedia PDF Downloads 203
347 Exploring the Impact of Eye Movement Desensitization and Reprocessing (EMDR) and Mindfulness for Processing Trauma and Facilitating Healing During Ayahuasca Ceremonies
Authors: J. Hash, J. Converse, L. Gibson
Abstract:
Plant medicines are of growing interest for addressing mental health concerns. Ayahuasca, a traditional plant-based medicine, has established itself as a powerful way of processing trauma and precipitating healing and mood stabilization. Eye Movement Desensitization and Reprocessing (EMDR) is another treatment modality that aids in the rapid processing and resolution of trauma. We investigated group EMDR therapy, G-TEP, as a preparatory practice before Ayahuasca ceremonies to determine if the combination of these modalities supports participants in their journeys of letting go of past experiences negatively impacting mental health, thereby accentuating the healing of the plant medicine. We surveyed 96 participants (51 experimental G-TEP, 45 control grounding prior to their ceremony; age M=38.6, SD=9.1; F=57, M=34; white=39, Hispanic/Latinx=23, multiracial=11, Asian/Pacific Islander=10, other=7) in a pre-post, mixed-methods design. Participants were surveyed for demographic characteristics, symptoms of PTSD and cPTSD (International Trauma Questionnaire, ITQ), depression (Beck Depression Inventory, BDI), and stress (Perceived Stress Scale, PSS) before the ceremony and at the end of the ceremony weekend. Open-ended questions also inquired about their expectations of the ceremony and results at the end. No baseline differences existed between the control and experimental participants. Overall, participants reported a decrease in meeting the threshold for PTSD symptoms (p<0.01); surprisingly, the control group reported significantly fewer thresholds met for symptoms of affective dysregulation, χ²(1)=6.776, p<.01, negative self-concept, χ²(1)=7.122, p<.01, and disturbance in relationships, χ²(1)=9.804, p<.01, on subscales of the ITQ as compared to the experimental group. All participants also experienced a significant decrease in scores on the BDI, t(94)=8.995, p<.001, and PSS, t(91)=6.892, p<.001.
Similar to the patterns of PTSD symptoms, the control group reported significantly lower scores on the BDI, t(65.115)=-2.587, p<.01, and a trend toward lower PSS scores, t(90)=-1.775, p=.079 (significant with a one-sided test at p<.05), compared to the experimental group following the ceremony. Qualitative interviews among participants revealed a potential explanation for these relatively higher levels of depression and stress in the experimental group following the ceremony. Many participants reported needing more time to process their experience to gain an understanding of the effects of the Ayahuasca medicine. Others reported a sense of hopefulness and understanding of the sources of their trauma and the necessary steps to heal moving forward. This suggests increased introspection and openness to processing trauma, making participants more receptive to their emotions. The integration process of an Ayahuasca ceremony is a weeks- to months-long process that was not accessible at this stage of research, yet it is integral to understanding the full effects of the Ayahuasca medicine following the closure of a ceremony. Our future research aims to assess participants weeks into their integration process to determine the effectiveness of EMDR, and whether the higher levels of depression and stress indicate an initial reaction to greater awareness of trauma and receptivity to healing.
Keywords: ayahuasca, EMDR, PTSD, mental health
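The ITQ subscale comparisons above are χ²(1) tests on 2×2 tables (group × threshold met). As a hedged illustration of the arithmetic behind such a statistic (the study's actual cell counts are not given, so the numbers below are hypothetical), here is a Pearson chi-square for a 2×2 table:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the 2x2 table [[a, b], [c, d]],
    e.g. rows = control/experimental, columns = threshold met / not met."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts only; with 1 df, a statistic above 3.84 is significant at p < .05.
stat = chi2_2x2(10, 20, 20, 10)
```

The statistic compares each observed cell with the count expected under independence of group and outcome, which is why a perfectly balanced table returns zero.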
Procedia PDF Downloads 65
346 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment
Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen
Abstract:
Following the advent of the Industrial Revolution, there has been a significant increase in the extraction and utilization of hydrocarbon and fossil fuel resources. However, a new era has emerged, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells developed within petroleum reservoirs, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there exist scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons through fissures and fractures within cement plugs. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly for well sites that have the potential for future gas storage or CO2 injection. The Danish oil and gas industry has promising potential as a prospective candidate for future carbon dioxide (CO2) injection, hence contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component consists of chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant. The hydrogel is designed to be injected through a low-permeability reservoir and afterward undergo a transformation into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, the synthesis of polyacrylamide was carried out using radical polymerization in a reaction flask.
Subsequently, through the application of the Hofmann rearrangement, the polymer chain undergoes partial amination, facilitating its subsequent reaction with the crosslinker and enabling the formation of a hydrogel in the subsequent stage. The organic crosslinker glutaraldehyde was employed to facilitate the formation of a gel, which occurred when the polymeric solution was subjected to heat within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time is contingent upon the starting concentration and exhibits a range of 4 to 20 hours, allowing for manipulation to accommodate diverse injection strategies. Moreover, the findings indicate that the gel may be generated in environments characterized by acidity and high salinity. This property ensures the suitability of this substance for application in challenging reservoir conditions. The rheological investigation indicates that the polymeric solution exhibits the characteristics of a Herschel-Bulkley fluid with somewhat elevated yield stress prior to solidification.
Keywords: polyacrylamide, Hofmann rearrangement, rheology, gel time
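The Herschel-Bulkley behaviour noted in the rheological survey follows the constitutive law τ = τy + K·γ̇ⁿ (shear stress as yield stress plus a power-law term in shear rate). A minimal sketch, with illustrative parameter values that are assumptions rather than measured values from the study:

```python
def herschel_bulkley_stress(shear_rate, tau_y, k, n):
    """Shear stress (Pa) of a Herschel-Bulkley fluid:
    tau_y = yield stress, k = consistency index, n = flow behaviour index."""
    return tau_y + k * shear_rate ** n

# Illustrative parameters only (not fitted values from the paper):
tau = herschel_bulkley_stress(10.0, tau_y=5.0, k=2.0, n=0.7)
```

Setting n = 1 recovers a Bingham plastic, and τy = 0 with n = 1 recovers a Newtonian fluid, which is why the elevated yield stress before solidification is the diagnostic feature here.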
Procedia PDF Downloads 77
345 Analysis of Tilting Cause of a Residential Building in Durrës by the Use of CPTu Test
Authors: Neritan Shkodrani
Abstract:
On November 26, 2019, an earthquake hit the central-western part of Albania. It was assessed as Mw 6.4, and its epicenter was located offshore, north-west of Durrës, about 7 km north of the city. In this paper, the consequences of settlements of very soft soils are discussed for the case of a residential building, referred to as the “K Building”, which suffered significant tilting after the earthquake. The “K Building” is an RC framed building having 12+1 (basement) stories and a floor area of 21,000 m2. The construction of the building was completed in 2012. The “K Building”, located in the city of Durrës, suffered severe non-structural damage during the November 26, 2019, Durrës earthquake sequences. During the on-site inspections immediately after the earthquake, the general condition of the building, the presence of observable settlements on the ground, and the crack situation in the structure were determined, and damage inspections were performed. It is significant to note that the “K Building” presented tilting that might be attributed, as was believed at the beginning, partially to the failure of the columns of the ground floor and partially to liquefaction phenomena, but it did not collapse. At first it was not clear whether the foundation had a bearing capacity failure or failed because of soil liquefaction. Geotechnical soil investigations using the CPTu test were executed, and their data are used to evaluate the bearing capacity, the consolidation settlement of the mat foundation, and soil liquefaction, since these were believed to be the main reasons for the building's tilting. The geotechnical soil investigation consists of 5 (five) static cone penetration tests with pore pressure measurement (piezocone tests). They reached a penetration depth of 20.0 m to 30.0 m and clearly showed the presence of very soft and organic soils in the soil profile of the site.
Geotechnical CPT-based analyses of bearing capacity, consolidation, and secondary settlement were applied, and results are reported for each test. These results showed very small values of allowable bearing capacity and very high values of consolidation and secondary settlements. Liquefaction analysis based on the data of the CPTu tests and the characteristics of ground shaking of the mentioned earthquake showed the possibility of liquefaction for some layers of the considered soil profile, but the estimated vertical settlements are in a small range and clearly show that the main reason for the building's tilting was not related to the consequences of liquefaction, but was an existing settlement caused by the applied bearing pressure of the building. All the CPTu tests were carried out in August 2021, almost two years after the November 26, 2019, Durrës earthquake, when the building itself had been demolished. After removing the mat foundation in September 2021, it was possible to carry out CPTu tests even on the footprint of the existing building, which made it possible to observe the effects of the long-term applied foundation bearing pressure on the consolidation of the considered soil profile.
Keywords: bearing capacity, cone penetration test, consolidation settlement, secondary settlement, soil liquefaction
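As a rough illustration of a CPTu-based bearing capacity check of the kind described (the paper's actual correlations and numbers are not given), the sketch below derives undrained shear strength from corrected cone resistance, su = (qt − σv0)/Nkt, and an allowable bearing pressure qall = (Nc·su + σv0)/FS. The cone factor Nkt, bearing factor Nc, safety factor FS, and the input stresses are all typical assumed values, not the Durrës test data.

```python
def undrained_shear_strength(q_t, sigma_v0, n_kt=14.0):
    """su (kPa) from corrected cone resistance q_t and total overburden
    stress sigma_v0 (both kPa); N_kt is commonly 10-20 for soft clays."""
    return (q_t - sigma_v0) / n_kt

def allowable_bearing_pressure(s_u, sigma_v0=0.0, n_c=6.14, fs=3.0):
    """Allowable bearing pressure (kPa) for a shallow foundation on clay:
    q_ult = N_c * s_u + sigma_v0, divided by a factor of safety."""
    return (n_c * s_u + sigma_v0) / fs

# Illustrative soft-soil numbers (kPa):
s_u = undrained_shear_strength(q_t=600.0, sigma_v0=40.0)   # 40 kPa
q_all = allowable_bearing_pressure(s_u, sigma_v0=40.0)     # ~95 kPa
```

Even this crude estimate shows how a very soft profile (low qt) yields an allowable pressure easily exceeded by a 13-storey building's mat foundation, consistent with the settlement-driven tilting the study concludes.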
Procedia PDF Downloads 96
344 Linking Enhanced Resting-State Brain Connectivity with the Benefit of Desirable Difficulty to Motor Learning: A Functional Magnetic Resonance Imaging Study
Authors: Chien-Ho Lin, Ho-Ching Yang, Barbara Knowlton, Shin-Leh Huang, Ming-Chang Chiang
Abstract:
Practicing motor tasks arranged in an interleaved order (interleaved practice, or IP) generally leads to better learning than practicing tasks in a repetitive order (repetitive practice, or RP), an example of how desirable difficulty during practice benefits learning. Greater difficulty during practice, e.g., IP, is associated with greater brain activity, measured by a higher blood-oxygen-level-dependent (BOLD) signal in functional magnetic resonance imaging (fMRI), in the sensorimotor areas of the brain. In this study, resting-state fMRI was applied to investigate whether an increase in resting-state brain connectivity immediately after practice predicts the benefit of desirable difficulty to motor learning. 26 healthy adults (11M/15F, age = 23.3±1.3 years) practiced two sets of three sequences arranged in a repetitive or an interleaved order over 2 days, followed by a retention test on Day 5 to evaluate learning. On each practice day, fMRI data were acquired in a resting state after practice. The resting-state fMRI data were decomposed using a group-level spatial independent component analysis (ICA), yielding 9 independent components (ICs) matched to the precuneus network, primary visual networks (two ICs, denoted by I and II), sensorimotor networks (two ICs, denoted by I and II), the right and the left frontoparietal networks, the occipito-temporal network, and the frontal network. A weighted resting-state functional connectivity (wRSFC) was then defined to incorporate information from within- and between-network brain connectivity. The within-network functional connectivity between a voxel and an IC was gauged by a z-score derived from the Fisher transformation of the IC map. The between-network connectivity was derived from the cross-correlation of time courses across all possible pairs of ICs, leading to a symmetric nc × nc matrix of cross-correlation coefficients, denoted by C = (pᵢⱼ).
Here pᵢⱼ is the extremum of the cross-correlation between ICs i and j, and nc = 9 is the number of ICs. This component-wise cross-correlation matrix C was then projected to the voxel space, with the weights for each voxel set to the z-score that represents the above within-network functional connectivity. The wRSFC map thus incorporates the global characteristics of brain networks measured by the between-network connectivity, and the spatial information contained in the IC maps measured by the within-network connectivity. Pearson correlation analysis revealed that a greater IP-minus-RP difference in wRSFC was positively correlated with the RP-minus-IP difference in response time on Day 5, particularly in brain regions crucial for motor learning, such as the right dorsolateral prefrontal cortex (DLPFC) and the right premotor and supplementary motor cortices. This indicates that enhanced resting brain connectivity during the early phase of memory consolidation is associated with enhanced learning following interleaved practice, and as such, wRSFC could be applied as a biomarker that measures the beneficial effects of desirable difficulty on motor sequence learning.
Keywords: desirable difficulty, functional magnetic resonance imaging, independent component analysis, resting-state networks
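One possible reading of the wRSFC construction above, sketched under stated assumptions rather than as the authors' actual code: Fisher-transform the IC spatial maps for the within-network z-scores, take the signed extremum of the normalised cross-correlation of each IC pair's time courses for C = (pᵢⱼ), then project C to voxel space using the z-scores as weights. The final projection rule (z-score-weighted sum of each IC's total coupling) is our assumption, since the abstract does not fully specify it.

```python
import numpy as np

def extremum_xcorr(x, y):
    """Signed extremum (largest-magnitude value) of the normalised
    cross-correlation of two time courses over all lags."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    cc = np.correlate(x, y, mode="full")
    return cc[np.argmax(np.abs(cc))]

def wrsfc(ic_maps, ic_timecourses):
    """Weighted resting-state functional connectivity per voxel.

    ic_maps: (n_ic, n_vox) spatial maps, assumed scaled like correlations;
    ic_timecourses: (n_t, n_ic) component time courses.
    """
    n_ic = ic_maps.shape[0]
    z = np.arctanh(np.clip(ic_maps, -0.999, 0.999))  # within-network z-scores
    c = np.eye(n_ic)                                 # between-network matrix C
    for i in range(n_ic):
        for j in range(i + 1, n_ic):
            c[i, j] = c[j, i] = extremum_xcorr(ic_timecourses[:, i],
                                               ic_timecourses[:, j])
    coupling = np.abs(c).sum(axis=1)   # each IC's overall between-network coupling
    return z.T @ coupling              # (n_vox,) z-score-weighted projection

rng = np.random.default_rng(0)
maps = rng.uniform(-0.5, 0.5, size=(9, 100))   # 9 ICs, 100 voxels (toy data)
tcs = rng.standard_normal((120, 9))            # 120 time points
voxel_map = wrsfc(maps, tcs)
```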
Procedia PDF Downloads 203
343 An Australian Tertiary Centre Experience of Complex Endovascular Aortic Repairs
Authors: Hansraj Bookun, Rachel Xuan, Angela Tan, Kejia Wang, Animesh Singla, David Kim, Christopher Loupos, Jim Iliopoulos
Abstract:
Introduction: Complex endovascular aortic aneurysmal repairs with fenestrated and branched endografts require customised devices to exclude the pathology while reducing the morbidity and mortality historically associated with open repair of complex aneurysms. Such endovascular procedures have predominantly been performed in large-volume dedicated tertiary centres. We present here our nine-year multidisciplinary experience with this technology in an Australian tertiary centre. Method: This was a cross-sectional, single-centre observational study of 670 patients who had undergone complex endovascular aortic aneurysmal repairs with conventional endografts, fenestrated endografts, and iliac-branched devices from January 2010 to July 2019. Descriptive statistics were used to characterise our sample with regard to demographic and perioperative variables. Homogeneity of the sample was tested using multivariate regression, which did not identify any statistically significant confounding variables. Results: 670 patients of mean age 74 were included (592 males), and the comorbid burden was as follows: ischemic heart disease (55%), diabetes (18%), hypertension (90%), stage four or greater kidney impairment (8%), and current or ex-smoking (78%). The main indications for surgery were elective aneurysms (86%), symptomatic aneurysms (5%), and ruptured aneurysms (5%). 106 patients (16%) underwent fenestrated or branched endograft repairs. The mean length of stay was 7.6 days. 2 patients experienced reactionary bleeds; 11 patients had access wound complications (6 lymph fistulae, 5 haematomas); 11 patients had cardiac complications (5 arrhythmias, 3 acute myocardial infarctions, 3 exacerbations of congestive cardiac failure); 10 patients had respiratory complications; 8 patients had renal impairment; 4 patients had gastrointestinal complications; 2 patients suffered from paraplegia; and there were 1 major stroke, 1 minor stroke, and 1 acute brain syndrome.
There were 4 vascular occlusions requiring further arterial surgery, 4 type I endoleaks, 4 type II endoleaks, 3 episodes of thromboembolism, and 2 patients who required further arterial operations in the setting of patent vessels. There were 9 unplanned returns to theatre. Discussion: Our numbers over 10 years suggest that we are not a dedicated high-volume centre focusing on aortic repairs. However, we have achieved significantly low complication rates. This can be attributed to our multidisciplinary approach, with the intraoperative involvement of skilled interventional radiologists and vascular surgeons, as well as postoperative protocols with particular attention to spinal cord protection. Additionally, we have a ratified perioperative pathway that involves multidisciplinary team discussions of patient-related factors and lesion-centered characteristics, which allows for holistic, patient-centered care.
Keywords: aneurysm, aortic, endovascular, fenestrated
Procedia PDF Downloads 122
342 An Early Intervention Framework for Supporting Students’ Mathematical Development in the Transition to University STEM Programmes
Authors: Richard Harrison
Abstract:
Developing competency in mathematics and related critical thinking skills is essential to the education of undergraduate students of Science, Technology, Engineering and Mathematics (STEM). Recently, the HE sector has been impacted by a seemingly widening disconnect between the mathematical competency of incoming first-year STEM students and their entrance qualification tariffs. Despite relatively high grades in A-Level Mathematics, students may initially lack fundamental skills in key areas such as algebraic manipulation and have limited capacity to apply problem-solving strategies. Compounded by the compensatory measures applied to entrance qualifications during the pandemic, this has been accompanied by a decline in student performance on introductory university mathematics modules. In the UK, a number of online resources have been developed to help scaffold the transition to university mathematics. However, in general, these do not offer a structured learning journey focused on individual developmental needs, nor do they offer an experience coherent with the teaching and learning characteristics of the destination institution. In order to address some of these issues, a bespoke framework has been designed and implemented on our VLE in the Faculty of Engineering & Physical Sciences (FEPS) at the University of Surrey. Called the FEPS Maths Support Framework, it was conceived to scaffold the mathematical development of individuals prior to entering the university and during the early stages of their transition to undergraduate studies. More than 90% of our incoming STEM students voluntarily participate in the process. Students complete a set of initial diagnostic questions in the late summer. Based on their performance and feedback on these questions, they are subsequently guided to self-select specific mathematical topic areas for review using our proprietary resources. This further assists students in preparing for discipline-related diagnostic tests.
The framework helps to identify students who are mathematically weak and facilitates early intervention to support students according to their specific developmental needs. This paper presents a summary of results from a rich data set captured from the framework over a 3-year period. Quantitative data provide evidence that students have engaged and developed during the process. This is further supported by process-evaluation feedback from the students. Ranked performance data associated with seven key mathematical topic areas and eight engineering and science discipline areas reveal interesting patterns, which can be used to identify the more generic relative capabilities of the discipline-area cohorts. In turn, this facilitates evidence-based management of the mathematical development of the new cohort, informing any associated adjustments to teaching and learning at a more holistic level. Evidence is presented establishing our framework as an effective early intervention strategy for addressing the sector-wide issue of supporting the mathematical development of STEM students transitioning to HE.
Keywords: competency, development, intervention, scaffolding
Procedia PDF Downloads 65
341 Green Production of Chitosan Nanoparticles and Their Potential as Antimicrobial Agents
Authors: L. P. Gomes, G. F. Araújo, Y. M. L. Cordeiro, C. T. Andrade, E. M. Del Aguila, V. M. F. Paschoalin
Abstract:
The application of nanoscale materials and nanostructures is an emerging area, since these materials may provide solutions to technological and environmental challenges in order to preserve the environment and natural resources. To reach this goal, the increasing demand must be accompanied by 'green' synthesis methods. Chitosan is a natural, nontoxic biopolymer derived by the deacetylation of chitin, and has great potential for a wide range of applications in the biological and biomedical areas due to its biodegradability, biocompatibility, non-toxicity, and versatile chemical and physical properties. Chitosan also presents high antimicrobial activity against a wide variety of pathogenic and spoilage microorganisms. Ultrasonication is a common tool for the preparation and processing of polymer nanoparticles. It is particularly effective in breaking up aggregates and in reducing the size and polydispersity of nanoparticles. High-intensity ultrasonication has the potential to modify chitosan molecular weight and, thus, alter or improve chitosan functional properties. The aim of this study was to evaluate the influence of sonication intensity and time on the changes in the characteristics of commercial chitosan, such as molecular weight and its potential antibacterial activity against Gram-negative bacteria. The nanoparticles (NPs) were produced from two commercial chitosans from Sigma-Aldrich®, of medium molecular weight (CS-MMW) and low molecular weight (CS-LMW). These samples (2%) were solubilized in 100 mM sodium acetate, pH 4.0, placed on ice, and irradiated with a SONIC ultrasonic probe (750 W model) equipped with a 1/2" microtip for 30 min at 4°C. The probe was operated on a constant duty cycle at 40% amplitude with 1/1 s intervals. The ultrasonic degradation of CS-MMW and CS-LMW was followed by means of ζ-potential (Brookhaven Instruments, model 90Plus) and dynamic light scattering (DLS) measurements.
After sonication, the concentrated samples were diluted 100 times and placed in fluorescence quartz cuvettes (Hellma 111-QS, 10 mm light path). The distributions of the colloidal particles were calculated from the DLS and ζ-potential measurements taken for the CS-MMW and CS-LMW solutions before and after sonication for 30 min (CS-MMW30 and CS-LMW30). The major bands, centered at the hydrodynamic radius (Rh), showed different distributions for CS-MMW (Rh=690.0 nm, ζ=26.52±2.4), CS-LMW (Rh=607.4 and 2805.4 nm, ζ=24.51±1.29), CS-MMW30 (Rh=201.5 and 1064.1 nm, ζ=24.78±2.4), and CS-LMW30 (Rh=492.5 nm, ζ=26.12±0.85). The minimal inhibitory concentration (MIC) was determined using different chitosan sample concentrations. MIC values were determined against E. coli (10⁶ cells) harvested from an LB medium (Luria-Bertani, BD™) after 18 h of growth at 37°C. Subsequently, the cell suspension was serially diluted in saline solution (0.8% NaCl) and plated on solid LB at 37°C for 18 h, and colony-forming units were counted. The samples showed different MICs against E. coli: CS-LMW (1.5 mg/mL), CS-MMW30 (1.5 mg/mL), and CS-LMW30 (1.0 mg/mL). The results demonstrate that the production of nanoparticles by modification of their molecular weight by ultrasonication is simple to perform and dispenses with acid solvent addition. The molecular weight modifications are enough to provoke changes in the antimicrobial potential of the nanoparticles produced in this way.
Keywords: antimicrobial agent, chitosan, green production, nanoparticles
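The MIC read-out described, serial dilution, incubation, then taking the lowest concentration with no visible growth, reduces to a simple selection rule. A sketch with hypothetical growth calls (the True/False pattern below is illustrative, chosen to reproduce the reported CS-LMW30 value of 1.0 mg/mL, not observed plate data):

```python
def minimal_inhibitory_concentration(concentrations, growth_observed):
    """Lowest concentration (mg/mL) at which no growth was observed,
    or None if the organism grew at every tested concentration."""
    inhibitory = [c for c, grew in zip(concentrations, growth_observed)
                  if not grew]
    return min(inhibitory) if inhibitory else None

# Hypothetical dilution series for CS-LMW30 against E. coli:
concs = [0.25, 0.5, 1.0, 1.5, 2.0]
growth = [True, True, False, False, False]  # colonies visible at <= 0.5 mg/mL
mic = minimal_inhibitory_concentration(concs, growth)  # 1.0 mg/mL
```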
Procedia PDF Downloads 326
340 Enhancing Archaeological Sites: Interconnecting Physically and Digitally
Authors: Eleni Maistrou, D. Kosmopoulos, Carolina Moretti, Amalia Konidi, Katerina Boulougoura
Abstract:
InterArch is an ongoing research project that has been running since September 2020. It aims to propose the design of a site-based digital application for archaeological sites and outdoor guided tours, supporting virtual and augmented reality technology. The research project is co-financed by the European Union and Greek national funds, through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: Τ2ΕΔΚ-01659). It involves mutual collaboration between academic and cultural institutions and the contribution of an IT applications development company. The research will be completed by July 2023 and will run as a pilot project for the city of Ancient Messene, a place of outstanding natural beauty in the west of Peloponnese, which is considered one of the most important archaeological sites in Greece. The applied research project integrates an interactive approach to the natural environment, aiming at a manifold sensory experience. It combines the physical space of the archaeological site with the digital space of archaeological and cultural data, while at the same time it embraces storytelling processes by engaging an interdisciplinary approach that familiarizes the user with multiple semantic interpretations. The mingling of the real-world environment with its digital and cultural components by using augmented reality techniques could potentially transform the visit on-site into an immersive multimodal sensory experience. To this purpose, an extensive spatial analysis along with a detailed evaluation of the existing digital and non-digital archives is proposed in our project, intending to correlate natural landscape morphology (including archaeological material remains and environmental characteristics) with the extensive historical records and cultural digital data.
On-site research was carried out, during which visitors’ itineraries were monitored and tracked throughout the archaeological visit using GPS locators. The results provide our project with useful insight concerning the way visitors engage and interact with their surroundings, depending on the sequence of their itineraries and the duration of stay at each location.
Keywords: archaeological site, digital space, semantic interpretations, cultural heritage
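The itinerary tracking described above boils down to aggregating, per location, the time between consecutive GPS samples. A minimal sketch (the location labels and timestamps are hypothetical; real GPS logs would first be snapped to site zones):

```python
def dwell_times(track):
    """Total seconds spent at each location, from a time-ordered list of
    (timestamp_seconds, location_id) GPS samples; the interval between two
    samples is credited to the first sample's location."""
    totals = {}
    for (t0, loc), (t1, _next_loc) in zip(track, track[1:]):
        totals[loc] = totals.get(loc, 0) + (t1 - t0)
    return totals

# Hypothetical visitor itinerary at Ancient Messene:
track = [(0, "theatre"), (60, "theatre"), (120, "stadium"), (300, "stadium")]
stays = dwell_times(track)  # {'theatre': 120, 'stadium': 180}
```

Aggregating such dictionaries across visitors gives exactly the sequence-and-duration insight the study reports.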
Procedia PDF Downloads 70
339 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft.
According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the average relative fuel flow prediction error from 12% to 0.3%. Similarly, the maximum error deviation of the FCOM engine fan speed prediction was reduced from 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
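The sample-compare-correct loop the abstract describes can be sketched as a minimal adaptive lookup table in Python. The Mach breakpoints, initial fuel flow values, learning rate, and the simulated 4% degradation below are all illustrative assumptions, not the study's actual data or algorithm.

```python
import numpy as np

# 1-D lookup table: predicted cruise fuel flow (kg/h) vs. Mach number.
mach_bp = np.array([0.60, 0.65, 0.70, 0.75, 0.80])
fuel_flow_table = np.array([540.0, 560.0, 585.0, 615.0, 650.0])  # FCOM-style initial values
base_table = fuel_flow_table.copy()

def predict(mach):
    """Interpolate the lookup table at the given Mach number."""
    return np.interp(mach, mach_bp, fuel_flow_table)

def update_table(mach, measured, lr=0.3):
    """Nudge the two breakpoints bracketing `mach` toward the in-flight
    measurement, weighted by their interpolation influence."""
    error = measured - predict(mach)
    hi = np.clip(np.searchsorted(mach_bp, mach), 1, len(mach_bp) - 1)
    lo = hi - 1
    w = (mach - mach_bp[lo]) / (mach_bp[hi] - mach_bp[lo])
    fuel_flow_table[lo] += lr * (1.0 - w) * error
    fuel_flow_table[hi] += lr * w * error

# Simulated cruise samples: the "real" aircraft burns 4% more fuel than the
# initial table predicts (a degraded-engine scenario).
rng = np.random.default_rng(42)
for _ in range(50):
    mach = rng.uniform(0.62, 0.78)
    update_table(mach, measured=1.04 * np.interp(mach, mach_bp, base_table))

rel_err = abs(predict(0.70) - 1.04 * 585.0) / (1.04 * 585.0)
print(f"relative fuel-flow error at Mach 0.70 after 50 updates: {rel_err:.3%}")
```

Because only the breakpoints bracketing each measurement are corrected, the table adapts locally where the aircraft actually flies, mirroring how the in-flight corrections keep the APM representative of the degraded aircraft.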
Procedia PDF Downloads 264
338 Wildlife Communities in the Service of Extensively Managed Fishpond Systems – Advantages of a Symbiotic Relationship
Authors: Peter Palasti, Eva Kerepeczki
Abstract:
Extensive fish farming is one of the most traditional forms of aquaculture in Europe, usually practiced in large pond systems with earthen beds, where the growth of fish is based on natural feed and supplementary feeding. These farms have semi-natural environmental conditions, sustaining diverse wildlife communities that have complex effects on fish production and also provide a livelihood for many wetland-related taxa. Based on their characteristics, these communities could be sources of various ecosystem services (ESs) that could also enhance the value and enable the multifunctional use of these artificially constructed and maintained production zones. To identify and estimate the whole range of the wildlife community's contribution, we conducted an integrated assessment in an extensively managed pond system in Biharugra, Hungary, where we studied 14 previously identified ESs: fish and reed production, water storage, water and air quality regulation, CO2 absorption, groundwater recharge, aesthetics, recreational activities, inspiration, education, scientific research, and the presence of semi-natural habitats and useful/protected species. ESs were collected through structured interviews with local experts from all major stakeholder groups, during which we also gathered information about the known forms, levels (none, low, high), and orientations (positive, negative) of the wildlife community's contributions. After that, a quantitative analysis was carried out: we calculated the total mean value of the services used between 2014 and 2016, then estimated the value and percentage of the contributions. For the quantification, we mainly used biophysical indicators based on the available data and the empirical knowledge of the local experts.
During the interviews, 12 of the previously listed services (85%) were mentioned as being related to the wildlife community, comprising five fully dependent (e.g., recreation, reed production) and seven partially dependent ESs (e.g., inspiration, CO2 absorption) from our list. The orientation of the contributions was positive in almost every case; however, in the case of fish production, the feeding habits of some wild species (Phalacrocorax carbo, Lutra lutra) caused significant losses in fish stocks during the study period. During the biophysical assessment, we calculated the total mean value of the services and quantified the wildlife community's contribution to the following services: fish and reed production, recreation, CO2 absorption, and the presence of semi-natural habitats and wild species. The combined results of our interviews and biophysical evaluations showed that the presence of the wildlife community not only greatly increased the productivity of the fish farms in Biharugra (with ~53% of the natural yield generated by planktonic and benthic communities) but also enhanced the multifunctionality of the system by expanding the quality and number of its services. With these abilities, extensively managed fishponds could play an important role in the future as refugia for wetland-related services and species threatened by the effects of global warming.
Keywords: ecosystem services, fishpond systems, integrated assessment, wildlife community
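The value-and-percentage-of-contribution arithmetic used in the quantitative step can be illustrated with a minimal sketch. Every figure below is an invented placeholder, not the Biharugra study's data; only the structure (total service value times wildlife-attributed fraction, summed across services) reflects the method described.

```python
# Toy contribution estimate: each service has a total mean value (arbitrary
# units) and the fraction of that value attributed to the wildlife community.
services = {
    "fish production": (100.0, 0.53),  # e.g., a ~53% naturally generated yield
    "reed production": (20.0, 1.00),   # fully wildlife-dependent
    "recreation":      (15.0, 1.00),   # fully wildlife-dependent
    "CO2 absorption":  (10.0, 0.60),   # partially wildlife-dependent
}

wildlife_value = sum(total * frac for total, frac in services.values())
total_value = sum(total for total, _ in services.values())
share = wildlife_value / total_value
print(f"wildlife contribution: {share:.0%} of total service value")
```

Summing attributed values across services and dividing by the total gives a single percentage expressing how much of the pond system's service value depends on its wildlife community.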
Procedia PDF Downloads 115