Search results for: dynamic simulations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5590

610 Targeting Apoptosis by Novel Adamantane Analogs as an Emerging Therapy for the Treatment of Hepatocellular Carcinoma Through EGFR, Bcl-2/BAX Cascade

Authors: Hanan M. Hassan, Laila Abouzeid, Lamya H. Al-Wahaibi, George S. G. Shehatou, Ali A. El-Emam

Abstract:

Cancer is a major public health problem and the second leading cause of death worldwide. In 2020, cancer diagnosis and treatment were negatively affected by the coronavirus disease 2019 (COVID-19) pandemic. During quarantine, because of limited access to healthcare and the need to avoid exposure to COVID-19 as a contagious disease, cancer patients suffered deferments in follow-up and treatment regimens, leading to substantial worsening of disease, death, and increased healthcare costs. Thus, this study was designed to investigate the molecular mechanisms by which adamantane derivatives attenuate hepatocellular carcinoma, both experimentally and theoretically. There is a close association between increased resistance to anticancer drugs and defective apoptosis, which is considered a causative factor for oncogenesis. Cancer cells use different molecular pathways to inhibit apoptosis; BAX and Bcl-2 proteins have essential roles in the progression or inhibition of intrinsic apoptotic pathways triggered by mitochondrial dysfunction. Therefore, their balance ratio can determine the cellular apoptotic fate. In this study, the in vitro cytotoxic effects of seven synthetic adamantyl isothiourea derivatives were evaluated against five human tumor cell lines by MTT assay. Compounds 5 and 6 showed the best results, mostly against hepatocellular carcinoma (HCC). Hence, in vivo studies were performed in male Sprague-Dawley (SD) rats in which experimental hepatocellular carcinoma was induced with thioacetamide (TAA) (200 mg/kg, i.p., twice weekly) for 16 weeks. The most promising compounds, 5 and 6, were administered to the liver cancer rats at a dose of 10 mg/kg/day for an additional two weeks, and the effects were compared with those of the anticancer drug doxorubicin (DR). 
Hepatocellular carcinoma was evidenced by a dramatic increase in liver indices and oxidative stress markers, and by immunohistochemical findings, accompanied by a plethora of inflammatory mediators and alterations in the apoptotic cascade. Our results showed that treatment with adamantane derivatives 5 and 6 significantly suppressed fibrosis, inflammation, and other histopathological insults, resulting in diminished hepatocyte tumorigenesis. Moreover, administration of the tested compounds resulted in amelioration of EGFR protein expression, upregulation of BAX, and reduction of Bcl-2 levels, confirming their role as apoptosis inducers. Also, the docking simulations performed for adamantane showed good fit and binding to the EGFR protein through hydrogen bond formation with conserved amino acids, which provides strong evidence for its hepatoprotective effect. In most analyses, the effects of compound 6 were more comparable to DR than those of compound 5. Our findings suggest that adamantane derivatives 5 and 6 exhibit cytotoxic activity against HCC in vitro and in vivo by more than one mechanism, possibly by inhibiting the TLR4-MyD88-NF-κB pathway and targeting EGFR signaling.

Keywords: adamantane, EGFR, HCC, apoptosis

Procedia PDF Downloads 146
609 Development of Immersive Virtual Reality System for Planning of Cargo Loading Operations

Authors: Eugene Y. C. Wong, Daniel Y. W. Mo, Cosmo T. Y. Ng, Jessica K. Y. Chan, Leith K. Y. Chan, Henry Y. K. Lau

Abstract:

Real-time planning visualisation, precise allocation and loading optimisation in air cargo load planning operations are increasingly important as more considerations are needed on dangerous cargo loading, locations of lithium batteries, weight declaration and limited aircraft capacity. The planning of unit load devices (ULD) can often be carried out only in a limited number of hours before flight departure. A dynamic air cargo load planning system is proposed, with optimisation of the cargo load plan and visualisation of the planning results in virtual reality systems. The system aims to optimise cargo load planning and to visualise the simulated load planning decisions for air cargo terminal operations. Adopting simulation tools, Cave Automatic Virtual Environment (CAVE) and virtual reality technologies, the planning results with reference to weight and balance, Unit Load Device (ULD) dimensions, gateway, cargo nature and aircraft capacity are optimised and presented. The virtual reality system facilitates planning, operations, education and training. Staff in terminals are usually trained in a traditional push-approach demonstration with enormous manual paperwork. With the support of the newly customised immersive visualisation environment, users can master the complex air cargo load planning techniques in problem-based training, with the results being instantly and immersively visualised. The virtual reality system is developed with three-dimensional (3D) projectors, screens, workstations, a truss system, 3D glasses, and a demonstration platform and software. The content focuses on cargo planning and loading operations in an air cargo terminal. The system can assist the decision-making process during cargo load planning in the complex operations of an air cargo terminal. The processes of cargo loading, cargo build-up, security screening, and system monitoring can be further visualised. 
Scenarios are designed to support and demonstrate the daily operations of the air cargo terminal, including dangerous goods, pets and animals, and some special cargoes.
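As a hedged illustration of one sub-problem in ULD load planning, a first-fit-decreasing heuristic for assigning cargo pieces to ULDs under a weight limit can be sketched as follows. The function name, weights and capacity are illustrative assumptions, not the authors' actual optimisation model, which also handles balance, dimensions and cargo nature.

```python
# Hypothetical sketch: first-fit-decreasing assignment of cargo piece
# weights to ULDs under a per-ULD weight limit. This ignores weight and
# balance, ULD dimensions and cargo nature, which the full system also
# optimises; it only shows the flavour of the allocation step.

def pack_ulds(weights, uld_capacity):
    """Assign cargo weights to ULDs, heaviest piece first."""
    ulds = []  # each entry is a list of piece weights in one ULD
    for w in sorted(weights, reverse=True):
        for uld in ulds:
            if sum(uld) + w <= uld_capacity:
                uld.append(w)  # fits into an already-open ULD
                break
        else:
            ulds.append([w])   # open a new ULD for this piece
    return ulds

plan = pack_ulds([820, 640, 400, 390, 310, 150], uld_capacity=1500)
print(len(plan), [sum(u) for u in plan])  # number of ULDs and their loads
```

First-fit-decreasing is a classical bin-packing heuristic; a production planner would replace it with an exact or metaheuristic optimiser over the full constraint set.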

Keywords: air cargo load planning, optimisation, virtual reality, weight and balance, unit load device

Procedia PDF Downloads 345
608 Ethical Decision-Making by Healthcare Professionals during Disasters: Izmir Province Case

Authors: Gulhan Sen

Abstract:

Disasters can result in many deaths and injuries. In these difficult times, accessible resources are limited, the demand-supply balance is distorted, and urgent interventions are needed. The disproportion between accessible resources and intervention capacity makes triage a necessity at every stage of disaster response. Healthcare professionals, who are in charge of triage, have to evaluate swiftly and make ethical decisions about which patients need priority and urgent intervention given the limited available resources. For such critical times in disaster triage, 'doing the greatest good for the greatest number of casualties' is adopted as a code of practice. But there is no guide for healthcare professionals on ethical decision-making during disasters, and this study is expected to be used as a source in the preparation of such a guide. This study aimed to examine whether the disaster-triage-related qualifications of healthcare professionals in Izmir were adequate and whether these qualifications influence their capacity to make ethical decisions. The researcher used a survey developed for data collection. The survey included two parts. In part one, 14 questions solicited information about socio-demographic characteristics and the respondents' knowledge levels of the ethical principles of disaster triage and the allocation of scarce resources. Part two included four disaster scenarios adopted from the existing literature, and respondents were asked to make ethical triage decisions based on the provided scenarios. The survey was completed by 215 healthcare professionals working in Emergency-Medical Stations, National Medical Rescue Teams and Search-Rescue-Health Teams in Izmir. The data were analyzed with SPSS software. The Chi-Square Test, Mann-Whitney U Test, Kruskal-Wallis Test and Linear Regression Analysis were utilized. 
According to the results, 51.2% of the participants had an inadequate knowledge level of the ethical principles of disaster triage and the allocation of scarce resources. It was also found that participants did not tend to make ethical decisions in the four disaster scenarios, which included ethical dilemmas. They remained caught in ethical dilemmas over performing cardiopulmonary resuscitation, managing limited resources, and making end-of-life decisions. Results also showed that participants with more experience in disaster triage teams were more likely to make ethical decisions on disaster triage than those with little or no experience in such teams (p < 0.01). Moreover, as their knowledge level of the ethical principles of disaster triage and the allocation of scarce resources increased, their tendency to make ethical decisions also increased (p < 0.001). In conclusion, an inadequate knowledge level of ethical principles and a lack of experience affect ethical decision-making during disasters. The results of this study therefore suggest that more training on disaster triage should be provided in the pre-impact phase of disasters. In addition, the ethical dimension of disaster triage should be included in the syllabi of the ethics classes in vocational training for healthcare professionals. Drills, simulations, and board exercises can be used to improve the ethical decision-making abilities of healthcare professionals. Disaster scenarios in which ethical dilemmas are faced should be prepared for such applied training programs.

Keywords: disaster triage, medical ethics, ethical principles of disaster triage, ethical decision-making

Procedia PDF Downloads 245
607 Analysis of Flow Dynamics of Heated and Cooled Pylon Upstream to the Cavity past Supersonic Flow with Wall Heating and Cooling

Authors: Vishnu Asokan, Zaid M. Paloba

Abstract:

Flow over cavities is an important area of research due to the significant change in flow physics caused by cavity aspect ratio, free stream Mach number and the nature of the upstream boundary layer approaching the cavity leading edge. Cavity flow finds application in aircraft wheel wells, weapons bays, combustion chambers of scramjet engines, etc. These flows are highly unsteady, compressible and turbulent, and they involve mass entrainment coupled with acoustic phenomena. The variation of flow dynamics in an angled cavity with a heated and cooled pylon upstream of the cavity, with spatial combinations of heat flux addition to and removal from the wall, is studied numerically. The goal of the study is to investigate the effect of energy addition to and removal from the cavity walls and pylon on cavity flow dynamics. Preliminary steady-state numerical simulations on inclined cavities with heat addition have shown that wall pressure profiles, as well as the recirculation, are influenced by heat transfer to the compressible fluid medium. Such a hybrid control of cavity flow dynamics, in the form of heat transfer and pylon geometry, can open up greater opportunities for enhancing the mixing and flame-holding requirements of supersonic combustors. The addition of a pylon upstream of the cavity reduces the acoustic oscillations emanating from the geometry. A numerical unsteady analysis of supersonic flow past cavities exposed to cavity wall heating and cooling, with a heated and cooled pylon, helps to form a clear picture of oscillation suppression in the cavity. A cavity of L/D = 4 and aft wall angle of 22 degrees, with an upstream pylon of h/D = 1.5 mm and a wall angle of 29 degrees, exposed to a supersonic flow of Mach number 2 and heat fluxes of 40 W/cm² and -40 W/cm², is modeled for the above study. In the preliminary study, the domain is modeled and validated numerically with the SST k-ω turbulence model using an HLLC implicit scheme. Both qualitative and quantitative flow data are extracted and analyzed using advanced CFD tools. 
Flow visualization is done using the numerical Schlieren method, as it renders the density variation of the fluid medium. Heat flux addition to the wall increases the secondary vortex size of the cavity, and removal of energy leads to a reduction in vortex size. The flow field turbulence appears to increase at higher heat flux. The shear layer thickness increases as the heat flux increases. The steady-state analysis of wall pressure shows variation in wall pressure as the heat flux increases. A shift in frequency in the unsteady wall pressure analysis is an interesting observation of this study. The time-averaged skin friction appears to reduce at higher heat flux due to the variation in viscosity of the fluid inside the cavity.

Keywords: energy addition, frequency shift, Numerical Schlieren, shear layer, vortex evolution

Procedia PDF Downloads 143
606 Sustainable Management of Gastronomy Experiences as a Mechanism to Promote the Local Economy

Authors: Marianys Fernandez

Abstract:

Gastronomic experiences generate a positive impact on the dynamization of the economy when they are managed in a sustainable manner, given that they enhance the identity of the destination, strengthen cooperation between stakeholders in the sector, contribute to the preservation of gastronomic heritage, and encourage the implementation of competitive and sustainable public policies. With the analysis of the sustainable management of gastronomic experiences as its main aim, this study examines different elements associated with the promotion of the local economy. For this purpose, a systematic literature review was carried out to identify, select, synthesise, and evaluate the studies that respond to the research objectives, in order to select the most reliable articles and reduce the potential for bias within the review. To obtain reliable, up-to-date and relevant sources, the Web of Science and Scopus databases were used, taking into account the following keywords: (1) experiential tourism, (2) gastronomy experience, (3) sustainable destination management, (4) sustainable gastronomy, (5) sustainable economy; this yielded a final list of 76 articles. The analysis of the literature allowed us to identify the elements most pertinent to the objective of the study: (a) the need for competitive policies in the gastronomic sector to promote sustainable local economic development, (b) incentives for cooperation between stakeholders in the gastronomic sector to guarantee the competitiveness of the destination, and (c) the proposal of sustainable standards in the gastronomic tourism sector that link to the local economy. Gastronomic experiences constitute a dynamic element of the local economy and promote sustainable tourism. 
We can highlight that sustainability is a mechanism for the preservation of regional identity in the gastronomic sector through the valuation of the attributes of gastronomy, the promotion of the local economy, the strengthening of strategic alliances between stakeholders in the gastronomic sector, and its relevant contribution to the competitiveness of the destination. The theoretical implications of the study focus on suggesting planning, management, and policy criteria to promote the sustainable management of gastronomic experiences in order to boost the local economy. In the practical context, the research integrates different approaches, tools, and methods to encourage the active participation of local actors in the promotion of the local economy through the sustainable management of gastronomic tourism.

Keywords: experiential tourism, gastronomy experience, sustainable destination management, sustainable economy, sustainable gastronomy

Procedia PDF Downloads 74
605 Cultural Heritage in Rural Areas: Added Value for Agro-Tourism Development

Authors: Djurdjica Perovic, Sanja Pekovic, Tatjana Stanovcic, Jovana Vukcevic

Abstract:

Tourism development in rural areas calls for a discussion of strategies that would attract more tourists. Several scholars argue that rural areas may become more attractive to tourists by leveraging their cultural heritage. The present paper explores the development of sustainable heritage tourism practices in the transitional societies of the Western Balkans, specifically targeting Montenegrin rural areas. It addresses sustainable tourism as a shift in the business paradigm, enhancing the centrality of the host community, fostering encounters with local culture, customs and heritage, and minimizing the environmental and social impact. Disseminating part of the results of the interdisciplinary KATUN project, the paper explores the diversification of economic activities related to the cultural heritage of katuns (temporary settlements in Montenegrin mountainous regions where agricultural households stay with livestock during the summer season) through sustainable agro-tourism. It addresses the role of heritage tourism in creating a more dynamic economy in under-developed mountain areas, new employment opportunities, sources of income for the local community and more balanced regional development, all based on the principle of sustainability. Based on substantial field research (including interviews with over 50 households and tourists, as well as a number of stakeholders such as relevant Ministries, business communities and media representatives), the paper analyses the strategies employed in raising the awareness and katun-sensitivity of both national and international tourists and in stimulating their interest in sustainable agriculture, rural tourism and the cultural heritage of Montenegrin mountain regions. 
Studying the phenomena of responsible tourism and tourists’ consumerist consciousness in Montenegro through the development of katuns should allow an evaluation of the stages of sustainability and cultural heritage awareness, closely intertwined with the EU integration processes in the country. Offering deeper insight into the relationship between rural tourism, sustainable agriculture and cultural heritage, the paper aims to understand whether the cultural heritage of the area is valuable for agro-tourism development and in which context.

Keywords: heritage tourism, sustainable tourism, added value, Montenegro

Procedia PDF Downloads 329
604 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit

Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek

Abstract:

In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and is considered a key challenge in the development of latent heat storage. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach to calculating the heat transfer rate in latent thermal energy storage with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of costs and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers. However, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment. 
The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort.
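To make the quasi-stationary assumption mentioned above concrete, a minimal sketch for a planar solidification front (the classical Stefan estimate) is shown below. This is a toy stand-in, not the authors' segmented finned-tube model, and the material values are illustrative placeholders, not measured PCM data.

```python
# Minimal sketch of the quasi-stationary assumption: neglecting sensible
# heat, the energy balance at a planar solidification front gives
#   rho * L * ds/dt = k * dT / s   =>   s(t) = sqrt(2 k dT t / (rho L)).
# All property values below are placeholders, not the study's PCM data.
import math

def front_position(t, k, dT, rho, L):
    """Solid layer thickness s(t) in metres at time t (seconds)."""
    return math.sqrt(2.0 * k * dT * t / (rho * L))

# Paraffin-like placeholder properties
k, dT, rho, L = 0.2, 10.0, 800.0, 200e3   # W/(m K), K, kg/m^3, J/kg
for hours in (1, 4, 16):
    s = front_position(hours * 3600, k, dT, rho, L)
    print(f"{hours:2d} h -> front at {s * 1000:.1f} mm")
```

The square-root growth of the front is why the heat transfer rate drops as solidification proceeds, which is precisely the limitation that finned geometries, and hence the segmented finite-volume model, are meant to address.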

Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage

Procedia PDF Downloads 268
603 Rheometer Enabled Study of Tissue/biomaterial Frequency-Dependent Properties

Authors: Polina Prokopovich

Abstract:

Despite the well-established dependence of cartilage mechanical properties on the frequency of the applied load, most research in the field is carried out in either load-free or constant-load conditions because of the complexity of the equipment required for the determination of time-dependent properties. These simpler analyses provide a limited representation of cartilage properties, greatly reducing the impact of the information gathered and hindering the understanding of the mechanisms involved in this tissue's replacement, development and pathology. More complex techniques could represent better investigative methods, but their uptake in cartilage research is limited by the highly specialised training required and the cost of the equipment. There is, therefore, a clear need for alternative experimental approaches to cartilage testing that can be deployed in research and clinical settings using more user-friendly and financially accessible devices. Frequency-dependent material properties can be determined through rheometry, which is easy to use and requires a relatively inexpensive device; we present how a commercial rheometer can be adapted to determine the viscoelastic properties of articular cartilage. Frequency-sweep tests were run at various applied normal loads on immature, mature and trypsinised (as a model of osteoarthritis) cartilage samples to determine the dynamic shear moduli (G*, G′, G″) of the tissues. Moduli increased with increasing frequency and applied load; mature cartilage generally had the highest moduli and GAG-depleted samples the lowest. Hydraulic permeability (KH) was estimated from the rheological data and decreased with applied load; GAG-depleted cartilage exhibited higher hydraulic permeability than either immature or mature tissues. 
The rheometer-based methodology developed was validated by the close agreement of the rheometer-obtained cartilage characteristics (G*, G′, G″, KH) with results obtained with the more complex testing techniques available in the literature. Rheometry is comparatively simple, does not require highly capital-intensive machinery, and staff training is more accessible; thus, the use of a rheometer would represent a cost-effective approach for the determination of the frequency-dependent properties of cartilage, providing more comprehensive and impactful results for both healthcare professionals and R&D.
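For readers unfamiliar with the dynamic shear moduli named above, the sketch below shows how G′ and G″ follow from one oscillatory cycle via the stress amplitude and phase lag. The input values are synthetic stand-ins, not the cartilage data of the study.

```python
# Hedged sketch: storage (G') and loss (G'') moduli from an oscillatory
# rheometer test, given strain amplitude gamma0, stress amplitude tau0
# and phase lag delta. The numbers below are synthetic placeholders.
import math

def shear_moduli(gamma0, tau0, delta):
    """G' = (tau0/gamma0) cos(delta), G'' = (tau0/gamma0) sin(delta)."""
    g_star = tau0 / gamma0            # magnitude of the complex modulus G*
    return g_star * math.cos(delta), g_star * math.sin(delta)

# Synthetic example: 1 % strain, 10 kPa stress amplitude, 20 degree lag
Gp, Gpp = shear_moduli(gamma0=0.01, tau0=1.0e4, delta=math.radians(20))
print(f"G' = {Gp / 1e6:.2f} MPa, G'' = {Gpp / 1e6:.2f} MPa")
```

A frequency sweep simply repeats this measurement over a range of oscillation frequencies, which is how the frequency dependence of G*, G′ and G″ reported in the abstract is built up.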

Keywords: tissue, rheometer, biomaterial, cartilage

Procedia PDF Downloads 81
602 Numerical Study of Leisure Home Chassis under Various Loads by Using Finite Element Analysis

Authors: Asem Alhnity, Nicholas Pickett

Abstract:

The leisure home industry is experiencing an increase in sales due to the rise in popularity of staycations. However, there is also customer demand for improvements in thermal and structural behaviour. Existing standards and codes of practice outline the requirements for leisure home design. However, there is a lack of expertise in applying Finite Element Analysis (FEA) to complex structures in this industry. As a result, manufacturers rely on standardized design approaches, which often lead to excessively engineered or inadequately designed products. This research aims to address this issue by comprehensively analysing the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, including both the habitation structure and the chassis, the study seeks to develop a novel framework for designing and analysing leisure homes. The objectives include material reduction, enhancing structural stability, resolving existing design issues, and developing innovative modular and wooden chassis designs. The methodology used in this research is quantitative in nature. The study utilizes FEA to analyse the performance of leisure home chassis under various loads. The analysis procedures involve running FEA simulations on a numerical model of the leisure home chassis. Different load scenarios are applied to assess the stress and deflection performance of the chassis under various conditions. FEA is a numerical method that allows for accurate analysis of complex systems. The research utilizes flexible mesh sizing, with fine meshes to calculate small deflections around doors and windows and large meshes for macro deflections. This approach aims to minimize run-time while providing meaningful stresses and deflections. 
Moreover, the study investigates the limitations and drawbacks of the popular approach of applying FEA only to the chassis and replacing the habitation structure with a distributed load. The findings indicate that this approach overlooks the strengthening provided by the habitation structure. By employing FEA on the entire unit, it is possible to optimize stress and deflection performance while achieving material reduction and enhanced structural stability. The study also introduces innovative modular and wooden chassis designs, which show promising weight reduction compared with the existing heavily fabricated lattice chassis. In conclusion, this research provides valuable insights into the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, the study demonstrates the importance of considering the strengthening provided by the habitation structure in chassis design. The research findings contribute to advancements in material reduction, structural stability, and overall performance optimization. The novel framework developed in this study promotes sustainability, cost-efficiency, and innovation in leisure home design.
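To illustrate the finite element method the study relies on, here is a deliberately tiny sketch: a 1D axial bar with two elements, fixed at one end and loaded at the tip. It is a toy stand-in, not the leisure-home chassis model; the modulus, area and load are placeholder values.

```python
# Illustrative FEA sketch: assemble and solve a 1D two-element axial bar.
# Fixed at node 0, 10 kN point load at the tip. Placeholder properties;
# not the leisure-home chassis model described in the abstract.
import numpy as np

E, A = 210e9, 1e-4          # steel-like modulus (Pa) and section area (m^2)
lengths = [1.0, 1.0]        # two elements of 1 m each
n_nodes = len(lengths) + 1

K = np.zeros((n_nodes, n_nodes))
for e, L in enumerate(lengths):
    k = E * A / L * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += k          # assemble element stiffness globally

F = np.zeros(n_nodes)
F[-1] = 10e3                          # 10 kN tip load

# Apply the fixed support at node 0 by eliminating its row and column
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])
print(u)  # nodal displacements in metres
```

The same assemble-then-solve pattern scales from this three-node bar to the full shell/beam model of a chassis plus habitation structure; the mesh-sizing choices described above only change how many entries the global stiffness matrix has.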

Keywords: static homes, caravans, motor homes, holiday homes, finite element analysis (FEA)

Procedia PDF Downloads 100
601 Drug Design Modelling and Molecular Virtual Simulation of an Optimized BSA-Based Nanoparticle Formulation Loaded with Di-Berberine Sulfate Acid Salt

Authors: Eman M. Sarhan, Doaa A. Ghareeb, Gabriella Ortore, Amr A. Amara, Mohamed M. El-Sayed

Abstract:

Drug salting and nanoparticle-based drug delivery formulations are considered effective means of achieving nano-scale dispersion of hydrophobic drugs in aqueous media, thus circumventing the pitfalls of their poor solubility and enhancing their membrane permeability. The current study aims to increase the bioavailability of quaternary ammonium berberine through acid salting and a biodegradable bovine serum albumin (BSA)-based nanoparticulate drug formulation. Berberine hydroxide (BBR-OH), chemically synthesized by alkalization of the commercially available berberine hydrochloride (BBR-HCl), was then acidified to obtain di-berberine sulfate, (BBR)₂SO₄. The purified crystals were spectrally characterized. The desolvation technique was optimized for the preparation of size-controlled BSA-BBR-HCl, BSA-BBR-OH, and BSA-(BBR)₂SO₄ nanoparticles. Particle size, zeta potential, drug release, encapsulation efficiency, Fourier transform infrared spectroscopy (FTIR), tandem MS-MS spectroscopy, energy-dispersive X-ray spectroscopy (EDX), scanning and transmission electron microscopic examination (SEM, TEM), in vitro bioactivity, and in silico drug-polymer interaction were determined. The BSA (PDB ID: 4OR0) protonation state at different pH values was predicted using Amber12 molecular dynamics simulation. Then blind docking was performed using the Lamarckian genetic algorithm (LGA) through AutoDock4.2 software. The results proved the purity and the size-controlled synthesis of the berberine-BSA nanoparticles. The possible binding poses and the hydrophobic and hydrophilic interactions of berberine on BSA at different pH values were predicted. The antioxidant, anti-hemolytic, and cell differentiation abilities of the tested drugs and their nano-formulations were evaluated. Thus, drug salting and potentially effective albumin-berberine nanoparticle formulations can be successfully developed using a well-optimized desolvation technique, exhibiting better in vitro cellular bioavailability.

Keywords: berberine, BSA, BBR-OH, BBR-HCl, BSA-BBR-HCl, BSA-BBR-OH, (BBR)₂SO₄, BSA-(BBR)₂SO₄, FTIR, AutoDock4.2 software, Lamarckian genetic algorithm, SEM, TEM, EDX

Procedia PDF Downloads 174
600 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. Accordingly, to face this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using Singular Value Decomposition since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, thus reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics, F1 and F2, based on Singular Value Decomposition are applied in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance of the contribution to the formation of a target signal; by contrast, F1 simply shows the evolution of the unweighted angle. 
In order to have a wide database of radar signatures and to evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Taking into account the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former implies an ideal identification scenario: measured profiles suffer from noise, clutter and other unwanted information, while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise was considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments on profiles coming from electromagnetic simulations were conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance improved when weighting was applied. Future experiments with larger sets are expected to be conducted, with the aim of finally using actual profiles as test sets in a real hostile situation.
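The subspace-angle idea described above can be sketched as follows: each class's training range profiles are stacked into a matrix, the SVD supplies an orthonormal signal-subspace basis, and a test profile is assigned to the class whose subspace it makes the smallest angle with. The data here are random stand-ins, not real HRRPs, and the unweighted angle corresponds to the F1-style metric only in spirit.

```python
# Hedged sketch of SVD subspace classification for range profiles.
# Synthetic data stand in for HRRPs; this is not the authors' F1/F2 code.
import numpy as np

rng = np.random.default_rng(0)

def signal_basis(profiles, rank):
    """Left singular vectors spanning the signal subspace (columns = profiles)."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :rank]          # keep the top-energy directions, drop 'noise'

def angle_to_subspace(x, U):
    """Angle between vector x and span(U), U having orthonormal columns."""
    proj = U @ (U.T @ x)
    cosang = np.linalg.norm(proj) / np.linalg.norm(x)
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Two synthetic 'aircraft': each subspace built from noisy copies of a template
templates = [rng.normal(size=64), rng.normal(size=64)]
bases = [signal_basis(np.column_stack([t + 0.1 * rng.normal(size=64)
                                       for _ in range(10)]), rank=3)
         for t in templates]

test = templates[1] + 0.2 * rng.normal(size=64)   # noisy profile of class 1
pred = int(np.argmin([angle_to_subspace(test, U) for U in bases]))
print(pred)
```

A weighted variant in the F2 spirit would scale each basis vector's contribution by its singular value before measuring the angle, emphasising the dominant scattering directions.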

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 354
599 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative feature extraction method in text categorization. The dataset used for this purpose contains 2,610 documents annotated with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram-based feature extraction.
These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second-best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
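As a concrete illustration of the alignment scoring step, here is a minimal Smith-Waterman sketch with a linear gap penalty. The match/mismatch/gap weights are illustrative defaults, not the parameters used in the study.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between strings a and b
    (dynamic programming with a linear gap penalty)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP score matrix
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are floored at 0 so a new
            # alignment can start anywhere.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

score = smith_waterman("obesity status", "status: obese")
```

In a feature extraction setting, such scores between document token sequences could serve as similarity features in place of N-gram counts.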

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 297
598 Investigation of Turbulent Flow in a Bubble Column Photobioreactor and Consequent Effects on Microalgae Cultivation Using Computational Fluid Dynamic Simulation

Authors: Geetanjali Yadav, Arpit Mishra, Parthsarathi Ghosh, Ramkrishna Sen

Abstract:

The world is facing the problems of increasing global CO2 emissions, climate change and a fuel crisis. Therefore, several renewable and sustainable energy alternatives should be investigated to replace non-renewable fuels in the future. Algae present a versatile feedstock for the production of a variety of fuels (biodiesel, bioethanol, bio-hydrogen, etc.) and high-value compounds for food, fodder, cosmetics and pharmaceuticals. Microalgae are simple microorganisms that require water, light, CO2 and nutrients to grow by photosynthesis; they can grow in extreme environments and utilize waste gas (flue gas) and waste waters. Mixing, however, is a crucial parameter within the culture system for the uniform distribution of light, nutrients and gaseous exchange, in addition to preventing settling/sedimentation, the creation of dark zones, etc. The overarching goal of the present study is to improve photobioreactor (PBR) design for enhancing the dissolution of CO2 from ambient air (0.039%, v/v), pure CO2 and coal-fired flue gas (10 ± 2%) into microalgal PBRs. Computational fluid dynamics (CFD), a state-of-the-art technique, has been used to solve the partial differential equations with turbulence closure that represent the dynamics of the fluid in a photobioreactor. In this paper, the hydrodynamic performance of the PBR has been characterized and compared with that of a conventional bubble column PBR using CFD. Parameters such as flow rate (Q), mean velocity (u) and mean turbulent kinetic energy (TKE) were characterized for each experiment across different aeration schemes. The results showed that the modified PBR design had superior liquid circulation properties and gas-liquid transfer, which created a more uniform environment inside the PBR compared to the conventional bubble column PBR.
The CFD technique has proven promising for PBR design and paves the way for future research towards developing PBRs that are commercially viable for scaled-up microalgal production.
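For reference, the mean turbulent kinetic energy characterized above is computed from velocity fluctuations about the mean. A minimal sketch, assuming sampled velocity components at a point (not the CFD solver's internal implementation):

```python
import numpy as np

def turbulent_kinetic_energy(u, v, w):
    """Mean turbulent kinetic energy per unit mass from sampled velocity
    components: k = 0.5 * sum over components of mean(fluctuation^2)."""
    k = 0.0
    for comp in (np.asarray(u, float), np.asarray(v, float), np.asarray(w, float)):
        fluct = comp - comp.mean()  # fluctuation about the time-mean
        k += np.mean(fluct ** 2)
    return 0.5 * k
```

In a RANS-type CFD result, k is instead a solved field; the formula above is the underlying definition.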

Keywords: computational fluid dynamics, microalgae, bubble column photobioreactor, flue gas, simulation

Procedia PDF Downloads 231
597 Urban Meetings: Graphic Analysis of the Public Space in a Cultural Building from São Paulo

Authors: Thalita Carvalho Martins de Castro, Núbia Bernardi

Abstract:

Currently, studies show that our cities are portraits of social relations. Amid so many segregations, cultural buildings emerge as places that assemble collective activities and expressions. Through theaters, exhibitions, educational workshops and libraries, architecture approaches human relations and seeks to propose meeting places. The purpose of this research is to deepen the discussion of the contributions of cultural buildings to the use of spaces in the contemporary city, based on the data and measurements collected in a master's research in progress. The graphic analysis of the insertion of contemporary cultural buildings seeks to highlight the social use of space. The urban insertions of contemporary cultural buildings in the city of São Paulo (Brazil) are analyzed to understand the relations between architectural form and its audience. The collected data describe a dynamic of flows and permanence in the use of these spaces, indicating the contribution of cultural buildings, associated with artistic production, to the dynamics of urban spaces and the social modification of their milieu. Among possible case studies, the research in development is based on the registration and graphic analysis of the Praça das Artes building (2012), located in the historical central region of the city, which, after a long period of great degradation, is undergoing redevelopment. The choice of this building was based on four parameters, on both the architectural and the urban scale: urban insertion, local impact, cultural production and mix of uses. Two graphic analysis methodologies are applied: one using diagrams accompanied by texts, and another using active analysis for open space projects with complementary graphic methods, including maps, plans, infographics, perspectives, time-lapse videos and analytical tables.
This research aims to reinforce the debate between the methodologies of form-use analysis and visual synthesis applied to cultural buildings, so that new projects can structure public spaces as catalysts for social use, generating improvements in the daily life of their users and in the cities where they are inserted.

Keywords: cultural buildings, design methodologies, graphic analysis, public spaces

Procedia PDF Downloads 306
596 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model

Authors: Tanu Khanuja, Harikrishnan N. Unni

Abstract:

The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful for understanding the injury mechanism. The majority of physical damage to living tissues is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with the major segments, namely the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid) and skull, from patient-specific MRI (magnetic resonance imaging). Semi-automatic segmentation of the head is performed using AMIRA software to extract the finer grooves of the brain. Maintaining accuracy requires a high number of mesh elements and, consequently, long computational times; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, the model is validated against the experimental literature. Hard tissues such as the skull are modeled as elastic, whereas soft tissues such as the brain are modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and intracranial pressure distributions due to impacts on the different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. The analysis indicates that a back impact is more injurious to the head overall than impacts at the other sites. The present work should help in understanding the injury mechanism of traumatic brain injury more effectively.
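The viscoelastic behaviour mentioned above is commonly expressed as a Prony series for the relaxation modulus, G(t) = G_inf + sum_i g_i * exp(-t / tau_i). The sketch below illustrates this form; the numerical parameters are hypothetical, not taken from the paper.

```python
import math

def relaxation_modulus(t, g_inf, prony_terms):
    """Prony-series relaxation shear modulus of a viscoelastic solid:
    G(t) = G_inf + sum_i g_i * exp(-t / tau_i).
    prony_terms is a list of (g_i, tau_i) pairs."""
    return g_inf + sum(g_i * math.exp(-t / tau_i) for g_i, tau_i in prony_terms)

# Illustrative (hypothetical) soft-tissue-like parameters, in kPa and seconds
terms = [(1.0, 0.008), (0.5, 0.15)]
g_instant = relaxation_modulus(0.0, 0.2, terms)   # short-time (glassy) response
g_longterm = relaxation_modulus(10.0, 0.2, terms)  # approaches G_inf
```

At t = 0 the modulus is the instantaneous sum of all terms; as t grows it relaxes toward the long-term modulus G_inf, which is the behaviour the finite element material model reproduces under transient impact loads.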

Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress

Procedia PDF Downloads 160
595 Mitigation of Risk Management Activities towards Accountability in the Microfinance Environment: Malaysian Case Study

Authors: Nor Azlina A. Rahman, Jamaliah Said, Salwana Hassan

Abstract:

Prompt changes in the global business environment, such as intense competition, managerial/operational change, changing governmental regulation and innovation in technology, have significant impacts on organizations. At present, the global business environment demands more proactive microfinance institutions to provide opportunities for business success. Microfinance providers in Malaysia still conduct their funding activities by cash and cheque. These institutions are at high risk, as the paper-based system is slow and prone to human error, as well as requiring a major annual reconciliation process. The global transformation of financial services and the growing involvement of technology, innovation and new business activities have progressively made the risk management profile more subjective and diversified. The persistent, complex and dynamic nature of risk management activities in these institutions arises from highly automated technological advancements, which may manifest in a variety of ways throughout the financial services sector. This study seeks to examine the current operational risk management experienced by microfinance providers in Malaysia; to investigate current practices on facilitator control factor mechanisms; and to explore how the adoption of technology, innovation and management accounting practices would affect the risk management process of the operation system of microfinance providers in Malaysia. A case study method was employed. The case study also examines how the established role of management accounting can be used to mitigate risk management activities towards accountability, as information or guidance for microfinance providers. An empirical element obtained with a qualitative method is needed in this study, where rich and in-depth information is essential to understand these institutional phenomena.
This study is expected to propose a theoretical model for the implementation of technology, innovation and management accounting practices into the operation system, to improve internal control and subsequently mitigate risk so that microfinance providers can be more successful.

Keywords: microfinance, accountability, operational risks, management accounting practices

Procedia PDF Downloads 438
594 Numerical Investigation of Effect of Throat Design on the Performance of a Rectangular Ramjet Intake

Authors: Subrat Partha Sarathi Pattnaik, Rajan N.K.S.

Abstract:

Integrated rocket-ramjet engines are highly suitable for long-range missile applications. Designing fixed-geometry intakes for such missiles that can operate efficiently over a range of operating conditions is a highly challenging task. Hence, the present study aims to evaluate the effect of throat design on the performance of a rectangular mixed-compression intake for operation in the Mach number range of 1.8 to 2.5. The analysis has been carried out at four Mach numbers of 1.8, 2, 2.2 and 2.5 and at two angles of attack of +5 and +10 degrees. For the throat design, three different throat heights have been considered: one corresponding to a 3-external-shock design and two corresponding to a 2-external-shock design, leading to different internal contraction ratios. The on-design Mach number for the study is M 2.2. To obtain the viscous flow field in the intake, the theoretical designs have been subjected to computational fluid dynamic analysis, for which the Favre-averaged Navier-Stokes (FANS) equations with the two-equation SST k-ω model have been solved. The analysis shows that, for zero angle of attack at on-design and high off-design Mach number operation, the three-ramp design leads to a higher total pressure recovery (TPR) than the two-ramp design at both contraction ratios, while maintaining the same mass flow ratio (MFR). At low off-design Mach numbers, however, the total pressure shows the opposite trend: it is maximum for the two-ramp, low-contraction-ratio design, due to the lower shock loss across the external shocks. Similarly, the MFR is higher for the low-contraction-ratio design, as the external ramp shocks move closer to the cowl. At both angle-of-attack conditions and over the complete range of Mach numbers, the total pressure recovery and mass flow ratios are highest for the two-ramp, low-contraction design, due to the lower stagnation pressure loss across the detached bow shock formed at the ramp and the lower mass spillage.
Hence, the low-contraction design is found to give better off-design performance.
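The total pressure recovery discussed above can be illustrated with the classical stagnation-pressure ratio across a shock; for a multi-shock intake, the system TPR is the product of the ratios across the individual shocks. This is a textbook gas-dynamics sketch, not the CFD post-processing used in the study.

```python
def shock_tpr(m_n, gamma=1.4):
    """Stagnation-pressure ratio p02/p01 across a shock, given the Mach
    number component normal to the shock (m_n >= 1). For an oblique ramp
    shock, m_n = M * sin(beta), where beta is the shock angle."""
    a = ((gamma + 1) * m_n**2 / ((gamma - 1) * m_n**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * m_n**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

# TPR of a shock system is the product over its shocks. Splitting the
# compression into weaker shocks recovers more stagnation pressure than
# one strong normal shock, which is why multi-ramp designs help:
system_tpr = shock_tpr(1.2) * shock_tpr(1.3)  # two weak shocks
single_tpr = shock_tpr(2.0)                   # one strong normal shock
```

The illustrative normal Mach numbers (1.2, 1.3, 2.0) are arbitrary; the trade-off they show mirrors the two-ramp versus three-ramp comparison in the abstract.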

Keywords: internal contraction ratio, mass flow ratio, mixed compression intake, performance, supersonic flows

Procedia PDF Downloads 108
593 The Effects of Extreme Precipitation Events on Ecosystem Services

Authors: Szu-Hua Wang, Yi-Wen Chen

Abstract:

Urban ecosystems are complex coupled human-environment systems. They contain abundant natural resources for producing natural assets and attract urban assets that consume natural resources for urban development. Urban ecosystems provide several ecosystem services, including provisioning, regulating, cultural and supporting services. Rapid global climate change exposes urban ecosystems and their ecosystem services to various natural disasters. Numerous natural disasters have occurred around the world under the constant changes in the frequency and intensity of extreme weather events over the past two decades. In Taiwan, hydrological disasters have received particular attention due to the potentially high sensitivity of Taiwan's cities to climate change and its impacts. However, climate change not only causes extreme weather events directly but also indirectly affects the interactions among humans, ecosystem services and their dynamic feedback processes. Therefore, this study adopts a systematic method, solar energy synthesis, based on the concept of eco-energy analysis. The Taipei area, the most densely populated area in Taiwan, is selected as the study area. The changes in ecosystem services between 2015 and Typhoon Soudelor have been compared in order to investigate the impacts of extreme precipitation events on ecosystem services. The results show that forest areas are generally the largest contributors of energy to ecosystem services in the Taipei area. Different soil textures in different subsystems have different upper limits of water content or substances. The major ecosystem service contribution of the study area is natural hazard regulation, provided by the surface water resource areas. During Typhoon Soudelor, the freshwater supply in the forest areas became the main contribution. Erosion control was the main ecosystem service affected by Typhoon Soudelor.
The second and third main ecosystem services were hydrologic regulation and food supply. Due to the interactions among ecosystem services, fresh water supply, water purification, and waste treatment had been affected severely.

Keywords: ecosystem, extreme precipitation events, ecosystem services, solar energy synthesis

Procedia PDF Downloads 148
592 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

The National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional testing, environmental qualification testing and so on. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold-start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated in the autonomous FORMOSAT-7 NSPO-built satellite, scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, etc., which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), with currently only 50% of its throughput in use. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port software algorithms with reused code and a large computational load, such as signal acquisition and correlation, to an FPGA, whose processor is responsible for operational control, the navigation solution, orbit propagation and so on. Since FPGA technology is developing and evolving rapidly, the new system architecture upgraded via an FPGA should be able to achieve the goal of a multi-mode, multi-band, high-precision navigation receiver or scientific receiver. Finally, test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion.
This paper explains the detailed DSP/FPGA architecture, development and test results, and the goals of the next development stage of this receiver.
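To illustrate the kind of baseband task being ported to the FPGA, the sketch below shows a toy serial-search code-phase acquisition by circular correlation. It is a hypothetical illustration with a random stand-in for a PRN code; the receiver's actual acquisition algorithms are not described in this abstract.

```python
import numpy as np

def acquire_code_phase(received, replica):
    """Serial-search acquisition sketch: circularly slide a local PRN code
    replica over the received samples and return the lag of the
    correlation peak (the estimated code phase)."""
    corrs = [float(np.dot(received, np.roll(replica, lag)))
             for lag in range(len(replica))]
    return int(np.argmax(corrs))

rng = np.random.default_rng(7)
code = rng.choice([-1.0, 1.0], size=128)                   # stand-in PRN code
received = np.roll(code, 37) + 0.2 * rng.normal(size=128)  # delayed + noisy
```

This brute-force correlation over all lags is exactly the kind of regular, compute-heavy loop that maps well onto FPGA fabric, freeing the processor for control and the navigation solution.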

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 369
591 In Silico Screening, Identification and Validation of Cryptosporidium hominis Hypothetical Protein and Virtual Screening of Inhibitors as Therapeutics

Authors: Arpit Kumar Shrivastava, Subrat Kumar, Rajani Kanta Mohapatra, Priyadarshi Soumyaranjan Sahu

Abstract:

Computational approaches to predict the structure, function and other biological characteristics of proteins are becoming more common in comparison to traditional methods in drug discovery. Cryptosporidiosis is a major zoonotic diarrheal disease, particularly in children, caused primarily by Cryptosporidium hominis and Cryptosporidium parvum. Currently, there are no vaccines for cryptosporidiosis, and the recommended drugs are not effective. With the availability of the complete genome sequence of C. hominis, new targets have been recognized for the development of effective and better drugs and/or vaccines. We identified a unique hypothetical epitopic protein in the C. hominis genome through BLASTP analysis. A 3D model of the hypothetical protein was generated using the I-TASSER server through a threading methodology. The quality of the model was validated through a Ramachandran plot by the PROCHECK server. The functional annotation of the hypothetical protein through the DALI server revealed structural similarity with human transportin 3. Phylogenetic analysis also showed that the C. hominis hypothetical protein (CUV04613) is closely related to human transportin 3. The 3D protein model was further subjected to a virtual screening study with inhibitors from the ZINC database using Dock Blaster software. The docking study reported N-(3-chlorobenzyl)ethane-1,2-diamine as the best inhibitor in terms of docking score, and docking analysis elucidated that Leu 525, Ile 526, Glu 528 and Glu 529 are critical residues for ligand-receptor interactions. A molecular dynamics simulation was run for 10 ns using GROMACS software to assess the reliability of the binding pose of the inhibitor-protein complex. Trajectories were analyzed at 2.5 ns intervals, in which hydrogen bonds with LEU-525 and GLY-530 were consistently present. Furthermore, antigenic determinants of the protein were determined with the help of DNASTAR software.
Our findings show great potential to provide insights into the development of new drugs or vaccines for the control and prevention of cryptosporidiosis in humans and animals.
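The hydrogen bonds reported in the trajectories are typically detected with a geometric criterion combining a donor-acceptor distance cutoff and an angle cutoff. The sketch below uses common default cutoffs (3.5 Å, 30°); these are illustrative assumptions, not necessarily the exact settings used in the analysis.

```python
import numpy as np

def is_hbond(donor, hydrogen, acceptor, d_max=3.5, angle_max=30.0):
    """Geometric hydrogen-bond test on one frame: donor-acceptor distance
    <= d_max (Angstrom) AND hydrogen-donor-acceptor angle <= angle_max
    (degrees). Coordinates are 3-vectors."""
    donor, hydrogen, acceptor = (np.asarray(p, float)
                                 for p in (donor, hydrogen, acceptor))
    if np.linalg.norm(acceptor - donor) > d_max:
        return False
    v1 = hydrogen - donor
    v2 = acceptor - donor
    cos_a = (v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return bool(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= angle_max)
```

Counting frames in which this test holds for a given residue pair gives the kind of H-bond occupancy statistic reported for LEU-525 and GLY-530.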

Keywords: cryptosporidium hominis, hypothetical protein, molecular docking, molecular dynamics simulation

Procedia PDF Downloads 365
590 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has become apparent in recent years. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not sports competition but computer game competition: e-sport. The literature in this area has so far focused on determining the number of e-sport fans with elements of simple statistical description (mainly demographic characteristics such as age, gender and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper recognition of the determinants of the behavioral patterns underlying customers' selection of digitalized services, which, in the absence of available large data sets, can be achieved by using econometric simulations: multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of customers' behavioral patterns, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed a three-stage development scenario to be identified: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the Method of Simulated Moments.
The estimation of the agent-based model parameters and the sensitivity analysis reveal crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of the games as well as the sports disciplines, active and passive participation history, and individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive the participation history as inaccessible. In that case, they are prone to switch to new methods of service application: in the case of e-sport versus sports, to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding the methods of their application, from 'traditional' to digitalized.
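The estimation idea can be sketched with a toy model: simulate an adoption process for candidate parameter values and keep the value whose simulated moment best matches the observed one. Everything below (the adoption rule, the social-influence term, the parameter grid) is a hypothetical illustration, not the paper's model.

```python
import random

def simulate_adoption(p_interest, n_agents=1000, steps=3, seed=1):
    """Toy agent-based sketch: each step, a non-adopter starts spectating
    with probability p_interest scaled by the current adopter share
    (a crude social-influence effect). Returns the final adoption share."""
    rng = random.Random(seed)  # common random numbers across parameter values
    adopted = [False] * n_agents
    for _ in range(steps):
        share = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p_interest * (0.1 + share):
                adopted[i] = True
    return sum(adopted) / n_agents

def msm_estimate(observed_share, grid):
    """Method of Simulated Moments on a grid: pick the parameter whose
    simulated moment (final adoption share) is closest to the observed one."""
    return min(grid, key=lambda p: (simulate_adoption(p) - observed_share) ** 2)
```

Fixing the random seed across candidate parameters (common random numbers) keeps the simulated moment a deterministic function of the parameter, which is standard practice in simulated-moments estimation.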

Keywords: agent-based modeling, digitalized services, e-sport, spectator motives

Procedia PDF Downloads 172
589 Reduction of Concrete Shrinkage without the Use of Reinforcement

Authors: Martin Tazky, Rudolf Hela, Lucia Osuska, Petr Novosad

Abstract:

Concrete's volumetric changes are a natural process caused by the hydration of silicate minerals. These changes can lead to cracking and the subsequent destruction of the cementitious material's matrix. In most cases, cracks can be assessed as a negative effect of hydration, and in all cases they accelerate degradation processes; preventing the formation of these cracks is therefore the main effort. One possibility for eliminating this natural shrinkage process is the use of different types of dispersed reinforcement; for this application, steel and polymer reinforcement are preferably used. Apart from the reinforcement ordinarily used to counteract shrinkage, it is also possible to address the problem at its origin, through the composition of the concrete mix itself. There are many secondary raw materials that help reduce the heat of hydration and also the shrinkage of concrete during curing. Recent research shows the possibility of reducing shrinkage through the controlled formation of hydration products whose morphology could act like traditionally used dispersed reinforcement. This contribution deals with the possibility of the controlled formation of mono- and trisulphates, which are usually considered degradation minerals. The controlled formation of mono- and trisulphates in a cementitious composite can be classified as a self-healing ability: their crystal growth acts directly against the shrinkage tension, which reduces the risk of crack development. Controlled formation means that these crystals start to grow in the fresh state of the material (e.g., concrete) but stop right before they could cause any damage to the hardened material. Waste materials with a suitable chemical composition are very attractive precursors because of their added value in the form of reduced landscape pollution and, of course, low cost.
In this experiment, the possibility of using fly ash from fluidized bed combustion as a mono- and trisulphate-forming additive was investigated. The experiment itself was conducted on cement paste and concrete, and the specimens were subjected to a thorough analysis of physico-mechanical properties as well as microstructure from the moment of mixing up to 180 days. In the cement composites, the processes of hydration and shrinkage were monitored. In the mixture with the admixture of fluidized bed combustion fly ash, possible failures were identified by electron microscopy and by the dynamic modulus of elasticity. The results of the experiments show the possibility of reducing concrete shrinkage without using traditionally dispersed reinforcement.

Keywords: shrinkage, monosulphates, trisulphates, self-healing, fluidized fly ash

Procedia PDF Downloads 186
588 Investigation on Development of PV and Wind Power with Hydro Pumped Storage to Increase Renewable Energy Penetration: A Parallel Analysis of Taiwan and Greece

Authors: Robel Habtemariam

Abstract:

Globally, wind energy and photovoltaic (PV) solar energy are among the leading renewable energy sources (RES) in terms of installed capacity. In order to increase the contribution of RES to the power supply system, large-scale energy integration is required, mainly from wind energy and PV. In this paper, an investigation has been made of the electrical power supply systems of Taiwan and Greece, in order to integrate high levels of wind and photovoltaic (PV) power and thereby increase the penetration of renewable energy resources. Currently, both countries depend heavily on fossil fuels to meet demand and generate adequate electricity. Therefore, this study looks into the two countries' power supply systems by developing a methodology that includes the major power units. To perform the analysis, an approach for the simulation of power systems is formulated and applied, based on a non-dynamic analysis of the electrical system. This simulation calculates the energy contribution of the different types of power units, namely wind, PV, non-flexible and flexible power units. The calculation is done for three scenarios (2020, 2030 and 2050), where the first two are based on national targets and the 2050 scenario reflects ambitious global targets. By 2030 in Taiwan, the contributions of the power units are evaluated as 4.3% (wind), 3.7% (PV), 65.2% (non-flexible) and 25.3% (flexible), with 1.5% belonging to hydropower plants. In Greece, a much higher renewable energy contribution is observed for the same scenario, with 21.7% (wind), 14.3% (PV), 38.7% (non-flexible), 14.9% (flexible) and 10.3% (hydro). Moreover, the study examines the ability of the power systems to deal with the variable nature of wind and PV generation.
For this reason, an investigation has also been made of the use of combined wind power with pumped storage systems (WPS) to enable the system to exploit the curtailed wind energy and surplus PV, and thus to increase the wind and PV installed capacity and replace peak supply by conventional power units. Results show that the feasibility of pumped storage can be justified in the high scenario (that is, the 2050 scenario) of RES integration, especially in the case of Greece.
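The non-dynamic energy-contribution calculation can be sketched as simple merit-order accounting: for each period, demand is met first by RES, then by non-flexible (must-run) capacity, with flexible units covering the residual and excess RES counted as curtailment. The rule and the numbers below are an illustrative assumption, not the authors' exact methodology.

```python
def dispatch(demand, res, baseload_cap, flexible_cap):
    """Non-dynamic per-period dispatch sketch. demand and res are
    sequences of per-period energy; capacities are per-period limits.
    Returns totals served by each class, curtailed RES and unserved demand."""
    out = {"res": 0.0, "baseload": 0.0, "flexible": 0.0,
           "curtailed": 0.0, "unserved": 0.0}
    for d, r in zip(demand, res):
        use_res = min(d, r)              # RES serves demand first
        out["res"] += use_res
        out["curtailed"] += r - use_res  # excess RES is curtailed
        residual = d - use_res
        use_base = min(residual, baseload_cap)  # then must-run units
        out["baseload"] += use_base
        residual -= use_base
        use_flex = min(residual, flexible_cap)  # flexible units cover the rest
        out["flexible"] += use_flex
        out["unserved"] += residual - use_flex
    return out

totals = dispatch(demand=[10, 10], res=[4, 12], baseload_cap=5, flexible_cap=10)
```

In such a framework, pumped storage would absorb the "curtailed" entry and release it later in place of flexible conventional generation, which is the mechanism the WPS analysis exploits.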

Keywords: large scale energy integration, photovoltaics solar energy, pumped storage systems, renewable energy sources

Procedia PDF Downloads 277
587 Co-Synthesis of Exopolysaccharides and Polyhydroxyalkanoates Using Waste Streams: Solid-State Fermentation as an Alternative Approach

Authors: Laura Mejias, Sandra Monteagudo, Oscar Martinez-Avila, Sergio Ponsa

Abstract:

Bioplastics are gaining attention as potential substitutes for conventional fossil-derived plastics and as new components of specialized applications in different industries. Besides, they constitute a sustainable alternative, since they are biodegradable and can be obtained from renewable sources. Thus, agro-industrial wastes appear to be potential substrates for bioplastics production using microorganisms, considering they are a suitable source of nutrients, low-cost and available worldwide; this approach contributes to the biorefinery and circular economy paradigm. The present study assesses solid-state fermentation (SSF) technology for the co-synthesis of exopolysaccharides (EPS) and polyhydroxyalkanoates (PHA), two attractive biodegradable bioplastics, using a leftover of the brewing industry, brewer's spent grain (BSG). After an initial screening of diverse PHA-producing bacteria, it was found that Burkholderia cepacia presented the highest EPS and PHA production potential via SSF of BSG. Thus, B. cepacia served to identify the most relevant aspects affecting EPS+PHA co-synthesis at lab scale (100 g). Since these are growth-dependent processes, they were monitored online through oxygen consumption using a dynamic respirometric system, but also by quantifying the biomass production (gravimetrically) and the obtained products (EtOH precipitation for EPS, and solid-liquid extraction coupled with GC-FID for PHA). Results showed that B. cepacia grew up to 81 mg per gram of dry BSG (gDM) at 30 °C after 96 h, representing up to 618 times the values found for the other tested strains. The crude EPS production was 53 mg g⁻¹DM (2% carbohydrates), but purity reached 98% after a dialysis purification step. Simultaneously, B. cepacia accumulated up to 36% (dry basis) of the produced biomass as PHA, mainly composed of polyhydroxybutyrate (P3HB).
The maximum PHA production was reached after 48 h with 12.1 mg g⁻¹DM, threefold the levels previously reported using SSF. Moisture content and aeration strategy were the most significant variables affecting the simultaneous production. These results show the potential of co-synthesis via SSF as an attractive alternative to enhance bioprocess feasibility for obtaining these bioplastics in residue-based systems.

Keywords: bioplastics, brewer’s spent grain, circular economy, solid-state fermentation, waste to product

Procedia PDF Downloads 143
586 Simon Says: What Should I Study?

Authors: Fonteyne Lot

Abstract:

SIMON (Study capacities and Interest Monitor) is a freely accessible online self-assessment tool that allows secondary education pupils to evaluate their interests and capacities in order to choose a post-secondary major that maximally suits their potential. The tool consists of two broad domains that correspond with two general questions pupils ask: 'What study fields interest me?' and 'Am I capable of succeeding in this field of study?'. The first question is addressed by a RIASEC-type interest inventory that links personal interests to post-secondary majors. Pupils are provided with a personal profile and an overview of majors with their degree of congruence. The output is dynamic: respondents can manipulate their score and compare their results to the profile of all fields of study. That way they are stimulated to explore the broad range of majors. To answer whether pupils are capable of succeeding in a preferred major, a battery of tests is provided. This battery comprises a range of factors that are predictive of academic success. Traditional predictors such as (educational) background and cognitive variables (mathematical and verbal skills) are included. Moreover, non-cognitive predictors of academic success (such as motivation, test anxiety, academic self-efficacy, and study skills) are assessed. These non-cognitive factors are generally not included in admission decisions, although research shows they are incrementally predictive of success and are less discriminating. These tests inform pupils on potential causes of success and failure. More importantly, pupils receive their personal chances of success per major. These differential probabilities are validated through the underlying research on the academic success of students. For example, this research has shown that we can identify 22% of the failing students in psychology and educational sciences; in this group, our prediction is 95% accurate. 
SIMON leads more students to a suitable major, which in turn improves student success and retention. Apart from these benefits, the instrument grants insight into risk factors of academic failure. It also supports and fosters the development of evidence-based remedial interventions and therefore enables a more efficient use of means.
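The per-major "chance of success" output described above can be sketched as a probabilistic classifier trained on cognitive and non-cognitive predictors. The sketch below is purely illustrative: the feature set, synthetic data, and risk threshold are assumptions, not SIMON's actual model.

```python
# Hypothetical sketch of a success-probability model in the spirit of SIMON.
# Features and coefficients are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Illustrative predictors: math skill, verbal skill, motivation, test anxiety
X = rng.normal(size=(n, 4))
# Synthetic "passed first year" outcome: skills and motivation help, anxiety hurts
logit = 0.9 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2] - 0.4 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
proba = model.predict_proba(X)[:, 1]   # personal chance of success per pupil

# Flag a small at-risk group, mirroring the style of the reported
# "22% of failing students identified with 95% accuracy" result
at_risk = proba < 0.2
print(f"{at_risk.mean():.0%} of pupils flagged as at-risk")
```

In practice the threshold would be chosen to trade off coverage of the failing group against the precision of the flag, which is what the reported 22%/95% figures describe.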

Keywords: academic success, online self-assessment, student retention, vocational choice

Procedia PDF Downloads 403
585 The Need for a One Health and Welfare Approach to Animal Welfare in Industrial Animal Farming

Authors: Clinton Adas

Abstract:

Antibiotic resistance has been identified by the World Health Organisation as a major threat of the 21st Century. While many factors contribute to this, one of the more significant is industrial animal farming and its effect on the food chain and environment. Livestock consume a significant portion of antibiotics sold globally, and these are used to make animals grow faster for profit purposes, to prevent illness caused by inhumane living conditions, and to treat disease when it breaks out. Many of these antibiotics provide little benefit to animals, and most are the same as those used by humans, including those deemed critical to human health that should therefore be used sparingly. Antibiotic resistance contributes to growing numbers of illnesses and deaths in humans, and the excess usage of these medications results in waste that enters the environment and is harmful to many ecological processes. This combination of antimicrobial resistance and environmental degradation furthermore harms the economic well-being and prospects of many. Using an interdisciplinary approach including medical, environmental, economic, and legal studies, the paper evaluates the dynamic between animal welfare and commerce and argues that while animal welfare is not of great concern to many, this approach is ultimately harming human welfare too. It is, however, proposed that both could be addressed under a One Health and Welfare approach, as we cannot continue to ignore the linkages between animals, the environment, and people. 
The evaluation of industrial animal farming is therefore considered through three aspects – the environmental impact, which is measured by pollution that causes environmental degradation; the human impact, which is measured by the rise of illnesses from pollution and antibiotics resistance; and the economic impact, which is measured through costs to the health care system and the financial implications of industrial farming on the economic well-being of many. These three aspects are considered in light of the Sustainable Development Goals that provide additional tangible metrics to evidence the negative impacts. While the research addresses the welfare of farmed animals, there is potential for these principles to be extrapolated into other contexts, including wildlife and habitat protection. It must be noted that while the question of animal rights in industrial animal farming is acknowledged and of importance, this is a separate matter that is not addressed here.

Keywords: animal and human welfare, industrial animal farming, one health and welfare, sustainable development goals

Procedia PDF Downloads 84
584 Factors That Determine International Competitiveness of Agricultural Products in Latin America 1990-2020

Authors: Oluwasefunmi Eunice Irewole, Enrique Armas Arévalos

Abstract:

Agriculture has played a crucial role in the economy and development of many countries. Moreover, the basic needs for human survival, food, shelter, and clothing, are linked to agricultural production. Agriculture provides countries with food and raw materials for different goods, such as shelter, medicine, fuel, and clothing, which has led to increases in incomes, livelihoods, and standards of living. This study aimed at analysing the relationship between the international competitiveness of agricultural products and area, fertilizer, labour force, economic growth, foreign direct investment, exchange rate, and inflation rate in Latin America during the period 1991 to 2019. Panel data econometric methods were used: cross-section dependence (Pesaran test), unit root (cross-sectionally augmented Dickey-Fuller and cross-sectionally augmented Im, Pesaran, and Shin tests), cointegration (Pedroni and Fisher-Johansen tests), and heterogeneous causality (Hurlin and Dumitrescu test). The results reveal that the model has cross-sectional dependency and that the variables are integrated of order one, I(1). The fully modified OLS and dynamic OLS estimators were used to examine the existence of a long-term relationship, and it was found that one existed among the selected variables. The study revealed a positive significant relationship between the international competitiveness of agricultural raw materials and area, fertilizer, labour force, economic growth, and foreign direct investment, while international competitiveness has a negative relationship with the exchange rate and inflation. The economic policy recommendation deduced from this investigation is that foreign direct investment and the labour force contribute positively to increasing the international competitiveness of agricultural products.
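The cross-sectionally augmented Dickey-Fuller (CADF) step mentioned above can be sketched in a few lines: each unit's level change is regressed on its own lagged level plus cross-section averages, and the per-unit t-statistics are averaged into a CIPS-style statistic. This is a generic textbook sketch on synthetic data, not the authors' estimation code.

```python
# Minimal sketch of CADF regressions of the kind used in Pesaran-type
# panel unit-root tests. Data are synthetic random walks.
import numpy as np

rng = np.random.default_rng(1)
N, T = 10, 40                                     # units (countries), years
y = np.cumsum(rng.normal(size=(N, T)), axis=1)    # unit-root series

ybar = y.mean(axis=0)                             # cross-section average
t_stats = []
for i in range(N):
    dy = np.diff(y[i])
    # Regressors: lagged own level, lagged CS average, ΔCS average, constant
    X = np.column_stack([y[i][:-1], ybar[:-1], np.diff(ybar), np.ones(T - 1)])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    t_stats.append(beta[0] / se)                  # t-stat on the lagged level

cips = np.mean(t_stats)                           # CIPS-style average statistic
print(f"CIPS statistic: {cips:.2f}")
```

The averaged statistic is then compared against simulated critical values; failure to reject at levels but rejection in first differences is what "integrated of order one" summarizes.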

Keywords: revealed comparative advantage, agricultural products, area, fertilizer, economic growth, granger causality, panel unit root

Procedia PDF Downloads 100
583 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA FINE/Design3D coupled with FINE/Turbo was used for the numerical investigation, the design of experiments, and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, defined by multiple control points, was created along virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values chosen automatically for the control points defined during parameterization. The optimization was performed with two algorithms, a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used in order to approach the global optimum; the successive design iterations were evaluated by the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective function was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant. 
The performance was quantified using a multi-objective function. In addition to these two classes of optimization method, four optimization cases were considered: the hub only, the shroud only, the combination of hub and shroud, and, as a fourth case, the shroud optimized using the already-optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization increased efficiency and reduced both total pressure loss and entropy. The combined hub-and-shroud optimization did not match the results achieved in the individual hub and shroud cases, possibly because there were too many control variables. The fourth case showed the best result because the optimized hub was used as the initial geometry for optimizing the shroud: efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and conjugate gradient methods were compared.
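The stochastic branch of such an optimization can be sketched as a plain genetic algorithm over the endwall control-point amplitudes. The sketch below is a toy: the "efficiency" function is a stand-in analytic objective, not a CFD or neural-network evaluation, and the control-point count is an assumption.

```python
# Toy sketch of a genetic algorithm searching control-point amplitudes
# of a non-axisymmetric endwall. The objective is a placeholder for the
# CFD/ANN evaluation used in the paper.
import random

random.seed(42)
N_POINTS = 6          # Bezier control points per endwall cut (illustrative)

def efficiency(genes):
    # Placeholder objective with a known optimum at genes = 0.3 each
    return 1.0 - sum((g - 0.3) ** 2 for g in genes)

def evolve(pop_size=30, generations=40, sigma=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(N_POINTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=efficiency, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_POINTS)   # one-point crossover
            child = [g + random.gauss(0, sigma)   # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=efficiency)

best = evolve()
print("best efficiency:", round(efficiency(best), 4))
```

In the surrogate-assisted variant described in the abstract, the expensive objective would be replaced by a trained neural network for most evaluations, with the flow solver called only to verify promising candidates.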

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 225
582 The Volume–Volatility Relationship Conditional to Market Efficiency

Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese

Abstract:

The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. An extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship behind this association remains an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain the relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t0 (measured by the local Hurst–Hölder exponent in a neighborhood of t0) equals the exponent of a fractional Brownian motion of parameter H(t0). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamic exponent as a measure of market efficiency. This allows us to measure, at any time t, the market's departures from the martingale property, i.e. from efficiency as stated by the Efficient Market Hypothesis. 
This approach is applied to financial markets using data for the S&P 500 index from 1978 to 2017. On the one hand, we find that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
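A pointwise Hurst exponent of the kind described above can be estimated on rolling windows from the ratio of absolute increments at two lags, since for a (locally) fractional Brownian motion E|X(t+k) - X(t)| scales like k^H. The sketch below uses this generic variogram-style estimator on a simulated Brownian path; it is not the authors' exact estimation procedure.

```python
# Rough sketch of a rolling-window pointwise Hurst-Holder estimate via
# the lag-1 / lag-2 absolute-increment ratio (variogram method).
import numpy as np

def local_hurst(x, window=200):
    """Estimate H(t) on sliding windows: H ~ log2(mean|lag-2 incr| / mean|lag-1 incr|)."""
    h = []
    for start in range(0, len(x) - window):
        w = x[start:start + window]
        m1 = np.mean(np.abs(np.diff(w)))          # lag-1 increments
        m2 = np.mean(np.abs(w[2:] - w[:-2]))      # lag-2 increments
        h.append(np.log2(m2 / m1))
    return np.array(h)

rng = np.random.default_rng(7)
prices = np.cumsum(rng.normal(size=2000))         # Brownian proxy, true H = 0.5
H = local_hurst(prices)
print(f"mean local H: {H.mean():.2f}")            # should hover near 0.5
```

Under the efficiency reading used in the paper, windows where the estimate stays close to 0.5 correspond to near-martingale (efficient) behavior, while persistent departures from 0.5 flag inefficiency.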

Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent

Procedia PDF Downloads 78
581 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure, such as canopy height. However, LiDAR's coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they do not carry height information. Hence, integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. 
The validation results show that the mean absolute error (MAE) for 2018 is 2.93 m for the RFR model and 1.71 m for the CNN model, while the MAE for 2021 is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
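The RFR branch of this workflow can be sketched as fitting a regressor that maps per-pixel Sentinel-2 band values to LiDAR-derived canopy heights, then scoring held-out pixels with MAE. The sketch below runs on synthetic data with an invented band-height relationship; band choice and model settings are assumptions, not the paper's configuration.

```python
# Illustrative sketch of Random Forest Regression for canopy height:
# Sentinel-2 band values as features, a LiDAR-derived CHM as target.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Stand-ins for 10 m Sentinel-2 bands (e.g. red, NIR, SWIR reflectance)
bands = rng.uniform(0, 1, size=(n, 3))
# Synthetic "LiDAR CHM" target: taller canopy with higher NIR, lower red
height = 20 * bands[:, 1] - 10 * bands[:, 0] + rng.normal(0, 1, n) + 10

X_tr, X_te, y_tr, y_te = train_test_split(bands, height, random_state=0)
rfr = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
chm_pred = rfr.predict(X_te)
mae = mean_absolute_error(y_te, chm_pred)
print(f"MAE: {mae:.2f} m")   # the paper reports 1.71-3.78 m on real data
```

In the real workflow the same fitted model would be applied pixel-wise to a full Sentinel-2 scene to produce the predicted CHM raster, which is then differenced against the airborne LiDAR CHM for accuracy assessment.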

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 92