Search results for: performance and quality
4572 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, the model was designed to keep the bottom-up and top-down approaches consistent. It also generates emission inventories from a reduced set of input parameters, adapted to the conditions existing in Morocco and in other developing countries. Although several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it quantifies the uncertainty of the calculated emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It improves on previous line source Gaussian plume formulas in accuracy, without being too demanding in terms of computational resources. In the case study presented here, the largest errors were associated with the ends of line source sections; these errors cancel between adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, combining a discretized source with analytical line source formulas reduces the error remarkably. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
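The discretized line source described above can be sketched in a few lines. This is an illustrative Python sketch using the standard ground-reflection Gaussian plume point-source formula with constant dispersion coefficients; real models (including the improved formulation of this abstract) make the dispersion coefficients grow with downwind distance and stability class, and all numbers below are hypothetical.

```python
import math

def point_source_conc(q, x, y, z, u, sigma_y, sigma_z, h=0.0):
    """Gaussian plume concentration (with ground reflection) from one point source.
    q: emission rate (g/s), u: wind speed (m/s), x downwind, y crosswind, z height (m)."""
    if x <= 0:
        return 0.0  # no upwind contribution in the plume model
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

def line_source_conc(q_per_m, length, n, x, y, z, u, sigma_y, sigma_z):
    """Approximate a crosswind line source by summing n point sources along y."""
    dy = length / n
    ys = [-length / 2 + (i + 0.5) * dy for i in range(n)]
    return sum(point_source_conc(q_per_m * dy, x, y - yi, z, u, sigma_y, sigma_z)
               for yi in ys)

# Hypothetical road segment: 200 m long, 0.01 g/s per metre, receptor 100 m downwind.
c = line_source_conc(0.01, 200.0, 400, 100.0, 0.0, 1.5, 3.0, 10.0, 6.0)
```

Once the segment spacing is small compared with the crosswind dispersion coefficient, refining the discretization further changes the result very little, which is why a moderate number of point sources per section suffices.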
Procedia PDF Downloads 450
4571 Effect of Sensory Manipulations on Human Joint Stiffness Strategy and Its Adaptation for Human Dynamic Stability
Authors: Aizreena Azaman, Mai Ishibashi, Masanori Ishizawa, Shin-Ichiroh Yamamoto
Abstract:
Sensory input plays an important role in the human postural control system, which initiates strategies to counteract unbalanced conditions and thus prevent falls. Previous studies have observed that joint stiffness can describe certain aspects of movement performance, but the correlation between balance ability and joint stiffness remains unknown. In this study, joint stiffening strategies at the ankle and hip were observed under different sensory manipulations, and their correlation with a conventional clinical test of balance ability (the Functional Reach Test) was investigated. To create unstable conditions, two different surface perturbations (tilt up-tilt down (TT) and forward-backward (FB)) at four different frequencies (0.2, 0.4, 0.6 and 0.8 Hz) were introduced. Furthermore, four different sensory manipulation conditions (involving vision and the vestibular system) were applied, and subjects were asked to maintain their position as far as possible. The results suggest that joint stiffness was high during difficult balance situations. Subjects with poorer balance generated higher average joint stiffness than those with good balance. In addition, the adaptation of the postural control system under repetitive external perturbation was reduced under sensory-limited conditions. Overall, analysis of the joint stiffening response makes it possible to predict unbalanced situations faced by humans.
Keywords: balance ability, joint stiffness, sensory, adaptation, dynamic
Procedia PDF Downloads 463
4570 A Temporal QoS Ontology For ERTMS/ETCS
Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien
Abstract:
Ontologies offer a means for representing and sharing information in many domains, particularly complex ones. For example, they can be used to represent and share the information of the System Requirement Specification (SRS) of complex systems, such as the SRS of ERTMS/ETCS written in natural language. Since this system is a real-time and critical system, generic ontologies, such as OWL and generic ERTMS ontologies, provide minimal support for modeling the temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one of the challenges is to enable the representation of dynamic features evolving in time within a generic ontology with minimal redesign of it. Separating temporal information from other information can help to predict system runtime operation and to properly design and implement it. In addition, it is helpful to provide reasoning and querying techniques over the temporal information represented in the ontology in order to detect potential temporal inconsistencies. Indeed, a user operation, such as adding a new constraint to existing planning constraints, can cause temporal inconsistencies, which can lead to system failures. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning and querying over temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers clarifies the distinction between non-QoS entities and QoS entities in the ontology. The upper generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation.
To evaluate our approach, an example of the proposed domain ontology for the handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, are given.
Keywords: system requirement specification, ERTMS/ETCS, temporal ontologies, domain ontologies
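The kind of temporal-inconsistency check that such reasoning rules support can be illustrated with a minimal interval-overlap sketch. This is plain Python rather than the ontology/OWL machinery of the paper, and the planning intervals are invented.

```python
def overlaps(a, b):
    """Two closed time intervals (start, end) overlap if neither ends before the other starts."""
    return a[0] <= b[1] and b[0] <= a[1]

def conflicts_with(planned, new):
    """Return the existing planning intervals a new constraint is inconsistent with,
    assuming each interval requires exclusive use of the resource."""
    return [p for p in planned if overlaps(p, new)]

# Hypothetical handover windows (seconds) already in the plan:
planned = [(0, 10), (15, 20)]
bad = conflicts_with(planned, (8, 16))   # spans both existing windows
```

A reasoner over the temporal layer would flag `bad` being non-empty as a temporal inconsistency before the new constraint is committed.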
Procedia PDF Downloads 425
4569 Identification of Blood Biomarkers Unveiling Early Alzheimer's Disease Diagnosis Through Single-Cell RNA Sequencing Data and Autoencoders
Authors: Hediyeh Talebi, Shokoofeh Ghiam, Changiz Eslahchi
Abstract:
Traditionally, Alzheimer's disease research has focused on genes with significant fold changes, potentially neglecting subtle but biologically important alterations. Our study introduces an integrative approach that highlights genes crucial to underlying biological processes, regardless of their fold change magnitude. Alzheimer's single-cell RNA-seq data from peripheral blood mononuclear cells (PBMCs) were extracted from the Gene Expression Omnibus (GEO). After quality control, normalization, scaling, batch effect correction, and clustering, differentially expressed genes (DEGs) were identified with adjusted p-values less than 0.05. These DEGs were categorized by cell type, resulting in four datasets, each corresponding to a distinct cell type. To distinguish between cells from healthy individuals and those with Alzheimer's, an adversarial autoencoder with a classifier was employed, allowing the separation of healthy and diseased samples. To identify the most influential genes in this classification, the weight matrices of the network's encoder and classifier components were multiplied, and the analysis focused on the top 20 genes. It revealed that while some of these genes exhibit a high fold change, others do not. These genes, which may be overlooked by previous methods because of their low fold change, were shown to be significant in our study. The findings highlight the critical role of genes with subtle alterations in diagnosing Alzheimer's disease, a facet frequently overlooked by conventional methods. These genes demonstrate remarkable discriminatory power, underscoring the need to integrate biological relevance with statistical measures in gene prioritization.
This integrative approach enhances our understanding of the molecular mechanisms in Alzheimer's disease and provides a promising direction for identifying potential therapeutic targets.
Keywords: alzheimer's disease, single-cell RNA-seq, neural networks, blood biomarkers
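The weight-chaining step for ranking genes can be sketched as below: a minimal numpy illustration with random stand-in matrices. The real encoder is nonlinear and multi-layer, so this linear product gives only the first-order picture, and the gene names, layer sizes and weights here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_latent = 50, 8                      # toy sizes, not the study's
gene_names = [f"gene_{i}" for i in range(n_genes)]

# Random stand-ins for trained weights: encoder (genes -> latent) and
# classifier head (latent -> one healthy-vs-Alzheimer's logit).
W_enc = rng.normal(size=(n_genes, n_latent))
w_clf = rng.normal(size=(n_latent, 1))

# Chain the two linear maps: each gene's net contribution to the class logit.
influence = (W_enc @ w_clf).ravel()            # shape (n_genes,)
top20 = np.argsort(np.abs(influence))[::-1][:20]
top_genes = [gene_names[i] for i in top20]
```

Ranking by the magnitude of this chained weight, rather than by fold change, is what lets low-fold-change genes surface.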
Procedia PDF Downloads 70
4568 Numerical Investigation of the Bio-fouling Roughness Effect on Tidal Turbine
Authors: O. Afshar
Abstract:
Unlike other renewable energy sources, tidal current energy is an extremely reliable, predictable and continuous energy source, as the current pattern and speed can be predicted throughout the year. A key concern associated with tidal turbines is their long-term reliability when operating in the hostile marine environment. Bio-fouling changes the physical shape and roughness of turbine components, hence altering overall turbine performance. This paper employs the Computational Fluid Dynamics (CFD) method to quantify the effects of this problem based on the obtained flow field information. The simulation is carried out on a NACA 63-618 aerofoil. The Reynolds-Averaged Navier-Stokes (RANS) equations with the Shear Stress Transport (SST) turbulence model are used to simulate the flow around the model. Different levels of fouling are studied on the 2D aerofoil surface, with quantified fouling height and density. In terms of lift and drag coefficients, the numerical results show good agreement with experiments carried out in a wind tunnel. The numerical results indicate that an increase in fouling thickness causes an increase in the drag coefficient and a reduction in the lift coefficient. Moreover, the pressure gradient gradually becomes adverse as the height of fouling increases. In addition, the turbulent kinetic energy contours reveal that it increases with fouling height and extends into the wake due to flow separation.
Keywords: tidal energy, lift coefficient, drag coefficient, roughness
Procedia PDF Downloads 387
4567 Multivariate Rainfall Disaggregation Using MuDRain Model: Malaysia Experience
Authors: Ibrahim Suliman Hanaish
Abstract:
Disaggregation of daily rainfall using stochastic models formulated with a multivariate approach (MuDRain) is discussed in this paper. Seven rain gauge stations in Peninsular Malaysia are considered, at distances from the reference station ranging from 4 km to 160 km. The hourly rainfall data cover the period from 1973 to 2008, and July and November are taken as examples of dry and wet periods. The cross-correlation among the rain gauges is computed either from the available hourly rainfall information at the neighboring stations or, where unavailable, extracted from daily records. This paper discusses the applicability of the MuDRain model for disaggregating daily rainfall into hourly rainfall for both sources of cross-correlation. The goodness of fit of the model was assessed by the reproduction of statistics such as the means, variances, coefficients of skewness, lag-zero cross-correlation coefficients and lag-one autocorrelation coefficients. The correlation coefficients extracted from daily records were found to be slightly higher than those based on the available hourly rainfall, especially for neighboring stations no more than 28 km apart. The results also showed that the MuDRain model did not reproduce the statistics very well, and that the actual hyetographs were poorly reproduced by the synthetic hourly rainfall data. Meanwhile, a good fit was found between the distribution functions of the historical and synthetic hourly rainfall. These discrepancies are unavoidable because of the low cross-correlation of hourly rainfall. The overall performance indicated that the MuDRain model would not be an appropriate choice for disaggregating daily rainfall.
Keywords: rainfall disaggregation, multivariate disaggregation rainfall model, correlation, stochastic model
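The fitting statistics used to judge the disaggregation can be computed as follows: a small numpy sketch on synthetic gamma-distributed "hourly rainfall". The data are invented, and only the statistics named in the abstract are implemented.

```python
import numpy as np

def fitting_statistics(x, y):
    """Goodness-of-fit statistics for a disaggregated series x against a
    neighboring series y: mean, variance, skewness, lag-one autocorrelation
    of x, and lag-zero cross-correlation between x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mean, var = x.mean(), x.var()
    skew = ((x - mean) ** 3).mean() / var ** 1.5
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    cross0 = np.corrcoef(x, y)[0, 1]
    return {"mean": mean, "variance": var, "skewness": skew,
            "lag1_autocorr": lag1, "lag0_crosscorr": cross0}

rng = np.random.default_rng(1)
hourly_a = rng.gamma(shape=0.5, scale=2.0, size=720)          # invented 30-day series
hourly_b = 0.8 * hourly_a + rng.gamma(0.5, 2.0, size=720)     # correlated neighbor
stats = fitting_statistics(hourly_a, hourly_b)
```

Comparing these statistics between the historical and the synthetic series is the comparison the abstract reports.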
Procedia PDF Downloads 523
4566 A Dynamic Solution Approach for Heart Disease Prediction
Authors: Walid Moudani
Abstract:
The healthcare environment is generally perceived as being information rich yet knowledge poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in the data. In fact, valuable knowledge can be discovered by applying data mining techniques to healthcare systems. In this study, a proficient methodology is presented for the extraction of significant patterns from coronary heart disease warehouses for heart attack prediction, which unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we propose to enumerate dynamically the optimal subsets of the reduced features of high interest by using the rough sets technique combined with dynamic programming. We then propose to validate the classification using a Random Forest (RF) decision tree to identify risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, expert knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system was developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.
Keywords: multi-classifier decisions tree, features reduction, dynamic programming, rough sets
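The subset-enumeration idea can be caricatured in a few lines. This sketch scores all two-feature subsets of synthetic data with a crude centroid-separation measure, standing in for a rough-set dependency degree; the dynamic-programming enumeration and the Random Forest validation of the actual study are not reproduced here, and every number is invented.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 6
X = rng.normal(size=(n, p))
# Synthetic "patients": only features 1 and 4 carry the class signal.
y = (X[:, 1] + 0.8 * X[:, 4] + 0.3 * rng.normal(size=n) > 0).astype(int)

def separation_score(X, y, subset):
    """Distance between class centroids over the chosen features: a crude
    stand-in for a rough-set dependency measure."""
    sel = list(subset)
    a, b = X[y == 0][:, sel], X[y == 1][:, sel]
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

# Enumerate every two-feature subset and keep the best-scoring one.
best = max(combinations(range(p), 2), key=lambda s: separation_score(X, y, s))
```

With the signal planted on features 1 and 4, the enumeration recovers exactly that pair, which is the behaviour the reduced-feature search aims for at scale.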
Procedia PDF Downloads 412
4565 Thermal Analysis and Experimental Procedure of Integrated Phase Change Material in a Storage Tank
Authors: Chargui Ridha, Agrebi Sameh
Abstract:
The integration of phase change materials (PCM), which store thermal energy during the period of sunshine and release it during the night, provides a complement of free energy that improves the system formed by a solar collector, a storage tank, and a heat exchanger. This paper is dedicated to the design of a thermal storage tank based on a PCM heat exchanger. The work is divided into two parts. An experimental part, using paraffin as the PCM, was carried out at the Laboratory of Thermal Processes of Borj Cedria in order to improve the performance of the system formed by coupling a flat solar collector with a thermal storage tank, and subsequently to determine the influence of the PCM on the whole system. This phase relies on measurement instrumentation, namely a differential scanning calorimeter (DSC) and a thermal analyzer (Hot Disk), to determine the physical properties of the chosen paraffin (PCM). The second phase involves the detailed design of the PCM heat exchanger, which is incorporated into a thermal storage tank and coupled with a solar air collector installed at the Research and Technology Centre of Energy (CRTEn). A numerical part based on the TRNSYS and Fluent software, as well as the finite volume method, was carried out for the storage tank systems in order to determine the temperature distribution in each chosen system.
Keywords: phase change materials, storage tank, heat exchanger, flat plate collector
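The capacity of a PCM store of this kind follows the sensible-plus-latent heat balance that the DSC measurements feed. A minimal sketch with illustrative paraffin-like property values (not the measured values from this work):

```python
def pcm_stored_heat(mass_kg, cp_solid, cp_liquid, latent_heat, t_start, t_melt, t_end):
    """Heat stored (J) charging a PCM from t_start, through melting at t_melt,
    up to t_end: sensible heat in the solid + latent heat of fusion +
    sensible heat in the liquid. cp in J/(kg K), latent_heat in J/kg."""
    q_sens_solid = mass_kg * cp_solid * (t_melt - t_start)
    q_latent = mass_kg * latent_heat
    q_sens_liquid = mass_kg * cp_liquid * (t_end - t_melt)
    return q_sens_solid + q_latent + q_sens_liquid

# Illustrative paraffin-like values: 10 kg, cp 2000/2200 J/(kg K),
# latent heat 200 kJ/kg, charged from 25 C through a 55 C melting point to 65 C.
q = pcm_stored_heat(10.0, 2000.0, 2200.0, 200e3, 25.0, 55.0, 65.0)
```

With these invented values the latent term dominates, which is why a PCM tank stores far more heat than a water tank of equal temperature swing.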
Procedia PDF Downloads 100
4564 Effects of Commonly-Used Inorganic Salts on the Morphology and Electrochemical Performance of Carboxylated Cellulose Nanocrystals Doped Polypyrrole Supercapacitors
Authors: Zuxinsun, Samuel Eyley, Yongjian Guo, Reeta Salminen, Wim Thielemans
Abstract:
Polypyrrole (PPy), one of the most promising pseudocapacitor electrode materials, has attracted large research interest due to its low cost, high electrical conductivity and easy fabrication. However, the limited capacitance and cycling stability of PPy films hinder their practical applications. In this study, by adding different amounts of KCl to the pyrrole and CNC-COO⁻ system, three-dimensional, porous, reticular PPy films were electropolymerized without the assistance of any template or substrate. When KCl was replaced with NaCl, KBr, or NaClO4, porous PPy films were still obtained, rather than the relatively dense PPy films deposited from pyrrole and CNC-COO⁻ or pyrrole and KCl alone. The nucleation and growth mechanisms of the PPy films were studied in the deposition electrolyte with and without salts to illustrate the evolution of the morphology from a relatively dense to a porous structure. The capacitance of the PPy/CNC-COO⁻-Cl-(ClO4-)_0.5 films increased from 160.6 to 183.4 F g⁻¹ at 0.2 A g⁻¹. More importantly, at a high current density of 2.0 A g⁻¹ (20 mA cm⁻²), the PPy/CNC-COO⁻-Cl-(ClO4-)_0.5 films exhibited an excellent capacitance of 125.0 F g⁻¹ (1.19 F cm⁻²), an increase of about 203.7% over PPy/CNC-COO⁻ films. 103.3% of the initial capacitance was retained after 5000 cycles at 2 A g⁻¹ (20 mA cm⁻²) for the PPy/CNC-COO⁻-Cl-(ClO4-)_0.5 supercapacitor. The analyses reveal that the porous, reticular PPy/CNC-COO⁻-salts films open up more active reaction areas to store charge, while the stiff, ribbon-like CNC-COO⁻, acting as permanent dopants, improves the strength and stability of the films. Our demonstration provides a simple and practical way to deposit PPy-based supercapacitors with high capacitance and cycling ability.
Keywords: polypyrrole, supercapacitors, cellulose nanocrystals, porous and reticular structure, inorganic salts
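Gravimetric capacitances of the kind quoted above are conventionally obtained from galvanostatic discharge via C = I·Δt/(m·ΔV). A minimal sketch with invented mass, discharge-time and voltage-window values, chosen only so the arithmetic is easy to follow:

```python
def gravimetric_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance in F/g from a galvanostatic discharge:
    C = I * dt / (m * dV)."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Invented example: a 5 mg film discharged at a current density of 2.0 A/g
# over a 0.8 V window in 50 s.
mass = 0.005              # g
current = 2.0 * mass      # A (current density times film mass)
c_spec = gravimetric_capacitance(current, 50.0, mass, 0.8)
```

Note that at a fixed current density the film mass cancels, so the specific capacitance depends only on the discharge time and the voltage window.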
Procedia PDF Downloads 70
4563 Psychosocial Development: The Study of Adaptation and Development and Post-Retirement Satisfaction in Ageing Australians
Authors: Sahar El-Achkar, Mizan Ahmad
Abstract:
Poor adaptation of developmental milestones over the lifespan can significantly impact emotional experiences and Satisfaction with Life (SWL) post-retirement. Thus, it is important to understand how adaptive behaviour over the life course can predict emotional experiences, which are broadly either Positive Affect (PA) or Negative Affect (NA). This study sought to explore the impact of successful adaptation of developmental milestones throughout one's life on emotional experiences and satisfaction with life following retirement. A cross-sectional self-report survey was completed by 132 Australian retirees between the ages of 55 and 70 years. Three hierarchical regression models were fitted, controlling for age and gender, to predict PA, NA, and SWL. The full model predicting PA was statistically significant overall, F(8, 121) = 17.97, p < .001, accounting for 57% of the variability in PA; Industry/Inferiority was a significant predictor of PA. The full model predicting NA was statistically significant overall, F(8, 121) = 12.00, p < .001, accounting for 51% of the variability in NA; age and Trust/Mistrust were significant predictors of NA. The full model predicting SWL was statistically significant overall, F(8, 121) = 11.05, p < .001, accounting for 45% of the variability in SWL; Trust/Mistrust and Ego Integrity/Despair were significant predictors of SWL. A sense of industry post-retirement is important in generating PA. These results highlight that individuals presenting with adaptation and identity issues are likely to present with adjustment challenges and unpleasant emotional experiences post-retirement.
This supports the importance of identifying and understanding the benefits of successful adaptation and development throughout the lifespan and their significance for the self-concept. Most importantly, the quality of life of many may be improved, and the future risk of continued poor emotional experiences and SWL post-retirement may be mitigated. Specifically, the clinical implication of these findings is that they support the promotion of successful adaptation over the life course and healthy ageing.
Keywords: adaptation, development, negative affect, positive affect, retirement, satisfaction with life
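The hierarchical-regression logic (enter age and gender first, then the developmental stage scores) can be sketched with plain least squares. The data below are synthetic, the effect sizes invented, and a single stage score stands in for the eight predictors of the actual models.

```python
import numpy as np

def ols_r2(X, y):
    """R-squared of an ordinary least squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

def model_f(r2, n, k):
    """Overall F statistic for a model with k predictors and n cases."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

rng = np.random.default_rng(3)
n = 130
age = rng.uniform(55, 70, n)
gender = rng.integers(0, 2, n).astype(float)
trust = rng.normal(size=n)                         # invented stage score
pa = 0.6 * trust + 0.01 * age + rng.normal(size=n)

r2_step1 = ols_r2(np.column_stack([age, gender]), pa)          # controls only
r2_step2 = ols_r2(np.column_stack([age, gender, trust]), pa)   # add stage score
f_full = model_f(r2_step2, n, 3)
```

The gain from `r2_step1` to `r2_step2` is the incremental variance the stage score explains over the demographic controls, which is the quantity the hierarchical models above are built to isolate.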
Procedia PDF Downloads 77
4562 Estimating the Receiver Operating Characteristic Curve from Clustered Data and Case-Control Studies
Authors: Yalda Zarnegarnia, Shari Messinger
Abstract:
Receiver operating characteristic (ROC) curves have been widely used in medical research to illustrate the performance of a biomarker in correctly distinguishing diseased and non-diseased groups. Correlated biomarker data arise in study designs that include subjects who share genetic or environmental factors. Information about this correlation might help to identify family members at increased risk of disease development, and may lead to initiating treatment to slow or stop progression to disease. Approaches appropriate to a case-control design matched by family identification must be able both to accommodate the correlation inherent in the design when estimating the biomarker's ability to differentiate between cases and controls, and to handle estimation from a matched case-control design. This talk reviews methods developed for ROC curve estimation in settings with correlated data from case-control designs and discusses the limitations of current methods for analyzing correlated familial paired data. An alternative approach using conditional ROC curves will be demonstrated, providing appropriate ROC curves for correlated paired data. The proposed approach uses the information about the correlation among biomarker values, producing conditional ROC curves that evaluate the ability of a biomarker to discriminate between diseased and non-diseased subjects in a familial paired design.
Keywords: biomarker, correlation, familial paired design, ROC curve
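For reference, the standard pooled empirical ROC curve and its AUC, which the conditional-ROC approach generalizes, can be computed as below. This sketch ignores the familial clustering that motivates the talk and uses synthetic biomarker values.

```python
import numpy as np

def roc_points(scores, labels):
    """Empirical ROC: (FPR, TPR) swept over every observed threshold."""
    order = np.argsort(-np.asarray(scores, float))
    labels = np.asarray(labels, float)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1.0 - labels) / (1.0 - labels).sum()
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def auc(fpr, tpr):
    """Trapezoidal area under the ROC curve."""
    return float(((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1])).sum() / 2)

rng = np.random.default_rng(4)
cases = rng.normal(1.0, 1.0, 300)        # synthetic biomarker, diseased subjects
controls = rng.normal(0.0, 1.0, 300)     # synthetic biomarker, non-diseased subjects
scores = np.concatenate([cases, controls])
labels = np.concatenate([np.ones(300), np.zeros(300)])
fpr, tpr = roc_points(scores, labels)
area = auc(fpr, tpr)
```

A conditional ROC analysis would instead compute such curves within the familial pairing, rather than pooling all subjects as here.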
Procedia PDF Downloads 241
4561 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems
Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu
Abstract:
In the research of automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method performs excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP
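The expanded retrieval step, querying a text index and a knowledge graph side by side, can be caricatured in a few lines. This toy uses bag-of-words cosine similarity and a dict-based graph in place of the paper's dense retriever and GNN; the corpus, entities and relations are all invented.

```python
import math

# Invented toy corpus and knowledge graph.
docs = {
    "d1": "gnn layers aggregate neighbor features",
    "d2": "retrieval augmented generation grounds answers in documents",
    "d3": "knowledge graphs store entities and relations",
}
graph = {  # entity -> related entities
    "rag": ["retrieval", "generation", "knowledge graph"],
    "knowledge graph": ["entity", "relation", "gnn"],
}

def score(query, text):
    """Bag-of-words cosine similarity, a minimal stand-in for a dense retriever."""
    q, t = query.lower().split(), text.lower().split()
    vocab = set(q) | set(t)
    qv = [q.count(w) for w in vocab]
    tv = [t.count(w) for w in vocab]
    dot = sum(a * b for a, b in zip(qv, tv))
    norm = math.sqrt(sum(a * a for a in qv)) * math.sqrt(sum(b * b for b in tv))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    """Rank documents by text similarity, then append graph context for
    every graph entity mentioned in the query."""
    ranked = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)[:k]
    graph_context = [n for e, ns in graph.items() if e in query.lower() for n in ns]
    return ranked, graph_context

hits, context = retrieve("how does a knowledge graph help retrieval augmented generation")
```

In the full method, both `hits` and `context` (further encoded by the GNN) would be passed to the generator as grounding.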
Procedia PDF Downloads 46
4560 Roof Integrated Photo Voltaic with Air Collection on Glasgow School of Art Campus Building: A Feasibility Study
Authors: Rosalie Menon, Angela Reid
Abstract:
Building integrated photovoltaic systems with air collectors (hybrid PV-T) have proved successful; however, there are few examples of their application in the UK. The opportunity to pull heat from behind the PV array to contribute to a building's heating system is an efficient use of waste energy, and its potential to improve the performance of the PV array is well documented. As part of Glasgow School of Art's estate expansion, the purchase and redevelopment of an existing 1950s college building was used as a testing vehicle for the hybrid PV-T system as an integrated element of the upper floor and roof. The primary objective of the feasibility study was to determine whether hybrid PV-T was technically and financially suitable for the refurbished building. The key consideration was whether the heat recovered from the PV panels (to increase the electrical efficiency) could be usefully deployed as a heat source within the building. Dynamic thermal modelling (IES) and RETScreen software were used to carry out the feasibility study, not only to simulate overshadowing and optimise the PV-T locations but also to predict the atrium temperature profile, the air load for the four proposed new roof-mounted air handling units, and the dynamic electrical efficiency of the PV element. The feasibility study demonstrates that an energy reduction and carbon saving can be achieved with each hybrid PV-T option; however, the systems are subject to lengthy payback periods, which highlights the need for enhanced government subsidy schemes to reward innovation with this technology in the UK.
Keywords: building integrated, photovoltaic thermal, pre-heat air, ventilation
Procedia PDF Downloads 174
4559 Gradient Index Metalens for WLAN Applications
Authors: Akram Boubakri, Fethi Choubeni, Tan Hoa Vuong, Jacques David
Abstract:
The control of electromagnetic waves has been a key aim of research over the past decade. In this regard, metamaterials have shown a strong ability to manipulate electromagnetic waves on subwavelength scales thanks to unconventional properties that are not available in natural materials, such as a negative refractive index, super-imaging and invisibility cloaking. Metalenses avoid some drawbacks of conventional lenses: focusing with conventional lenses suffers from limited resolution because they are only able to focus the propagating wave component. Metalenses, by contrast, are able to go beyond the diffraction limit and enhance the resolution not only by collecting the propagating waves but also by restoring the amplitude of the evanescent waves, which decay rapidly away from the source and contain the finest details of the image. Metasurfaces have many advantages over three-dimensional metamaterial structures, especially ease of fabrication and a smaller required volume. Such structures have been widely used to improve antenna performance and to build flat metalenses. In this work, we show that a well-designed metasurface lens operating at a frequency of 5.9 GHz efficiently enhances the radiation characteristics of a patch antenna and can be used for WLAN applications (IEEE 802.11a). The proposed metasurface lens is built from geometrically modified unit cells, which change the response of the lens at different positions and allow control of the wavefront of the incident beam thanks to the gradient refractive index.
Keywords: focusing, gradient index, metasurface, metalens, WLAN applications
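The focusing behaviour of such a gradient-index flat lens follows from the hyperbolic phase profile phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2)) that the graded unit cells must realize locally. A small sketch at the 5.9 GHz operating frequency, with an invented illustrative focal length:

```python
import math

def focusing_phase(r, f, wavelength):
    """Phase a flat lens must impart at radius r to focus a normally incident
    plane wave at focal length f (all lengths in metres, phase in radians)."""
    return 2 * math.pi / wavelength * (f - math.sqrt(r * r + f * f))

wavelength = 3e8 / 5.9e9      # free-space wavelength at 5.9 GHz, ~5.08 cm
focal = 0.10                  # assumed focal length, illustrative only
phases = [focusing_phase(r, focal, wavelength) for r in (0.0, 0.02, 0.04)]
```

The required phase (and hence the local refractive index of each unit cell) falls off away from the lens centre, which is exactly the radial gradient the modified unit cells implement.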
Procedia PDF Downloads 257
4558 Stun Practices in Swine in the Valle De Aburrá and Animal Welfare
Authors: Natalia Uribe Corrales, Carolina Cano Arroyave, Santiago Henao Villegas
Abstract:
Introduction: Stunning is an important stage in the meat industry because of its repercussions on carcass characteristics. Inadequate stunning has been shown to lead to hematomas and fractures and to promote the appearance of pale, soft and exudative meat due to the stress caused to the animals. In Colombia, gas narcosis and electrical stunning are the two authorized methods for pigs. Objective: To describe stunning practices in the Valle de Aburrá and their relation to animal welfare. Methods: A descriptive cross-sectional study was carried out in Valle de Aburrá slaughterhouses authorized by the National Institute for Food and Medicine Surveillance (INVIMA). Variables such as stunning method, presence of vocalization, falls, slips, rhythmic breathing, corneal reflex, attempts to stand after stunning, stun time and time between stunning and bleeding were analyzed. Results: 225 pigs were analyzed. 50.2% were stunned electrically, with an amperage and voltage of 1.23 A and 120 V respectively; 49.8% were stunned with a CO2 chamber whose concentration was always above 95%. The mean desensitization time was 16.8 s (s.d. 5.37), and the mean stun-to-bleed time was 47.9 s (s.d. 13.9). Likewise, 27.1% vocalized after stunning, 12% fell, 10.7% showed rhythmic breathing, 33.3% exhibited corneal reflex, and 10.7% attempted to stand. Conclusions: Although the stunning methods used in the Valle de Aburrá are those permitted by law, there are shortcomings in the amperage and voltage used for each type of pig, and animal welfare is being violated, as signs of inadequate desensitization were found.
It is necessary to promote compliance with the principles of stunning according to animal welfare, and to keep in mind that in electrical desensitization the calibration of the equipment must be guaranteed (current applied and electrode position appropriate to the type of animal), while in narcosis the equipment should be calibrated to ensure the proper gas concentration and exposure time.
Keywords: animal welfare, pigs, quality of meat, stun methods
Procedia PDF Downloads 232
4557 Strategic Policy Formulation to Ensure the Atlantic Forest Regeneration
Authors: Ramon F. B. da Silva, Mateus Batistella, Emilio Moran
Abstract:
Despite the existence of two Forest Transition (FT) pathways, economic development and forest scarcity, many contexts shape the model of FT observed in each particular region. This means that local conditions, such as relief, soil quality, historical land use/cover, public policies, the engagement of society in compliance with legal regulations, and the action of enforcement agencies, represent dimensions which, combined, create contexts that enable forest regeneration. From this perspective, we can understand the regeneration of native vegetation cover in the Paraíba Valley (Atlantic Forest biome), ongoing since the 1960s. This research analyzed public information, land use/cover maps and environmental public policies, and interviewed 17 stakeholders from federal and state agencies, municipal environmental and agricultural departments, civil society and farms, aiming to comprehend the contexts behind forest regeneration in the Paraíba Valley, Sao Paulo State, Brazil. The first policy to protect forest vegetation was the Forest Code no. 4771 of 1965, but this legislation only controlled deforestation rather than promoting the increase of forest, which was not enough for the Atlantic Forest biome, which reached its peak of degradation in 1985 (8% of Atlantic Forest remnants). We conclude that Brazilian environmental legislation acted strategically to promote the increase of forest cover (102% regeneration between 1985 and 2011) from 1993, when Federal Decree no. 750 declared the initial and advanced stages of secondary succession protected against any kind of exploitation or degradation, ensuring the forest regeneration process.
Strategic policy formulation was also observed in Sao Paulo State law no. 6171 of 1988, which prohibited the use of fire to manage the agricultural landscape, triggering forest regeneration in former pasture areas.
Keywords: forest transition, land abandonment, law enforcement, rural economic crisis
Procedia PDF Downloads 556
4556 Morphological Characterization and Gas Permeation of Commercially Available Alumina Membrane
Authors: Ifeyinwa Orakwe, Ngozi Nwogu, Edward Gobina
Abstract:
This work presents experimental results relating to the structural characterization of a commercially available alumina membrane. A γ-alumina mesoporous tubular membrane was used. Nitrogen adsorption-desorption, scanning electron microscopy and gas permeability tests were carried out on the alumina membrane to characterize its structural features. Scanning electron microscopy (SEM) was used to examine the membrane structure, while pore size, specific surface area and pore size distribution were determined with the nitrogen adsorption-desorption instrument. Gas permeation tests were carried out on the membrane using a variety of single and mixed gases. Permeabilities were measured at pressures between 0.05 and 1 bar and temperatures between 25 and 200 °C for the single and mixed gases: nitrogen (N₂), helium (He), oxygen (O₂), carbon dioxide (CO₂), 14%CO₂/N₂, 60%CO₂/N₂, 30%CO₂/CH₄ and 21%O₂/N₂. Plots of flow rate versus pressure were obtained. The results show the effect of temperature on the permeation rate of the various gases. At 0.5 bar, for example, the flow rate for N₂ was relatively constant before decreasing with an increase in temperature, while for O₂ it continuously decreased with an increase in temperature. In the case of 30%CO₂/CH₄ and 14%CO₂/N₂, the flow rate showed an increase and then a decrease with increasing temperature. The effect of temperature on membrane performance with the various gases is presented, and the influence of the transmembrane pressure drop is discussed in this paper.
Keywords: alumina membrane, nitrogen adsorption-desorption, scanning electron microscopy, gas permeation, temperature
Procedia PDF Downloads 325
4555 High Motivational Salient Face Distractors Slowed Target Detection: Evidence from Behavioral Studies
Authors: Rashmi Gupta
Abstract:
Rewarding stimuli capture attention involuntarily as a result of an association process that develops quickly during value learning, referred to as reward- or value-driven attentional capture. It is essential to compare reward with punishment processing to get a full picture of value-based modulation of visual attention processing. Hence, the present study manipulated both valence/value (reward as well as punishment) and motivational salience (probability of an outcome: high vs. low) together. A series of experiments was conducted, each with two phases. In phase 1, participants were required to learn to associate specific face stimuli with a high or low probability of winning or losing points. In the second phase, these conditioned stimuli served as a distractor or prime in a speeded letter-search task. Faces with high versus low outcome probability, regardless of valence, slowed the search for targets (specifically targets in the left visual field), suggesting that the costs to performance on non-emotional cognitive tasks were driven only by the motivational salience (high vs. low) associated with the stimuli rather than by their valence (gain vs. loss). It also suggests that the processing of motivationally salient stimuli is right-hemisphere biased. Together, the results of these studies strengthen the notion that our visual attention system is more strongly affected by motivational salience than by valence, termed here motivation-driven attentional capture.
Keywords: attention, distractors, motivational salience, valence
Procedia PDF Downloads 224
4554 An Efficient Subcarrier Scheduling Algorithm for Downlink OFDMA-Based Wireless Broadband Networks
Authors: Hassen Hamouda, Mohamed Ouwais Kabaou, Med Salim Bouhlel
Abstract:
The growth of wireless technology has made opportunistic scheduling a widespread theme in recent research. Providing high system throughput without reducing fairness of allocation is a very challenging task, and a suitable policy for resource allocation among users is of crucial importance. This study focuses on scheduling multiple streaming flows on the downlink of a WiMAX system based on orthogonal frequency division multiple access (OFDMA). In this paper, we take the first step in formulating and analyzing this problem scrupulously. As a result, we propose a new scheduling scheme based on the Round Robin (RR) algorithm. Because of its non-opportunistic process, RR does not take radio conditions into account and consequently degrades both system throughput and multi-user diversity. Our contribution, called MORRA (Modified Round Robin Opportunistic Algorithm), proposes a solution to this issue. MORRA not only exploits the concept of opportunistic scheduling but also takes two further parameters into account in the allocation process: a courtesy coefficient (CC) and buffer occupancy (BO). Performance evaluation shows that this well-balanced scheme outperforms both the RR and MaxSNR schedulers and demonstrates that choosing between system throughput and fairness is not required.
Keywords: OFDMA, opportunistic scheduling, fairness hierarchy, courtesy coefficient, buffer occupancy
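The abstract names MORRA's ingredients (SNR opportunism, courtesy coefficient, buffer occupancy) but not its scoring rule. The sketch below is a hypothetical reading, not the paper's actual algorithm: each subcarrier goes to the user maximizing a score that multiplies instantaneous SNR by a courtesy coefficient that grows with waiting time and a buffer-occupancy term; the weights 0.1 and 0.5 are illustrative assumptions.

```python
# Hypothetical MORRA-style allocation (weights and scoring rule assumed,
# not taken from the paper).
def morra_schedule(snr, waiting, backlog):
    """snr[u][s]: SNR of user u on subcarrier s; waiting[u]: slots since
    user u was last served; backlog[u]: queued bytes for user u."""
    n_users, n_sub = len(snr), len(snr[0])
    allocation = {}
    for s in range(n_sub):
        def score(u):
            cc = 1.0 + 0.1 * waiting[u]             # courtesy coefficient
            bo = backlog[u] / (1.0 + sum(backlog))  # buffer occupancy share
            return snr[u][s] * cc * (0.5 + bo)
        allocation[s] = max(range(n_users), key=score)
    return allocation
```

Under such a rule, a user with a weaker channel but a long wait and a full buffer can win a subcarrier that a pure MaxSNR scheduler would never grant it.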
Procedia PDF Downloads 303
4553 Comparison of Process Slaughtered on Beef Cattle Based on Level of Cortisol and Fourier Transform Infrared Spectroscopy (FTIR)
Authors: Pudji Astuti, C. P. C. Putro, C. M. Airin, L. Sjahfirdi, S. Widiyanto, H. Maheshwari
Abstract:
Stress in slaughter animals starts long before and continues through the slaughter process, causing suffering and a decrease in meat quality. Meanwhile, determination of animal stress using hormones such as cortisol is expensive and less practical, so a portable stress indicator for cattle based on Fourier Transform Infrared Spectroscopy (FTIR) should be provided. The aims of this research are to compare the slaughter processes of Rope Casting Local (RCL) and the Restraining Box Method (RBM) by measuring cortisol levels and FTIR spectra. Thirty-two male Ongole crossbred cattle were used in this experiment. Blood samples were taken from the jugular vein at rest and again at slaughter. All blood samples were centrifuged at 3000 rpm for 20 minutes to obtain serum, which was then divided into two parts: one for cortisol assay using ELISA and one for spectral measurement using FTIR. The serum was measured at wavenumbers between 4000 and 400 cm⁻¹ using an MB3000 FTIR. Absorption band data were analyzed descriptively using FTIR Horizon MBTM. For RCL, mean serum cortisol was 11.47 ± 4.88 ng/mL at rest and 23.27 ± 7.84 ng/mL at slaughter. For RBM, cortisol levels were 13.67 ± 3.41 ng/mL at rest and 53.47 ± 20.25 ng/mL during slaughter. Based on Student's t-test, RBM and RCL differed significantly when the cattle were slaughtered (P < 0.05) but not when the animals were at rest (P > 0.05). FTIR absorption bands were obtained at various wavenumbers: methyl group (=CH3) 2986 cm⁻¹, methylene (=CH2) 2827 cm⁻¹, hydroxyl (-OH) 3371 cm⁻¹, carbonyl (ketone) (C=O) 1636 cm⁻¹, carboxyl (COO⁻) 1408 cm⁻¹, glucose 1057 cm⁻¹, and urea 1011 cm⁻¹. It can be concluded that the RCL slaughter method is better than the RBM method, based on the increase of cortisol as an indicator of stress in beef cattle (P < 0.05). FTIR shows real promise as a stress indicator tool, since it distinguishes resting from slaughter conditions by the increase in absorption and the separation of component groups at these wavenumbers.
Keywords: cows, cortisol, FTIR, RBM, RCL, stress indicator
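The reported group comparison can be reproduced from the published summary statistics with Welch's t statistic. This is a sketch, not the study's own computation: it assumes 16 animals per method (32 cattle split over two methods) and uses the unequal-variance form, whereas the paper reports a Student's t-test.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two groups, from summary statistics only."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean2 - mean1) / se

# RCL vs RBM cortisol (ng/mL); n = 16 per group is an assumption
t_slaughter = welch_t(23.27, 7.84, 16, 53.47, 20.25, 16)  # ≈ 5.56
t_rest = welch_t(11.47, 4.88, 16, 13.67, 3.41, 16)        # ≈ 1.48
```

With any plausible degrees of freedom, the slaughter-time statistic far exceeds the ≈2.1 critical value while the resting statistic does not, matching the reported P < 0.05 and P > 0.05 pattern.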
Procedia PDF Downloads 643
4552 Lithuanian Sign Language Literature: Metaphors at the Phonological Level
Authors: Anželika Teresė
Abstract:
In order to address issues in sign language linguistics, help maintain high-quality sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of the deaf community heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and the metaphors created in it through the phonological parameters: handshape, location, movement, palm orientation and nonmanual features. The study covered in this presentation is twofold, involving both micro-level analysis of metaphors in terms of phonological parameters as a sub-lexical feature and macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in sign language literature across a range of SLs, and this study follows that practice. The presentation covers the qualitative analysis of 34 pieces of LSL literature. The analysis employs ELAN software, widely used in SL research. The target is to examine how specific types of each phonological parameter are used for the creation of metaphors in LSL literature and what metaphors are created. The results show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in double metaphors. A metaphor of identity has also been determined; notably, the poetic context revealed that this metaphor can also be read as a metaphor for life. The study further notes that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study detected locations, nonmanual features and other parameter types never mentioned in previous SL research as being used for the creation of metaphors.
Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics
Procedia PDF Downloads 141
4551 Traffic Prediction with Raw Data Utilization and Context Building
Authors: Zhou Yang, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao
Abstract:
Traffic prediction is essential in a multitude of ways in modern urban life. Earlier work in this domain has two major focuses: (1) accurate forecasting of future values in multiple time series and (2) knowledge extraction from spatial-temporal correlations. However, two key considerations for traffic prediction are often missed: the completeness of raw data and the full context of the prediction timestamp. Concentrating on these two drawbacks of earlier work, we devise an approach that addresses both issues in a two-phase framework. First, we utilize the raw trajectories to a greater extent by building a VLA table and applying data compression. We obtain intra-trajectory features with graph-based encoding, and inter-trajectory features with a grid-based model and a back-projection technique that restores their surrounding high-resolution spatial-temporal environment. To the best of our knowledge, we are the first to study direct feature extraction from raw trajectories for traffic prediction and to attempt the use of raw data with the least degree of reduction. In the prediction phase, we provide a broader context for the prediction timestamp by taking into account the information around it in the training dataset. Extensive experiments on several well-known datasets have verified the effectiveness of our solution, which combines the strength of raw trajectory data and prediction context. In terms of performance, our approach surpasses several state-of-the-art methods for traffic prediction.
Keywords: traffic prediction, raw data utilization, context building, data reduction
Procedia PDF Downloads 131
4550 Effects of Variation of Centers in the Torsional Analysis of Asymmetrical Buildings by Performing Non Linear Static Analysis
Authors: Md Masihuddin Siddiqui, Abdul Haakim Mohammed
Abstract:
Earthquakes are the most unpredictable and devastating of all natural disasters. The behaviour of a building during an earthquake depends on several factors, such as stiffness, adequate lateral strength, ductility, and configuration. Experience from the performance of buildings during past earthquakes has shown that buildings with regular geometry and uniformly distributed mass and stiffness, in plan as well as in elevation, suffer much less damage than irregular configurations. The three centres, namely the centre of mass, the centre of strength, and the centre of stiffness, are the torsional parameters that contribute to the strength of a building during an earthquake. Inertial forces and resistive forces in a structural system act through the centre of mass and the centre of rigidity, respectively, and together oppose the forces produced during seismic excitation. These centres should therefore be positioned where the structural system is strongest, so that earthquake effects on the structure are minimal. In this paper, the effects of varying strength eccentricity and stiffness eccentricity in reducing the torsional responses of asymmetrical buildings are studied using pushover analysis. The maximum reduction of base torsion was observed in the case of minimum strength eccentricity, and the least reduction was observed in the case of minimum stiffness eccentricity.
Keywords: strength eccentricity, stiffness eccentricity, asymmetric structure, base torsion, pushover analysis
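As a back-of-the-envelope illustration of why eccentricity drives base torsion (not taken from the paper; the numbers are hypothetical), the torsional moment is the base shear times the offset between the centre of mass and the centre of rigidity:

```python
def base_torsion(base_shear_kN, cm_x, cr_x):
    """Torsional moment (kN·m) induced by the eccentricity between the
    centre of mass (cm_x) and the centre of rigidity (cr_x), both in metres."""
    eccentricity = abs(cm_x - cr_x)
    return base_shear_kN * eccentricity

# Hypothetical asymmetric floor plan: reducing the eccentricity from
# 2.0 m to 0.5 m cuts the torsional demand proportionally.
t_before = base_torsion(1500.0, 10.0, 8.0)   # 3000.0 kN·m
t_after = base_torsion(1500.0, 10.0, 9.5)    # 750.0 kN·m
```

This is why repositioning the centres of strength and stiffness toward the centre of mass, as studied in the paper, directly reduces the torsional response.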
Procedia PDF Downloads 297
4549 Influential Health Care System Rankings Can Conceal Maximal Inequities: A Simulation Study
Authors: Samuel Reisman
Abstract:
Background: Comparative rankings are increasingly used to evaluate health care systems. These rankings combine discrete attribute rankings into a composite overall ranking. Health care equity is a component of overall rankings, but excelling in other categories can counterbalance poor equity grades. A high ranking for an inequitable health care system would commend a system that disregards human rights. We simulated the ranking of a maximally inequitable health care system using a published, influential ranking methodology. Methods: We used The Commonwealth Fund's ranking of eleven health care systems to simulate the rank of a maximally inequitable system. Eighty performance indicators were simulated, assuming maximal ineptitude on equity benchmarks and maximal rankings in all non-equity subcategories. Subsequent stepwise simulations lowered all non-equity rank positions by one. Results: The maximally inequitable health care system ranked first overall. Three subsequent stepwise simulations, each lowering non-equity rankings by one, all resulted in an overall ranking within the top three. Discussion: Our results demonstrate that grossly inequitable health care systems can rank highly in comparative health care system rankings. These findings challenge the validity of ranking methodologies that subsume equity under broader benchmarks. We advocate limiting the maximum overall ranking of a health care system to its individual equity ranking. Such limits are logical given the insignificance of health care system improvements to those lacking adequate health care.
Keywords: global health, health equity, healthcare systems, international health
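A toy version of this simulation makes the mechanism concrete. The structure is assumed, not The Commonwealth Fund's actual weighting: 80 equally weighted indicators, of which 16 are taken to measure equity. A system scoring worst on every equity indicator but best everywhere else still tops the composite.

```python
# Toy composite ranking (assumed structure, not the actual methodology):
# 80 indicators, 16 of them equity-related, all weighted equally.
N_EQUITY, N_OTHER = 16, 64

def composite(equity_score, other_score):
    """Mean indicator score, 0.0 (worst) to 1.0 (best)."""
    total = N_EQUITY * equity_score + N_OTHER * other_score
    return total / (N_EQUITY + N_OTHER)

systems = {"max_inequitable": composite(0.0, 1.0)}   # fails all equity items
for i in range(10):                                   # ten peer systems
    systems[f"peer_{i}"] = composite(0.7, 0.7)        # uniformly decent

ranking = sorted(systems, key=systems.get, reverse=True)
```

Because the equity indicators are a minority of the composite, the maximally inequitable system's 0.80 beats the peers' 0.70, mirroring the paper's first-place result.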
Procedia PDF Downloads 405
4548 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata
Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen
Abstract:
This article introduces a computationally efficient method for stacking sequence blending of composite structures. Its computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another; blending constraints can be enforced to achieve structural continuity. Many methods can be found in the literature that implement structural continuity by means of stacking sequence blending in one way or another, but the complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method from the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to a cellular-automata scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.
Keywords: composite, blending, optimization, lamination parameters
Procedia PDF Downloads 232
4547 Modern Scotland Yard: Improving Surveillance Policies Using Adversarial Agent-Based Modelling and Reinforcement Learning
Authors: Olaf Visker, Arnout De Vries, Lambert Schomaker
Abstract:
Predictive policing refers to the use of analytical techniques to identify potential criminal activity. It has been widely implemented by various police departments. Being a relatively new area of research, there are, to the authors' knowledge, no tried-and-true methods, and existing approaches still exhibit a variety of potential problems. One of those problems is closely related to a lack of understanding of how acting on these predictions influences crime itself. The goal of law enforcement is ultimately crime reduction; as such, a policy needs to be established that best facilitates this goal. This research aims to find such a policy by using adversarial agent-based modelling in combination with modern reinforcement learning techniques. We present baseline models for both law enforcement and criminal agents and compare their performance to their respective reinforcement learning models. The experiments show that our smart law enforcement model is capable of reducing crime by making more deliberate choices regarding the locations of potential criminal activity. Furthermore, the smart criminal model exhibits behavior consistent with popular crime theories and outperforms the baseline model in terms of crimes committed and time to capture. It does, however, still suffer from the difficulties of capturing long-term rewards and learning how to handle multiple opposing goals.
Keywords: adversarial, agent based modelling, predictive policing, reinforcement learning
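A minimal sketch of one side of such a setup, assuming a simple bandit-style Q-update rather than the authors' full adversarial model: a patrol agent learns, from reward signals alone, which of three hotspots a scripted offender favours.

```python
import random

# Hypothetical patrol-learning sketch (not the paper's model): reward 1
# when the agent patrols a location where a crime would have occurred.
random.seed(0)
crime_prob = [0.1, 0.2, 0.7]      # offender's per-location crime probability
q = [0.0, 0.0, 0.0]               # learned value of patrolling each hotspot
alpha, eps = 0.1, 0.2             # learning rate, exploration rate

for _ in range(5000):
    if random.random() < eps:
        a = random.randrange(3)   # explore a random hotspot
    else:
        a = q.index(max(q))       # exploit the current best estimate
    reward = 1.0 if random.random() < crime_prob[a] else 0.0
    q[a] += alpha * (reward - q[a])   # incremental Q-update
```

After training, the Q-values approximate the crime probabilities and the greedy policy concentrates patrols on the highest-crime hotspot, the "more deliberate choices" the abstract refers to.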
Procedia PDF Downloads 152
4546 The Application of King IV by Rugby Clubs Affiliated to a Rugby Union in South Africa
Authors: Anouschka Swart
Abstract:
In 2023, sport faces a plethora of challenges to its integrity, including but not limited to match-fixing, corruption and doping, that threaten both its commercial and public appeal. The continuous changes and commercialisation that have occurred within sport have led to a variety of consequences, creating a need for ethics to be revived as they were in the past, to ensure sport is not endangered. To clarify governance, the Institute of Directors in Southern Africa outlined a process explaining all elements of corporate governance. This process describes a governing body's responsibilities as strategy, policy, oversight and accountability. These responsibilities are further elucidated in 16 governing principles, highlighted as essential for all organisations in order to achieve and deliver effective governance outcomes: a good ethical culture, good performance, effective control and legitimacy. The aim of the study was therefore to investigate the general state of governance within the clubs affiliated to a rugby union in South Africa, using the King IV Code as the framework. The results indicated that these rugby clubs implement the King IV Code principles to demonstrate commitment to corporate governance to both internal and external stakeholders. It is, however, evident that a similar report focused solely on sport is a necessity in the industry, as this would provide more clarity on sport-specific problems.
Keywords: South Africa, sport, King IV, responsibilities
Procedia PDF Downloads 74
4545 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models
Authors: Robin Molinier
Abstract:
Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists of intensifying the use of an asset, while the latter is based on the use of waste, fatal energy (and utilities) as alternatives to standard inputs. Both modes, in fact, rely on a shift from business-as-usual functioning towards an alternative production system structure, so that from a business point of view the distinction is not clear. In order to investigate how these cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure with regard to their resources and requirements. For infrastructure sharing (following the economic engineering literature), the cost function of capacity induces economies of scale, so that demand pooling reduces global expenses. The grassroots investment sizing decision and the ex-post pricing depend strongly on the design optimization phase for capacity sizing, whereas ex-post operational cost-sharing budgets depend less on production rates; value is thus mainly design-driven. For resource substitution, the value of synergies stems from availability and is at risk with regard to both supplier and user load profiles and the market price of the standard input. The reduction of baseline input purchasing cost is thus driven more by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow the production of non-rival goods. Transaction costs appear higher in resource substitution synergies due to the high level of customization, which induces asset specificity and non-homogeneity, following transaction cost economics arguments.
Keywords: business model, capacity, sourcing, synergies
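The economies-of-scale argument for infrastructure sharing can be made concrete with a concave capacity cost C(q) = a·q^b, b < 1. The exponent 0.6 and the demand figures are assumptions for illustration, not from the paper: one asset sized for the pooled demand costs less than two separate assets.

```python
def capacity_cost(q, a=100.0, b=0.6):
    """Concave capacity cost C(q) = a * q**b; b < 1 gives economies of scale."""
    return a * q ** b

# Two partners with demands of 50 and 30 capacity units (hypothetical)
separate = capacity_cost(50.0) + capacity_cost(30.0)  # two dedicated assets
shared = capacity_cost(50.0 + 30.0)                   # one pooled asset
savings = separate - shared                           # value of the synergy
```

This is the design-driven value the abstract describes: the saving is locked in when the shared asset is sized, largely independent of later production rates.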
Procedia PDF Downloads 177
4544 Critical Success Factors of OCOP Business Model in Pattani Province Thailand: A Qualitative Approach
Authors: Poonsook Thatchaopas, Nik Kamariah Nikmat, Nattakarn Eakuru
Abstract:
Since 2003, the Thai Government has implemented several initiatives to encourage and incubate entrepreneurial skills and motivation among its citizens. One of these initiatives is the 'One College One Product' business model, well known as 'OCOP', launched by the Vocational Education Commission to encourage partnerships in which college students choose at least one product for a business venture. In line with this mission, several business enterprises were established, such as food products, restaurants, spas, Thai massage, minimarts, computer maintenance, karaoke centres, internet cafés, and mini theatres. Currently, these business incubator projects can be observed at 404 vocational colleges and 21 incubation centres that encourage entrepreneurial small and medium enterprise (SME) development. However, the number of successful OCOP projects is still minimal: of the 404 individual OCOP projects at vocational colleges around Thailand, very few became successful. The objective of this paper is to identify the critical success factors needed to become a successful OCOP business entrepreneur. This study uses a qualitative method, interviewing the business partners of an OCOP business called the Crispy Roti Krua Acheeva Brand (CRKAB), a snack food company developed at Pattani Vocational College in South Thailand. The project was initiated by three female entrepreneurs who were alumni and owners of the CRKAB. The findings show that the main critical success factors are self-confidence, creativity or innovativeness, knowledge, skills and perseverance. Additionally, the interviewees reiterated that the keys to business success are product quality, perceived price, promotion, branding, new packaging to increase sales, and continuous development. The results imply that for a student SME to be successful, the company should have credible partners and an effective marketing plan.
Keywords: student entrepreneurship, business incubator, food industry, qualitative, Thailand
Procedia PDF Downloads 395
4543 Analysis of Maternal Death Surveillance and Response: Causes and Contributing Factors in Addis Ababa, Ethiopia, 2022
Authors: Sisay Tiroro Salato
Abstract:
Background: Ethiopia has been implementing the maternal death surveillance and response system to provide real-time actionable information, including causes of death and contributing factors. An analysis of maternal mortality surveillance data was conducted to identify the causes and underlying factors of maternal death in Addis Ababa, Ethiopia. Methods: We carried out a retrospective analysis of surveillance data on 324 maternal deaths reported in Addis Ababa, Ethiopia, from 2017 to 2021. The data were extracted from the national maternal death surveillance and response database, including information from case investigation, verbal autopsy, and facility extraction forms. The data were analyzed by computing frequencies and are presented in numbers, proportions, and ratios. Results: Of the 324 maternal deaths, 92% occurred in health facilities, 6.2% in transit, and 1.5% at home. The mean age at death was 28 years, ranging from 17 to 45. The maternal mortality ratio was 77 per 100,000 live births over the five years, ranging from 126 in 2017 to 21 in 2021. Direct and indirect causes were responsible for 87% and 13% of deaths, respectively. The direct causes included obstetric haemorrhage, hypertensive disorders in pregnancy, puerperal sepsis, embolism, obstructed labour, and abortion. The third delay (delay in receiving care after reaching a health facility) accounted for 57% of deaths, while the first delay (delay in deciding to seek health care) and the second delay (delay in reaching a health facility) accounted for 34% and 24%, respectively. Late arrival at the referral facility, delayed management after admission, and non-recognition of danger signs were underlying factors. Conclusion: Over 86% of maternal deaths were attributable to avoidable direct causes. The majority of women do try to reach health services when an emergency occurs, but the third delay presents a major problem. Improving the quality of care at the health facility level will help to reduce maternal deaths.
Keywords: maternal death, surveillance, delays, factors
Procedia PDF Downloads 116