Search results for: spiral model
3018 The Population Death Model and Influencing Factors from the Data of The "Sixth Census": Zhangwan District Case Study
Authors: Zhou Shangcheng, Yi Sicen
Abstract:
Objective: To understand the mortality patterns of Zhangwan District in 2010 and to provide a basis for developing scientific and rational health policy. Methods: Data were collected from the Sixth Census of Zhangwan District and the disease surveillance system. The statistical analysis covered differences in mortality by age, gender, region and time, as well as related factors. Methods developed for the Global Burden of Disease (GBD) Study by the World Bank and the World Health Organization (WHO) were adapted and applied to the population health data of Zhangwan District. The DALY rate per 1,000 population was calculated for the various causes of death. SPSS 16 was used for the statistical analysis. Results: The crude mortality rate of Zhangwan District was 6.03 ‰. There was a significant difference between the male and female mortality rates, which were 7.37 ‰ and 4.68 ‰, respectively. Life expectancy at birth in Zhangwan District in 2010 was 78.40 years (male 75.93, female 81.03). The five leading causes of YLL in descending order were cardiovascular diseases (42.63 DALY/1000), malignant neoplasms (23.73 DALY/1000), unintentional injuries (5.84 DALY/1000), respiratory diseases (5.43 DALY/1000) and respiratory infections (2.44 DALY/1000). In addition, marital status and educational level were, to a certain extent, strongly related to mortality. Conclusion: Zhangwan District, as an urban district, is at a relatively low mortality level. The mortality of the total population of Zhangwan District shows a downward trend and life expectancy is rising. Keywords: sixth census, Zhangwan district, death level differences, influencing factors, cause of death
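As a minimal sketch of the rate calculations this abstract relies on, the snippet below computes a crude death rate and cause-specific YLL rates per 1,000 population; the counts and cause labels are illustrative assumptions, not the study data.

```python
# Minimal sketch: crude death rate and cause-specific YLL rates per 1,000 population.
# All counts below are hypothetical placeholders, not data from the Zhangwan study.

population = 320_000          # assumed mid-year population
deaths_total = 1_930          # assumed total deaths in the year
yll_by_cause = {              # assumed years of life lost (YLL) per cause
    "cardiovascular diseases": 13_640,
    "malignant neoplasms": 7_590,
    "unintentional injuries": 1_870,
}

crude_death_rate = deaths_total / population * 1000   # deaths per 1,000
print(f"Crude death rate: {crude_death_rate:.2f} per 1,000")

for cause, yll in yll_by_cause.items():
    rate = yll / population * 1000                     # YLL per 1,000
    print(f"{cause}: {rate:.2f} YLL/1000")
```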
Procedia PDF Downloads 269
3017 Fractal Nature of Granular Mixtures of Different Concretes Formulated with Different Methods of Formulation
Authors: Fatima Achouri, Kaddour Chouicha, Abdelwahab Khatir
Abstract:
It is clear that quality concrete must be made with carefully selected materials combined in optimum proportions so that, after placement, a minimum of voids remains in the material produced. The different formulation methods in use are mostly based on a granular curve that describes an 'optimal granularity'. Many authors have carried out fundamental research on granular arrangements. Comparisons of mathematical models reproducing these granular arrangements with experimental measurements of compactness show that the minimum porosity over a given granular extent follows a power law. The best compactness in a finite medium is thus obtained with power laws, such as those of Furnas, Fuller or Talbot, each preferring a particular exponent between 0.20 and 0.50. These considerations converge on the assumption that the optimal granularity of Caquot can be approximated by a power law. By analogy, it can then be analysed as a fractal-type granular structure, since the internal-similarity properties that characterise fractal objects are also expressed by a power law. Optimised mixtures may be described as a series of granular classes filling the available space according to a regular hierarchical distribution, which would give the mixture the same structure at different scales through cascading effects. This model is likely appropriate over the entire extent of the size distribution of the components, from correctly deflocculated cement particles (and silica fume) of micrometric dimensions up to chippings of sometimes several tens of millimetres. As part of this research, the aim is to illustrate the application of fractal analysis to characterise optimised granular concrete mixtures through a so-called fractal dimension; several concretes were studied, and their granular mixtures were shown to have a fractal structure regardless of the method of formulation or the type of concrete. Keywords: concrete formulation, fractal character, granular packing, method of formulation
Procedia PDF Downloads 258
3016 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures
Authors: Milad Abbasi
Abstract:
Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their lightweight and specific energy absorption capabilities. Although it is impossible to predict composite mechanical properties directly using theoretical methods, various research has been conducted so far in the literature for accurate simulation of CFRP structures' energy-absorbing behavior. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing high-cost experimental efforts. The predicted results have been compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Interestingly, the proposed method considerably condensed numerical and experimental efforts in the simulation and calibration of CFRP composite tubes subjected to compressive loading.Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network
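A hedged sketch of the kind of surrogate model described here: a small feed-forward neural network mapping specimen descriptors to peak load and absorbed energy. The feature names, data shapes and network size are assumptions for illustration, not the architecture used by the author.

```python
# Sketch: neural-network surrogate for axial-crush response of CFRP tubes.
# Feature columns and targets are assumed for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# X: [wall thickness, tube diameter, ply count, fibre volume fraction]
# y: [peak load (kN), specific energy absorption (kJ/kg)]
X = np.random.rand(200, 4)          # placeholder for measured specimen data
y = np.random.rand(200, 2)          # placeholder for experimental responses

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
print("R^2 on held-out specimens:", model.score(scaler.transform(X_te), y_te))
```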
Procedia PDF Downloads 151
3015 Body Mass Components in Young Soccer Players
Authors: Elizabeta Sivevska, Sunchica Petrovska, Vaska Antevska, Lidija Todorovska, Sanja Manchevska, Beti Dejanova, Ivanka Karagjozova, Jasmina Pluncevic Gligoroska
Abstract:
Introduction: Body composition plays an important role in the selection of young soccer players and is associated with their successful performance. The most commonly used model of body composition divides the body into two compartments: the fat component and fat-free mass (muscular and bone components). The aims of the study were to determine the body composition parameters of young male soccer players and to show the differences between age groups. Material and methods: A sample of 52 young male soccer players, with an age span from 9 to 14 years, was divided into two groups according to age (group 1 aged 9 to 12 years and group 2 aged 12 to 14 years). Anthropometric measurements were taken according to the method of Matiegka. The following measurements were made: body weight, body height, circumferences (arm, forearm, thigh and calf), diameters (elbow, knee, wrist, ankle) and skinfold thickness (biceps, triceps, thigh, leg, chest, abdomen). The measurements were used in Matiegka's equations. Results: Body mass components were analyzed as absolute values (in kilograms) and as percentage values: the muscular component (MM kg and MM%), the bone component (BC kg and BC%) and the body fat (BF kg and BF%). The group up to 12 years showed the following mean values of the analyzed parameters: MM = 21.5 kg; MM% = 46.3%; BC = 8.1 kg; BC% = 19.1%; BF = 6.3 kg; BF% = 15.7%. The second group, aged 12-14 years, had the following mean values: MM = 25.6 kg; MM% = 48.2%; BC = 11.4 kg; BC% = 21.6%; BF = 8.5 kg; BF% = 14.7%. Conclusions: The young soccer players aged 12 to 14 years, who are in the pre-pubertal phase of growth and development, had a higher bone component (p < 0.05) than the younger players. There was no significant difference in the muscular and fat body components between the two groups of young soccer players. Keywords: body composition, young soccer players, body fat, fat-free mass
Procedia PDF Downloads 457
3014 A Study of Body Weight and Type Traits Recorded on Hairy Goat in Punjab, Pakistan
Authors: A. Qayyum, G. Bilal, H. M. Waheed
Abstract:
The objectives of the study were to determine phenotypic variations in Hairy goats for quantitative and qualitative traits and to analyze the relationship between different body measurements and body weight in Hairy goats. Data were collected from the Barani Livestock Production Research Institute (BLPRI) at Kherimurat, Attock and potential farmers who were raising hairy goats in the Potohar region. Twelve (12) phenotypic parameters were measured on 99 adult Hairy goat (18 male and 81 female). Four qualitative and 8 quantitative traits were investigated. Qualitative traits were visually observed and expressed as percentages. Descriptive analysis was done on quantitative variables. All hairy goats had predominately black body coat color (72%), whereas white (11%) and brown (11%) body coat color were also observed. Both the pigmented (45.5%) and non-pigmented (54.5%) type of body skin were observed in the goat breed. Horns were present in the majority (91%) of animals. Most of the animals (83%) had straight facial head profiles. Analysis was performed in SAS On-Demand for Academics using PROC mixed model procedure. Overall means ± SD of body weight (BW), body length (BL), height at wither (HAW), ear length (EL), head length (HL), heart girth (HG), tail length (TL) and MC (muzzle circumference) were 41.44 ± 12.21 kg, 66.40 ± 7.87 cm, 75.17 ± 7.83 cm, 22.99 ± 6.75 cm, 15.07 ± 3.44 cm, 76.54 ± 8.80 cm, 18.28 ± 4.18 cm, and 26.24 ± 5.192 cm, respectively. Sex had a significant effect on BL and HG (P < 0.05), whereas BW, HAW, EL, HL, TL, and MC were not significantly affected (P > 0.05). The herd had a significant effect on BW, BL, HAW, HL, HG, and TL (P < 0.05) except EL and MC (P > 0.05). Hairy goats appear to have the potential for selection as mutton breeds in the Potohar region of Punjab. The findings of the present study would help in the characterization and conservation of hairy goats using genetic and genomic tools in the future.Keywords: body weight, Hairy goat, type traits Punjab, Pakistan
Procedia PDF Downloads 63
3013 The Effect of an Infill on the Bearing Capacity and Stiffness of Infilled Frames
Authors: Goran Baloevic, Jure Radnic, Nikola Grgic
Abstract:
The application of frames with masonry or panel infill is common in the engineering practice. In these cases, a frame is often considered to be a primary structure, while an infill is considered to be a secondary structure. In past calculations, the infill was rarely included in the design of frame structures in terms of their bearing capacity and safety. Recent calculations of such structures necessarily include the effect of infill since it contributes to stiffness and bearing capacity of overall system, especially under horizontal loads. In certain cases, if the infill is not included in the seismic design of frame structures, the result can be lower design safety. However, since the different configuration of the infill through the building’s height can be made, it is possible that contribution of such infill to the overall bearing capacity can be lower and seismic forces on the building can be increased due to greater stiffness of the structure. So far, many experimental and numerical researches on the behavior of infilled frames under horizontal static forces and earthquake have been performed. In this paper, several masonry-infilled concrete and steel frames under horizontal static forces and earthquake are analysed. The experimental results by shake-table and numerical results are compared in terms of the bearing capacity of bare and infilled frames. Herein, the stiffness of frames and infill were varied, with different position of the infill and different types of openings. Cases with positive and negative effects of the infill to the bearing capacity of the frames were considered. Finally, main conclusions and recommendations for practical application and design of masonry-infilled concrete and steel frames are given.Keywords: bearing capacity, infilled frame, numerical model, shake table
Procedia PDF Downloads 463
3012 Modified Clusterwise Regression for Pavement Management
Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella
Abstract:
Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. The prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters by including both characteristics and a performance measure. This grouping is not always possible due to limited information. It is impractical to include all the potentially significant factors because some of them are unobserved or difficult to measure. The historical performance of pavement segments could be used as a proxy to incorporate the effect of the missing significant factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously. CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. Results illustrate the advantage of using CLR. Previous studies have used CLR along with experimental data. This study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions. Keywords: clusterwise regression, pavement management system, performance model, optimization
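As a rough illustration of the clusterwise-regression idea (not the authors' mathematical program), the sketch below alternates between assigning segments to the cluster whose regression fits them best and refitting each cluster's linear model; variable names and data are assumptions.

```python
# Sketch of clusterwise linear regression (CLR) by alternating assignment and refitting.
# This is a heuristic illustration, not the exact mathematical program used in the study.
import numpy as np

def clusterwise_regression(X, y, k=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])          # add intercept
    labels = rng.integers(0, k, size=n)            # random initial clusters
    for _ in range(iters):
        betas = []
        for c in range(k):
            idx = labels == c
            if idx.sum() < Xb.shape[1]:            # keep clusters estimable
                idx = rng.random(n) < 0.2
            beta, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
            betas.append(beta)
        # reassign each segment to the cluster whose model predicts it best
        resid = np.stack([np.abs(y - Xb @ b) for b in betas], axis=1)
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, betas

# Example with synthetic pavement-like data (age, traffic) predicting IRI:
X = np.random.rand(300, 2)
y = 1.0 + 2.0 * X[:, 0] + np.random.normal(0, 0.1, 300)
labels, betas = clusterwise_regression(X, y, k=2)
```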
Procedia PDF Downloads 249
3011 The Essence of Culture and Religion in Creating Disaster Resilient Societies through Corporate Social Responsibility
Authors: Repaul Kanji, Rajat Agrawal
Abstract:
In an era when issues like climate change and disasters dominate discussion at national and international forums, the causative role of corporates in such events is frequently questioned. It is beyond doubt that rapid industrialisation and development have taken a toll in the form of climate change and, in some cases, disasters. It is therefore natural to expect corporates to fulfil their responsibilities in the form of rescue and relief in times of disaster, rehabilitation, and even mitigation and preparedness to adapt to the oncoming changes. But how can the responsibilities of corporates be channelised to ensure all this, i.e., to develop a resilient society? Moreover, which factors, when emphasised, can lead to the holistic development of society? To answer this question, an extensive literature review was conducted to identify several enablers, such as national legislation, the role of brand and reputation, the ease of doing Corporate Social Responsibility, the mission and vision of an organisation, and religion and culture, as tools for building disaster resilience. A questionnaire survey and interviews with experts and academicians, followed by interpretive structural modelling (ISM), were used to construct a multi-level hierarchy model depicting the contextual relationships among the identified enablers. The study revealed that culture and religion are the most powerful drivers, affecting the other enablers either directly or indirectly. Taking cognisance of the fact that an idea of separation between religion and the workplace (business) resides subconsciously within society, the study interprets the outcome of the ISM through the lens of past research (The Integrating Box) and explores how it can be leveraged to build a resilient society. Keywords: corporate social responsibility, interpretive structural modelling, disaster resilience and risk reduction, the integration box (TIB)
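A minimal sketch of the interpretive structural modelling (ISM) step described here: building a reachability matrix from a binary contextual-relationship matrix via transitive closure and partitioning enablers into levels. The enabler list and relations below are invented for illustration, not the study's survey results.

```python
# Sketch of the core ISM computation: transitive closure of the adjacency matrix
# (reachability matrix) followed by level partitioning. Relations are illustrative.
import numpy as np

enablers = ["legislation", "brand", "ease of CSR", "mission/vision", "culture & religion"]
A = np.array([  # A[i, j] = 1 means enabler i influences enabler j (assumed values)
    [1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
])

# Warshall-style transitive closure to obtain the final reachability matrix.
R = A.copy()
for k in range(len(enablers)):
    R = R | (R[:, [k]] & R[[k], :])

# Level partitioning: an enabler sits at the current level when its reachability
# set equals the intersection of its reachability and antecedent sets.
remaining, level = set(range(len(enablers))), 1
while remaining:
    reach = {i: {j for j in remaining if R[i, j]} for i in remaining}
    ante = {i: {j for j in remaining if R[j, i]} for i in remaining}
    top = [i for i in remaining if reach[i] == reach[i] & ante[i]]
    print(f"Level {level}: {[enablers[i] for i in top]}")
    remaining -= set(top)
    level += 1
```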
Procedia PDF Downloads 206
3010 Research Progress of the Relationship between Urban Rail Transit and Residents' Travel Behavior during 1999-2019: A Scientific Knowledge Mapping Based on Citespace and Vosviewer
Authors: Zheng Yi
Abstract:
Among the attempts made worldwide to foster urban and transport sustainability, transit-oriented development is certainly one of the most successful. Residents' travel behavior is a central concern in research on the impacts of transit-oriented development. This study takes 620 English journal papers from the Web of Science core collection as its objects; it maps out the scientific knowledge of the field and outlines its basic conditions using co-citation analysis, co-word analysis, citation network analysis and visualization techniques. The study teases out the research hotspots and the evolution of the relationship between urban rail transit and residents' travel behavior from 1999 to 2019. Based on the time-zone view and burst-detection analyses, the paper discusses the likely direction of the next stage of international study. The results show that over the past 20 years the research has focused on these keywords: land use, behavior, model, built environment, impact, travel behavior, walking, physical activity, smart card, big data, simulation, perception. According to the research content, the key literature is further divided into these topics: attributes of the built environment, land use, transportation network, and transportation policies. The results of this paper can help readers understand the related research and achievements systematically, and they provide a reference for identifying the main challenges that future research needs to address. Keywords: urban rail transit, travel behavior, knowledge map, evolution of researches
Procedia PDF Downloads 106
3009 Role of Transient Receptor Potential Vanilloid 1 in Electroacupuncture Analgesia on Chronic Inflammatory Pain in Mice
Authors: Jun Yang, Ching-Liang Hsieh, Yi-Wen Lin
Abstract:
Chronic inflammatory pain results from peripheral tissue injury or local inflammation to increase the release of protons, histamines, adenosine triphosphate, and several proinflammatory cytokines. Transient receptor potential vanilloid 1 (TRPV1) is involved in fibromyalgia, neuropathic, and inflammatory pain; however, its exact mechanisms in chronic inflammatory pain are still unclear. We investigate the analgesic effect of EA by injecting complete Freund’s adjuvant (CFA) in the hind paw of mice to induce chronic inflammatory pain ( > 14 d). Our results showed that EA significantly reduced chronic mechanical and thermal hyperalgesia in the chronic inflammatory pain model. Chronic mechanical and thermal hyperalgesia was also abolished in TRPV1−/− mice. TRPV1 increased in the dorsal root ganglion (DRG) and spinal cord (SC) at 2 weeks after CFA injection. The expression levels of downstream molecules such as pPKA, pPI3K, and pPKC increased, as did those of pERK, pp38, and pJNK. Transcription factors (pCREB and pNFκB) and nociceptive ion channels (Nav1.7 and Nav1.8) were involved in this process. Inflammatory mediators such as GFAP (Glial fibrillary acidic protein), S100B, and RAGE (Receptor for advanced glycation endproducts) were also involved. The expression levels of these molecules were reduced in EA (electroacupuncture) and TRPV1−/−mice but not in the sham EA group. The present study demonstrated that EA or TRPV1 gene deletion reduced chronic inflammatory pain through TRPV1 and related molecules. In addition, our data provided evidence to support the clinical use of EA for treating chronic inflammatory pain.Keywords: auricular electric-stimulation, epileptic seizures, anti-inflammation, electroacupuncture
Procedia PDF Downloads 174
3008 Political Deprivations, Political Risk and the Extent of Skilled Labor Migration from Pakistan: Finding of a Time-Series Analysis
Authors: Syed Toqueer Akhter, Hussain Hamid
Abstract:
Over the last few decades, an upward trend has been observed in labor migration from Pakistan. The emigrants are not only economically motivated; they are also in search of a safe living environment in more developed countries in Europe, North America and the Middle East. The opportunity cost of migration comes in the form of brain drain, that is, the loss of qualified and skilled human capital. Throughout the history of Pakistan, situations of political instability have emerged, ranging from violations of political rights and political disappearances to political assassinations. Providing security to citizens is a major issue in Pakistan owing to the increase in crime and terrorist activities. The aim of the study is to test the impact of political instability, appearing in the form of political terror, violation of political rights, and restrictions on civil liberty, on skilled labor migration. Three proxies are used to measure political instability: the political terror scale (a 1-5 scale of the political terror and violence that a country encounters in a particular year), political rights (a 1-7 rating describing the ability of people to participate without restraint in the political process) and civil liberty (a 1-7 rating defined as freedom of expression and rights without government intervention). Using time series data from 1980-2011, distributed lag models were used for estimation because migration is not a one-time process: previous events and earlier migration can lead to further migration. Our research clearly shows that political instability, captured by political terror, political rights and civil liberty, is significant in explaining the extent of skilled migration from Pakistan. Keywords: skilled labor migration, political terror, political rights, civil liberty, distributed lag model
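As a hedged sketch of the estimation approach named here (a finite distributed lag regression, not the authors' exact specification), the snippet regresses skilled emigration on current and lagged political-instability proxies; the file name, column names and lag length are assumptions.

```python
# Sketch: finite distributed lag model of skilled emigration on political-instability proxies.
# File name, column names and the two-period lag length are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pakistan_migration_1980_2011.csv")   # hypothetical file
for col in ["political_terror", "political_rights", "civil_liberty"]:
    for lag in (1, 2):
        df[f"{col}_lag{lag}"] = df[col].shift(lag)      # lagged regressors

df = df.dropna()
X = sm.add_constant(df.drop(columns=["year", "skilled_emigrants"]))
model = sm.OLS(df["skilled_emigrants"], X).fit()
print(model.summary())
```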
Procedia PDF Downloads 1027
3007 DNA Damage and Apoptosis Induced in Drosophila melanogaster Exposed to Different Duration of 2400 MHz Radio Frequency-Electromagnetic Fields Radiation
Authors: Neha Singh, Anuj Ranjan, Tanu Jindal
Abstract:
Over the last decade, the exponential growth of mobile communication has been accompanied by a parallel increase in the density of electromagnetic fields (EMF). The continued expansion of mobile phone usage raises important questions, as EMF, especially radio frequency (RF), have long been suspected of having biological effects. In the present experiments, we studied the effects of RF-EMF on cell death (apoptosis) and DNA damage in a well-tested biological model, Drosophila melanogaster, exposed to a 2400 MHz frequency for different durations, i.e. 2, 4, 6, 8, 10, and 12 hours each day for five continuous days under ambient temperature and humidity conditions inside an exposure chamber. The flies were grouped into control, sham-exposed, and exposed groups, with 100 flies in each group. Well-known techniques, the Comet Assay and the TUNEL (Terminal deoxynucleotidyl transferase dUTP Nick End Labeling) Assay, were used to detect DNA damage and apoptosis, respectively. The experimental results showed DNA damage in the brain cells of Drosophila that increased with the duration of exposure when the control, sham-exposed, and exposed groups were compared, indicating that EMF radiation induced stress in the organism that leads to DNA damage and cell death. The processes of apoptosis and mutation follow similar pathways in all eukaryotic cells; therefore, studying apoptosis and genotoxicity in Drosophila is also relevant to human beings. Keywords: cell death, apoptosis, Comet Assay, DNA damage, Drosophila, electromagnetic fields, EMF, radio frequency, RF, TUNEL assay
Procedia PDF Downloads 167
3006 Typical Characteristics and Compositions of Solvent System in Application of Maceration Technology to Isolate Antioxidative Activated Extract of Natural Products
Authors: Yohanes Buang, Suwari
Abstract:
Society's increasing interest in the use and creation of herbal medicines has encouraged scientists and researchers to establish an ideal method for producing pharmaceutical extracts of the best quality and quantity. To obtain the most antioxidative extracts, the method used must operate at optimum conditions. Hence, the best method should not only provide the highest quantity and quality of the isolated pharmaceutical extracts but also be easy to perform, simple, fast, and cheap. In the present study, the characterization of solvents for the maceration technique involved several variables influencing the quantity and quality of the pharmaceutical extracts, namely the solvent's optimum acidity-alkalinity (pH), temperature, concentration, and contact time. Shifting the polarity of the solvent by combining water with ethanol (70:30 and 50:50) was also examined in order to fully characterize the best solvent system for the maceration technology. Among the three solvents tested on Myrmecodia pendens, taken as a model natural product, the results showed that the water solvent system, at alkaline pH and optimum temperature, concentration, and contact time, is the best system for maceration and yields the most antioxidative activated extracts. The optimum conditions of the water solvent are pH 9 or above, a concentration of 30 mg/mL, a contact time of 40 min, a temperature of 100 °C, and no ethanol used to replace part of the water solvent. The present study strongly recommends these solvent-system conditions for isolating pharmaceutical extracts of natural products using maceration technology. Keywords: extracts, herbal medicine, natural product, maceration technique
Procedia PDF Downloads 298
3005 Lithium Ion Supported on TiO2 Mixed Metal Oxides as a Heterogeneous Catalyst for Biodiesel Production from Canola Oil
Authors: Mariam Alsharifi, Hussein Znad, Ming Ang
Abstract:
Considering environmental issues and the shortage of conventional fossil fuel sources, biodiesel has gained attention as a promising, sustainable and renewable alternative to fossil-based fuel. It is synthesized by the transesterification of vegetable oils or animal fats with an alcohol (methanol or ethanol) in the presence of a catalyst. This study focuses on synthesizing a highly efficient Li/TiO2 heterogeneous catalyst for biodiesel production from canola oil. In this work, lithium was immobilized onto TiO2 by a simple impregnation method. The catalyst was evaluated in the transesterification reaction in a batch reactor under moderate reaction conditions. To study the effect of Li concentration, a series of LiNO3 loadings (20, 30, 40 wt. %) at different calcination temperatures (450, 600, 750 ºC) were evaluated. The Li/TiO2 catalysts were characterized by several spectroscopic and analytical techniques such as XRD, FT-IR, BET, TG-DSC and FESEM. The optimum values of lithium nitrate loading on TiO2 and calcination temperature were 30 wt. % and 600 ºC, respectively, giving a high conversion of 98%. The XRD study revealed that the insertion of Li improved the catalyst efficiency without any alteration in the structure of TiO2. The best performance of the catalyst was achieved using a methanol-to-oil ratio of 24:1 and 5 wt. % catalyst loading at a reaction temperature of 65 ºC for 3 hours. Moreover, the experimental kinetic data were consistent with the pseudo-first-order model, and the activation energy was 39.366 kJ/mol. The synthesized Li/TiO2 catalyst was also applied to transesterify used cooking oil and exhibited a 91.73% conversion. The prepared catalyst has shown high catalytic activity for producing biodiesel from fresh and used oil under mild reaction conditions. Keywords: biodiesel, canola oil, environment, heterogeneous catalyst, impregnation method, renewable energy, transesterification
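A hedged sketch of the kinetic treatment mentioned in this abstract: fitting a pseudo-first-order rate constant from conversion-versus-time data via -ln(1 - X) = k·t and extracting an activation energy from an Arrhenius plot, ln k = ln A - Ea/(R·T). The conversion values and temperatures below are illustrative, not the study's measurements.

```python
# Sketch: pseudo-first-order kinetics and Arrhenius activation energy.
# Conversion data and temperatures are illustrative placeholders.
import numpy as np

R = 8.314  # J/(mol K)

def rate_constant(times_min, conversions):
    """Fit k in -ln(1 - X) = k * t by least squares through the origin."""
    t = np.asarray(times_min, dtype=float)
    y = -np.log(1.0 - np.asarray(conversions, dtype=float))
    return float(np.sum(t * y) / np.sum(t * t))

# Hypothetical conversion-time data at three reaction temperatures (K).
runs = {
    328.15: ([30, 60, 120, 180], [0.35, 0.58, 0.83, 0.93]),
    318.15: ([30, 60, 120, 180], [0.25, 0.44, 0.69, 0.83]),
    308.15: ([30, 60, 120, 180], [0.17, 0.31, 0.52, 0.67]),
}

temps = np.array(sorted(runs))
ks = np.array([rate_constant(*runs[T]) for T in temps])

# Arrhenius plot: ln k = ln A - Ea / (R T); slope of ln k vs 1/T gives -Ea/R.
slope, intercept = np.polyfit(1.0 / temps, np.log(ks), 1)
print(f"Activation energy: {-slope * R / 1000:.1f} kJ/mol")
```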
Procedia PDF Downloads 174
3004 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces
Authors: Shweta Singh, Sudaman Katti
Abstract:
The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate based on a memory, are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in Unity software were obtained. Convolutional neural networks, more specifically, nature CNN architecture, are used for input processing in visual tasks, and comparison with standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique used popularly in reinforcement learning for stability and better policy updates while training, especially for continuous action spaces, which are used in this research work. Certain tasks in this paper are long horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities like storage of experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and self-attention mechanism in an encoder-decoder configuration proved to have better performance when compared to LSTM in terms of exploration and rewards achieved. Such memory based architectures can be used extensively in the field of cognitive robotics and reinforcement learning.Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity
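To make the self-attention mechanism referenced here concrete, below is a minimal numpy sketch of single-head scaled dot-product self-attention over a sequence of stored observations; the dimensions and weight initialisation are illustrative assumptions, not the architecture trained in this work.

```python
# Minimal sketch of single-head scaled dot-product self-attention,
# the building block of the transformer memory described in the abstract.
import numpy as np

def self_attention(x, d_k=32, seed=0):
    """x: (seq_len, d_model) sequence of remembered observations/embeddings."""
    rng = np.random.default_rng(seed)
    d_model = x.shape[1]
    W_q = rng.normal(0, d_model ** -0.5, (d_model, d_k))
    W_k = rng.normal(0, d_model ** -0.5, (d_model, d_k))
    W_v = rng.normal(0, d_model ** -0.5, (d_model, d_k))

    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the memory
    return weights @ V                                # context-mixed representations

# Example: 16 past observations, each embedded into 64 dimensions.
memory = np.random.rand(16, 64)
out = self_attention(memory)
print(out.shape)  # (16, 32)
```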
Procedia PDF Downloads 135
3003 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and used to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are being used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules and these can be used to estimate energy consumption; and (3) policy changes affecting the primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.Keywords: building archetypes, data analysis, energy benchmarks, GHG emissions
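A hedged sketch of the data-screening and benchmarking workflow this abstract outlines: IQR-based outlier removal of reported energy-use intensities followed by a simple regression on building age and size. The file name, column names and thresholds are assumptions about the public disclosure data, not the authors' exact method.

```python
# Sketch: screen reported energy data for outliers, then fit a simple benchmark model.
# File and column names ("floor_area_m2", "ekWh", "year_built") are assumed for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ontario_postsecondary_residences.csv")   # hypothetical file
df["eui"] = df["ekWh"] / df["floor_area_m2"]                # energy use intensity

# IQR screening to drop implausible reported values.
q1, q3 = df["eui"].quantile([0.25, 0.75])
iqr = q3 - q1
clean = df[(df["eui"] >= q1 - 1.5 * iqr) & (df["eui"] <= q3 + 1.5 * iqr)]

# Benchmark regression: EUI explained by building age and size.
clean = clean.assign(age=2016 - clean["year_built"])
X = sm.add_constant(clean[["age", "floor_area_m2"]])
model = sm.OLS(clean["eui"], X).fit()
print(model.params)           # benchmark coefficients
print(clean["eui"].median())  # median EUI as a simple benchmark value
```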
Procedia PDF Downloads 306
3002 An Assessment into Impact of Regional Conflicts upon Socio-Political Sustainability in Pakistan
Authors: Syed Toqueer Akhter, Muhammad Muzaffar Abbas
Abstract:
Conflicts in Pakistan result from a configuration of factors that are directly related to the system of the state, the unstable regional setting, and the geo-strategic location of Pakistan at large. This paper examines the impact of regional conflict on the socio-political sustainability of Pakistan. The magnitude of the spillover from a conflicted region is similar in size to the equivalent increase in domestic conflict. Pakistan has gone to war three times with India, and the border with India is regarded as one of the tensest borderlines in the world. Disagreements with India and the lack of dispute settlement mechanisms have negatively affected peace in the region, while the influx of illegal weapons and refugees from Afghanistan in the aftermath of 9/11 has exacerbated the level of internal conflict in Pakistan. Our empirical findings are based on data on regional conflict levels, regional trade, global trade, the defence capabilities of the region compared with those of Pakistan, and the government regime (autocratic or democratic) over 1972-2007. The paper proposes that the intensity of domestic conflict is associated with conflict in the region, regional trade, global trade and the government regime of Pakistan. The estimated model (OLS) implies that domestic conflict is affected positively and significantly by the long-term impact of conflict in the region. Also, when the defence capabilities of the region exceed those of Pakistan, domestic conflict is affected positively and significantly. Conflict in neighbouring countries is found to be a source of domestic conflict in Pakistan, whereas regional trade as well as the type of government regime in Pakistan significantly lowered the intensity of domestic conflict, while globalized trade reduces the risk of domestic conflict, though not significantly. Keywords: conflict, regional trade, socio-political instability
Procedia PDF Downloads 319
3001 Towards a Multilevel System of Talent Management in Small And Medium-Sized Enterprises: French Context Exploration
Authors: Abid Kousay
Abstract:
Having appeared and developed essentially in large companies and multinationals, Talent Management (TM) in Small and Medium-Sized Enterprises (SMEs) has remained an under-explored subject to this day. Although the literature on TM in the Anglo-Saxon context is developing, it remains concentrated in non-European contexts, and France in particular is overlooked. This article therefore aims to address these shortcomings by contributing to TM research through a multilevel approach, with the goal of reaching a holistic, global vision of the interactions between the various levels involved in applying TM. A qualitative research study carried out within 12 SMEs in France, built on the methodological perspective of grounded theory, is used to go beyond description and to generate or discover a theory, or even a unified theoretical explanation. Our theoretical contributions are the results of the grounded theory, the fruit of contextual considerations and of the dynamics of the multilevel approach. We aim firstly to determine the perception of talent and TM in SMEs. Secondly, we formalize TM in SMEs through the empowerment of all three levels of the organization (individual, collective, and organizational), and we generate a multilevel dynamic system model highlighting the institutionalization dimension in SMEs and a managerial conviction characterized by the dominant role of the leader. Thirdly, this first study sheds light on the importance of rigorous implementation of TM in SMEs in France by directing CEOs, HR and TM managers to focus on the elements that are upstream of TM implementation and influence the system internally. Indeed, our systematic multilevel approach reminds them of the importance of strategic alignment when translating TM policy into strategies and practices in SMEs. Keywords: French context, institutionalization, talent, multilevel approach, talent management system
Procedia PDF Downloads 200
3000 Research on Level Adjusting Mechanism System of Large Space Environment Simulator
Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng
Abstract:
A space environment simulator is a device for spacecraft testing. The KM8 large space environment simulator built in Tianjin Space City is the largest as well as the most advanced space environment simulator in China. A large deviation in spacecraft level will lead to abnormal operation of the spacecraft's thermal control devices during a thermal vacuum test. In order to avoid thermal vacuum test failure, a level adjusting mechanism system was developed for the KM8 large space environment simulator as one of its most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model was established. Based on data collected from level instruments and displacement sensors, stepping motors controlled by a PLC drive the four supporting legs in simultaneous movement. In addition, a PID algorithm is used to control the temperature of the supporting legs and level instruments, which work for long periods in the cold, black vacuum environment of the KM8 simulator during thermal vacuum tests. Based on the above methods, data acquisition and processing, analysis and calculation, real-time adjustment, and fault alarming for the level adjusting mechanism system were implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Commissioning showed that the level adjusting mechanism system of the KM8 large space environment simulator can meet the thermal vacuum test requirements of new-generation spacecraft. The performance and technical indicators of the level adjusting mechanism system, which provides important support for the development of spacecraft in China, are ahead of similar equipment worldwide. Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism
Procedia PDF Downloads 245
2999 Application of a Compact Wastewater Treatment Unit in a Rural Area
Authors: Mohamed El-Khateeb
Abstract:
Encompassing inventory, warehousing, and transportation management, logistics is a crucial predictor of firm performance. This has been extensively proven by extant literature in business and operations management. Logistics is also a fundamental determinant of a country's ability to access international markets. Available studies in international and transport economics have shown that limited transport infrastructure and underperforming transport services can severely affect international competitiveness. However, the evidence lacks the overall impact of logistics performance-encompassing all inventory, warehousing, and transport components- on global trade. In order to fill this knowledge gap, the paper uses a gravitational trade model with 155 countries from all geographical regions between 2007 and 2018. Data on logistics performance is obtained from the World Bank's Logistics Performance Index (LPI). First, the relationship between logistics performance and a country’s total trade is estimated, followed by a breakdown by the economic sector. Then, the analysis is disaggregated according to the level of technological intensity of traded goods. Finally, after evaluating the intensive margin of trade, the relevance of logistics infrastructure and services for the extensive trade margin is assessed. Results suggest that: (i) improvements in both logistics infrastructure and services are associated with export growth; (ii) manufactured goods can significantly benefit from these improvements, especially when both exporting and importing countries increase their logistics performance; (iii) the quality of logistics infrastructure and services becomes more important as traded goods are technology-intensive; and (iv) improving the exporting country's logistics performance is essential in the intensive margin of trade while enhancing the importing country's logistics performance is more relevant in the extensive margin.Keywords: low-cost, recycling, reuse, solid waste, wastewater treatment
Procedia PDF Downloads 195
2998 Harnessing Deep-Level Metagenomics to Explore the Three Dynamic One Health Areas: Healthcare, Domiciliary and Veterinary
Authors: Christina Killian, Katie Wall, Séamus Fanning, Guerrino Macori
Abstract:
Deep-level metagenomics offers a useful technical approach to explore the three dynamic One Health axes: healthcare, domiciliary and veterinary. There is currently limited understanding of the composition of complex biofilms, natural abundance of AMR genes and gene transfer occurrence in these ecological niches. By using a newly established small-scale complex biofilm model, COMBAT has the potential to provide new information on microbial diversity, antimicrobial resistance (AMR)-encoding gene abundance, and their transfer in complex biofilms of importance to these three One Health axes. Shotgun metagenomics has been used to sample the genomes of all microbes comprising the complex communities found in each biofilm source. A comparative analysis between untreated and biocide-treated biofilms is described. The basic steps include the purification of genomic DNA, followed by library preparation, sequencing, and finally, data analysis. The use of long-read sequencing facilitates the completion of metagenome-assembled genomes (MAG). Samples were sequenced using a PromethION platform, and following quality checks, binning methods, and bespoke bioinformatics pipelines, we describe the recovery of individual MAGs to identify mobile gene elements (MGE) and the corresponding AMR genotypes that map to these structures. High-throughput sequencing strategies have been deployed to characterize these communities. Accurately defining the profiles of these niches is an essential step towards elucidating the impact of the microbiota on each niche biofilm environment and their evolution.Keywords: COMBAT, biofilm, metagenomics, high-throughput sequencing
Procedia PDF Downloads 54
2997 Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Network: Case of Kinshasa City
Authors: Christian Kapuku, Seung-Young Kho
Abstract:
An accurate representation of the transportation system serving the region is one of the important aspects of transportation modeling. Such a representation often requires developing an abstract model of the system elements, which in turn requires a substantial amount of data, surveys and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such an accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of open source Internet data, especially in mapping technologies, as well as advances in Geographic Information Systems, opportunities to tackle these issues have arisen. Therefore, the objective of this paper is to demonstrate such an application through the practical case of developing the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct a digitized map of Transportation Analysis Zones from available scanned images. Centroids were then dynamically placed at the centers of activity using an activity density map. Next, the road network and its characteristics were built using OpenStreetMap data and other official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show high agreement between the built network and the satellite images, which can mostly be attributed to the use of open source data. Keywords: geographic information system (GIS), network construction, transportation database, open source data
Procedia PDF Downloads 166
2996 Numerical Studies on 2D and 3D Boundary Layer Blockage and External Flow Choking at Wing in Ground Effect
Authors: K. Dhanalakshmi, N. Deepak, E. Manikandan, S. Kanagaraj, M. Sulthan Ariff Rahman, P. Chilambarasan C. Abhimanyu, C. A. Akaash Emmanuel Raj, V. R. Sanal Kumar
Abstract:
In this paper using a validated double precision, density-based implicit standard k-ε model, the detailed 2D and 3D numerical studies have been carried out to examine the external flow choking at wing-in-ground (WIG) effect craft. The CFD code is calibrated using the exact solution based on the Sanal flow choking condition for adiabatic flows. We observed that at the identical WIG effect conditions the numerically predicted 2D boundary layer blockage is significantly higher than the 3D case and as a result, the airfoil exhibited an early external flow choking than the corresponding wing, which is corroborated with the exact solution. We concluded that, in lieu of the conventional 2D numerical simulation, it is invariably beneficial to go for a realistic 3D simulation of the wing in ground effect, which is analogous and would have the aspects of a real-time parametric flow. We inferred that under the identical flying conditions the chances of external flow choking at WIG effect is higher for conventional aircraft than an aircraft facilitating a divergent channel effect at the bottom surface of the fuselage as proposed herein. We concluded that the fuselage and wings integrated geometry optimization can improve the overall aerodynamic performance of WIG craft. This study is a pointer to the designers and/or pilots for perceiving the zone of danger a priori due to the anticipated external flow choking at WIG effect craft for safe flying at the close proximity of the terrain and the dynamic surface of the marine.Keywords: boundary layer blockage, chord dominated ground effect, external flow choking, WIG effect
Procedia PDF Downloads 269
2995 The Effect of Corporate Governance on Financial Stability and Solvency Margin for Insurance Companies in Jordan
Authors: Ghadeer A.Al-Jabaree, Husam Aldeen Al-Khadash, M. Nassar
Abstract:
This study aimed to investigate the effect of a well-designed corporate governance system on the financial stability of insurance companies listed on the ASE. Further, this study provides a comprehensive model for evaluating and analyzing insurance companies' financial position and prospects, and for comparing the degree to which corporate governance provisions are applied among Jordanian insurance companies. In order to achieve the goals of the study, the whole population of 27 listed insurance companies was examined through the variables of board of directors, audit committee, internal and external auditors, board and management ownership, and blockholders' identities. Statistical methods were applied using SPSS: descriptive statistics such as means and standard deviations were used to describe the variables, while the F-test and analysis of variance (ANOVA) were used to test the hypotheses of the study. The study revealed a significant effect of the corporate governance variables on financial stability, except for local companies not listed on the ASE, after accounting for the control variables, especially the debt ratio (leverage); it also showed that concentration in motor third-party insurance does not have a significant effect on the companies' financial stability during the study period. Moreover, the study concludes that the global financial crisis affected the investment side of insurance companies, with an insignificant effect on the technical side. Finally, some recommendations are presented, such as enhancing the laws and regulations that support the appropriate application of corporate governance, working to increase transparency in financial statement disclosures, and focusing on supporting the companies' technical provisions rather than focusing only on the profit side. Keywords: corporate governance, financial stability and solvency margin, insurance companies, Jordan
Procedia PDF Downloads 488
2994 Revenue Management of Perishable Products Considering Freshness and Price Sensitive Customers
Authors: Onur Kaya, Halit Bayer
Abstract:
Global grocery and supermarket sales are among the largest markets in the world, and perishable products such as fresh produce, dairy and meat constitute the biggest section of these markets. Because these products deteriorate over time, demand for them depends highly on their freshness. They become totally obsolete after a certain amount of time, causing a large amount of wastage and decreased grocery profits. In addition, customers are asking for higher product variety in perishable product categories, leading to less predictable demand per product and to more out-dating. Effective management of these perishable products is an important issue, since billions of dollars' worth of food is expired and wasted every month. We consider coordinated inventory and pricing decisions for perishable products with a time- and price-dependent random demand function. We use stochastic dynamic programming to model this system for both periodically reviewed and continuously reviewed inventory systems and prove certain structural characteristics of the optimal solution. We prove that the optimal ordering decision has a monotone structure and that the optimal price decreases over time; however, the optimal price changes non-monotonically with respect to inventory size. We also analyze the effect of different parameters on the optimal solution through numerical experiments. In addition, we analyze simple-to-implement heuristics, investigate their effectiveness, and extract managerial insights. This study gives valuable insights into the management of perishable products in order to decrease wastage and increase profits. Keywords: age-dependent demand, dynamic programming, perishable inventory, pricing
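A heavily simplified sketch of the stochastic dynamic programming formulation described here: a finite-horizon backward recursion over discrete inventory levels in which the retailer picks a price each period and unsold units expire at the end of the horizon. The demand form, price grid and expiry assumptions are all illustrative, not the paper's model.

```python
# Simplified sketch: finite-horizon pricing DP for a perishable product.
# Demand model, price grid and expiry assumptions are illustrative only.
from math import factorial
import numpy as np

T, MAX_INV = 5, 10                 # periods until expiry, max inventory
prices = [4.0, 6.0, 8.0]           # candidate prices

def demand_pmf(price, t, max_d=6):
    """Assumed Poisson-like demand that falls with price and with product age t."""
    lam = max(0.5, 4.0 - 0.4 * price - 0.3 * t)
    k = np.arange(max_d + 1)
    pmf = np.exp(-lam) * lam ** k / np.array([factorial(int(i)) for i in k])
    pmf[-1] += 1.0 - pmf.sum()     # lump the tail into the last bin
    return pmf

V = np.zeros((T + 1, MAX_INV + 1))   # V[T, x] = 0: expired stock has no value
policy = np.zeros((T, MAX_INV + 1))

for t in range(T - 1, -1, -1):       # backward recursion over periods
    for x in range(MAX_INV + 1):
        best = -np.inf
        for p in prices:
            pmf = demand_pmf(p, t)
            value = 0.0
            for d, prob in enumerate(pmf):
                sold = min(d, x)
                value += prob * (p * sold + V[t + 1, x - sold])
            if value > best:
                best, policy[t, x] = value, p
        V[t, x] = best

print(policy[0])   # optimal price for each starting inventory level at t = 0
```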
Procedia PDF Downloads 246
2993 Experimental Monitoring of the Parameters of the Ionosphere in the Local Area Using the Results of Multifrequency GNSS-Measurements
Authors: Andrey Kupriyanov
Abstract:
In recent years, much attention has been paid worldwide to the problems of ionospheric disturbances and their influence on the signals of global navigation satellite systems (GNSS). This is due to the increase in solar activity, the expanding scope of GNSS applications, the emergence of new satellite systems, the introduction of new frequencies, and many other factors. The influence of the Earth's ionosphere on the propagation of radio signals is an important factor in many applied fields of science and technology. The paper considers the application of transionospheric sounding, using measurements of GNSS signals, to determine the TEC distribution and the scintillations of the ionospheric layers. To calculate these parameters, the International Reference Ionosphere (IRI) model, refined for the local area, is used. The organization of operational monitoring of ionospheric parameters is analyzed using several NovAtel GPStation6 base stations. This setup allows primary processing of GNSS measurement data, calculation of TEC and detection of scintillation events, modeling of the ionosphere using the obtained data, data storage, and ionospheric correction of measurements. As a result of the study, it was shown that the transionospheric sounding method can reconstruct the altitude distribution of electron concentration over different altitude ranges and can provide operational information about the ionosphere that is necessary for solving a number of practical problems in many applications. Furthermore, the use of multi-frequency, multi-system GNSS equipment and dedicated software makes it possible to achieve the required accuracy and volume of measurements. Keywords: global navigation satellite systems (GNSS), GPstation6, international reference ionosphere (IRI), ionosphere, scintillations, total electron content (TEC)
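To make the TEC estimation step concrete, a minimal sketch of the standard dual-frequency relation is given below: the slant TEC is proportional to the difference of the two pseudoranges, STEC = f1²·f2²·(P2 − P1) / (40.3·(f1² − f2²)). The GPS L1/L2 frequencies are real constants; the pseudorange values and the neglect of inter-frequency biases are illustrative simplifications, not the GPStation6 processing chain.

```python
# Sketch: slant TEC from dual-frequency GNSS pseudoranges (instrumental biases neglected).
# Pseudorange values below are placeholders, not GPStation6 measurements.

F1 = 1_575.42e6   # GPS L1 frequency, Hz
F2 = 1_227.60e6   # GPS L2 frequency, Hz

def slant_tec(p1_m, p2_m):
    """Slant TEC in TEC units (1 TECU = 1e16 electrons/m^2)."""
    stec_el_m2 = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2)) * (p2_m - p1_m)
    return stec_el_m2 / 1e16

# Example: the L2 pseudorange is a few metres longer due to ionospheric delay.
print(f"{slant_tec(22_300_000.0, 22_300_004.5):.1f} TECU")
```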
Procedia PDF Downloads 180
2992 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years. It has become more important to understand customers' needs in this highly competitive market, especially the needs of those who are looking to switch service providers, so churn prediction is now a mandatory requirement for retaining those customers. Machine learning can be utilized to accomplish this, and churn prediction has become a very important machine learning classification topic in the telecommunications industry. Understanding the factors behind customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers' churn based on their past service usage history. Toward this objective, the study makes use of feature selection, normalization, and feature engineering. The study then compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results: Gradient Boosting with the feature selection technique performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well. Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score
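A hedged sketch of the kind of pipeline this abstract describes: normalization, univariate feature selection and a gradient-boosting classifier evaluated with F1 and ROC-AUC. The file name, column split, selector settings and hyperparameters are assumptions, not the study's exact configuration.

```python
# Sketch: churn classification pipeline with normalization, feature selection
# and gradient boosting; settings are illustrative, not the study's configuration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, roc_auc_score

df = pd.read_csv("orange_churn.csv")            # hypothetical numeric-feature dataset
X, y = df.drop(columns=["churn"]), df["churn"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),
    ("clf", GradientBoostingClassifier(random_state=0)),
])
pipe.fit(X_tr, y_tr)

pred = pipe.predict(X_te)
proba = pipe.predict_proba(X_te)[:, 1]
print("F1:", f1_score(y_te, pred), "ROC-AUC:", roc_auc_score(y_te, proba))
```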
Procedia PDF Downloads 133
2991 A Network Optimization Study of Logistics for Enhancing Emergency Preparedness in Asia-Pacific
Authors: Giuseppe Timperio, Robert De Souza
Abstract:
The combination of factors such as temperamental climate change, rampant urbanization of risk exposed areas, political and social instabilities, is posing an alarming base for the further growth of number and magnitude of humanitarian crises worldwide. Given the unique features of humanitarian supply chain such as unpredictability of demand in space, time, and geography, spike in the number of requests for relief items in the first days after the calamity, uncertain state of logistics infrastructures, large volumes of unsolicited low-priority items, a proactive approach towards design of disaster response operations is needed to achieve high agility in mobilization of emergency supplies in the immediate aftermath of the event. This paper is an attempt in that direction, and it provides decision makers with crucial strategic insights for a more effective network design for disaster response. Decision sciences and ICT are integrated to analyse the robustness and resilience of a prepositioned network of emergency strategic stockpiles for a real-life case about Indonesia, one of the most vulnerable countries in Asia-Pacific, with the model being built upon a rich set of quantitative data. At this aim, a network optimization approach was implemented, with several what-if scenarios being accurately developed and tested. Findings of this study are able to support decision makers facing challenges related with disaster relief chains resilience, particularly about optimal configuration of supply chain facilities and optimal flows across the nodes, while considering the network structure from an end-to-end in-country distribution perspective.Keywords: disaster preparedness, humanitarian logistics, network optimization, resilience
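A hedged sketch of the kind of network optimization exercise described here: a small facility-location model, written with PuLP, that chooses which candidate stockpile sites to open and how to serve demand points at minimum cost. The sites, demands, capacities and costs are invented placeholders, not the Indonesian case data.

```python
# Sketch: choose prepositioned stockpile sites and allocations at minimum cost.
# Sites, demands, capacities and costs are illustrative placeholders.
import pulp

sites = {"A": 500, "B": 400, "C": 600}           # candidate site -> capacity (tonnes)
open_cost = {"A": 100, "B": 80, "C": 120}        # fixed cost of opening a site
demand = {"D1": 300, "D2": 250, "D3": 200}       # demand point -> relief need (tonnes)
ship = {(s, d): 1.0 + 0.1 * (i + j)              # assumed unit shipping costs
        for i, s in enumerate(sites) for j, d in enumerate(demand)}

prob = pulp.LpProblem("preposition", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")
x = pulp.LpVariable.dicts("flow", ship, lowBound=0)

prob += pulp.lpSum(open_cost[s] * y[s] for s in sites) + \
        pulp.lpSum(ship[k] * x[k] for k in ship)
for d in demand:                                  # meet every demand point
    prob += pulp.lpSum(x[(s, d)] for s in sites) == demand[d]
for s in sites:                                   # respect capacity of opened sites
    prob += pulp.lpSum(x[(s, d)] for d in demand) <= sites[s] * y[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in sites if y[s].value() > 0.5])   # sites selected for stockpiles
```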
Procedia PDF Downloads 172
2990 A Context Aware Mobile Learning System with a Cognitive Recommendation Engine
Authors: Jalal Maqbool, Gyu Myoung Lee
Abstract:
Using smart devices for context aware mobile learning is becoming increasingly popular. This has led to mobile learning technology becoming an indispensable part of today’s learning environment and platforms. However, some fundamental issues remain - namely, mobile learning still lacks the ability to truly understand human reaction and user behaviour. This is due to the fact that current mobile learning systems are passive and not aware of learners’ changing contextual situations. They rely on static information about mobile learners. In addition, current mobile learning platforms lack the capability to incorporate dynamic contextual situations into learners’ preferences. Thus, this thesis aims to address these issues highlighted by designing a context aware framework which is able to sense learner’s contextual situations, handle data dynamically, and which can use contextual information to suggest bespoke learning content according to a learner’s preferences. This is to be underpinned by a robust recommendation system, which has the capability to perform these functions, thus providing learners with a truly context-aware mobile learning experience, delivering learning contents using smart devices and adapting to learning preferences as and when it is required. In addition, part of designing an algorithm for the recommendation engine has to be based on learner and application needs, personal characteristics and circumstances, as well as being able to comprehend human cognitive processes which would enable the technology to interact effectively and deliver mobile learning content which is relevant, according to the learner’s contextual situations. The concept of this proposed project is to provide a new method of smart learning, based on a capable recommendation engine for providing an intuitive mobile learning model based on learner actions.Keywords: aware, context, learning, mobile
Procedia PDF Downloads 244
2989 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as the decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain is related to it through a simple straight-line expression. Furthermore, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is presented in this paper. Keywords: Marv guidance, reentry trajectory, trajectory optimization, guidance gain selection
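A skeleton of the parametric gain study described here, assuming access to a trajectory simulation; the simulate_terminal() function below is a toy surrogate standing in for the 3DOF flat-earth model, and the gain ranges and tolerances are illustrative, not the study's values.

```python
# Skeleton of the closed-loop gain parametric study. simulate_terminal() is a toy
# surrogate; in practice it would run the 3DOF flat-earth trajectory simulation.
import numpy as np

def simulate_terminal(k1, k2):
    """Toy surrogate for the trajectory model: returns (miss distance m,
    impact angle error deg) for Vector-guidance gains (k1, k2)."""
    miss = 50.0 * abs(np.sin(k1) - 0.3 * k2) + 2.0
    angle_err = 2.0 * (k1 - 0.8 * k2)
    return miss, angle_err

k1_grid = np.linspace(0.5, 5.0, 46)      # assumed gain sweep ranges
k2_grid = np.linspace(0.5, 5.0, 46)
miss_tol, angle_tol = 10.0, 1.0          # assumed acceptance tolerances

permissible = []
for k1 in k1_grid:
    for k2 in k2_grid:
        miss, angle_err = simulate_terminal(k1, k2)
        if miss <= miss_tol and abs(angle_err) <= angle_tol:
            permissible.append((k1, k2))

# The permissible region can then be inspected for the lower/upper gain bounds
# and for the straight-line relation between the two gains reported in the study.
print(f"{len(permissible)} gain pairs satisfy both constraints")
```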
Procedia PDF Downloads 424