Search results for: nonlinear dynamic model
2773 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling
Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines
Abstract:
The Savonius vertical axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation to the wind direction; however, the power it develops is lower than that of other wind turbine types such as the Darrieus. To improve this performance, several studies have been carried out, such as optimizing the blade shape, using passive controls, and minimizing sources of power loss such as the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing into the CFD model a User Defined Function that accounts for the resisting torque. This User Defined Function is developed to simulate the action of the wind on the rotor; it receives the moment coefficient as an input and computes the rotational velocity to be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied to the turbine and on the resisting torque, which is modeled as a linear function. Linking the implemented User Defined Function with the CFD solver allows the real operation of the Savonius turbine exposed to wind to be simulated. The wind turbine takes some time to reach the stationary regime, where the rotational velocity becomes constant; at that point, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances
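For illustration, the following is a minimal sketch (not the authors' UDF; all parameter values are assumptions for the example) of the kind of rotor-speed update such a function performs: the aerodynamic moment is recovered from the moment coefficient, a resisting torque assumed linear in rotational speed is subtracted, and the rotational velocity imposed on the rotating region is integrated in time.

```python
# Illustrative sketch of a rotor-speed update driven by the moment coefficient.
# All parameters (rho, A, R, J, c0, c1, dt) are assumed values, not from the paper.
rho, A, R = 1.225, 0.5, 0.25    # air density [kg/m^3], swept area [m^2], rotor radius [m]
J = 0.05                        # rotor moment of inertia [kg.m^2]
c0, c1 = 0.02, 0.01             # resisting torque model: M_resist = c0 + c1 * omega
V_wind, dt = 7.0, 1e-3          # wind speed [m/s], time step [s]

def update_omega(omega, Cm):
    """One time step: aerodynamic moment minus linear resisting torque."""
    M_aero = 0.5 * rho * A * R * V_wind**2 * Cm   # from the CFD moment coefficient
    M_resist = c0 + c1 * omega                    # linear resisting torque
    domega_dt = (M_aero - M_resist) / J
    return omega + domega_dt * dt

omega = 0.0
for step in range(20000):                          # march until the regime is stationary
    Cm = 0.3                                       # placeholder; the CFD solver would supply this
    omega = update_omega(omega, Cm)
tsr = omega * R / V_wind                           # tip speed ratio at steady state
print(f"omega = {omega:.2f} rad/s, TSR = {tsr:.3f}")
```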
Procedia PDF Downloads 155
2772 Relationship Between Muscle Mass and Insulin Resistance in Cirrhotic Patients with Hepatitis B
Authors: Eyüp S. Akbas, Betul Ayaz, Beyza S. Haksever, Sema Basat
Abstract:
We aimed to evaluate the relationship between insulin resistance, muscle mass, and muscle strength in patients with hepatitis B virus-related cirrhosis. Our study included 65 patients with hepatitis B virus-related cirrhosis in the Child A and B groups and 65 healthy control individuals. The control group was selected from patients who were admitted to the internal medicine clinic and had no pathological values on routine examination. The muscle mass index was calculated with bioimpedance analysis for both groups to determine muscle strength and muscle mass. Handgrip strength and arm and calf circumferences were measured. In both groups, HOMA-IR was calculated to determine insulin resistance. The Homeostatic Model Assessment of Insulin Resistance (HOMA-IR) value was 3.47±3.80 in the study group and 1.83±1.20 in the control group. There were significant differences between the two groups in arm circumference, fasting insulin, fasting glucose, HOMA-IR, high-density lipoprotein (HDL), and total cholesterol. The correlation between muscle mass and insulin resistance was statistically insignificant, particularly in the study group; neither in the healthy control group nor across all groups was there a correlation between muscle mass and insulin resistance. The upper limit for HOMA-IR was set at 3.2. In the control group, 78.9% of individuals had HOMA-IR < 3.2 and 21.1% had HOMA-IR ≥ 3.2; in the study group, 68.3% had HOMA-IR < 3.2 and 31.7% had HOMA-IR ≥ 3.2. In our study, we did not find a relationship between muscle mass and insulin resistance in patients with liver cirrhosis. In the study group, we detected a positive relationship between muscle mass, handgrip strength, and calf circumference. We did not find a relationship between insulin resistance and handgrip strength.
Keywords: cirrhosis, hepatitis B, insulin resistance, muscle mass
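For reference, HOMA-IR is conventionally computed from fasting glucose and fasting insulin; the sketch below shows the standard formula together with the 3.2 cut-off used in the study (the sample values are illustrative, not the study data).

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """Standard HOMA-IR: (glucose [mmol/L] x insulin [uU/mL]) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Illustrative values only; 3.2 is the cut-off reported in the abstract.
value = homa_ir(5.4, 12.0)
print(round(value, 2), "insulin resistant" if value >= 3.2 else "not insulin resistant")
```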
Procedia PDF Downloads 151
2771 Barnard Feature Point Detector for Low-Contrast Periapical Radiography Image
Authors: Chih-Yi Ho, Tzu-Fang Chang, Chih-Chia Huang, Chia-Yen Lee
Abstract:
In dental clinics, dentists use periapical radiography images to assess the effectiveness of endodontic treatment of teeth with chronic apical periodontitis. Periapical radiography images are taken at different times to assess alveolar bone variation before and after root canal treatment and, furthermore, to judge whether the treatment was successful. Current clinical assessment of apical tissue recovery relies only on the dentist's personal experience. It is difficult to obtain standardized, objective interpretations because of differences in the dentist's or radiologist's background and knowledge. If periapical radiography images taken at different times could be registered well, the endodontic treatment could be evaluated. In image registration, it is necessary to assign representative control points to the transformation model to obtain good registration results. However, detection of representative control points (feature points) on periapical radiography images is generally very difficult. Regardless of which traditional detection methods are applied, sufficient feature points may not be detected because of the low-contrast characteristics of the x-ray image. The Barnard detector is a feature point detection algorithm based on grayscale value gradients, which can obtain sufficient feature points even when the gray-scale contrast is not obvious. However, the Barnard detector detects too many feature points, and they tend to be too clustered. This study uses the local extrema of clustered feature points and a suppression radius to overcome this problem, and compares different feature point detection methods. In the preliminary results, the feature points detected by the proposed method could serve as representative control points.
Keywords: feature detection, Barnard detector, registration, periapical radiography image, endodontic treatment
Procedia PDF Downloads 442
2770 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies
Authors: Jaballah Jamil
Abstract:
KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firms, which suggests that KLD ratings may not include private information. Moreover, the failure of KLD to accurately predict the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates the returns of a number of equally-weighted portfolios, excess stock returns, and the book-to-market ratio to different dimensions of KLD social responsibility ratings. We first find that over the last two decades, KLD rating changes significantly and negatively influence the stock returns and book-to-market ratio of rated firms. This finding suggests that a rise in the corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal annihilated KLD's effect on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
Keywords: KLD social rating agency, investors' perception, investment decision, financial performance
Procedia PDF Downloads 439
2769 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning
Authors: Andrea Treviño Gavito, Diego Klabjan, Sanjiv J. Shah
Abstract:
Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart’s structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in echocardiogram use for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower dimensional representations and the K-means algorithm for clustering them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance when compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in the F-1 score for frame-based clustering, and 25.9% in accuracy and 19.8% in the F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-sourced data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
Keywords: artificial intelligence, echocardiographic view detection, echocardiography, machine learning, self-supervised representation learning, unsupervised learning
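As an illustration of this pipeline, the sketch below encodes image patches with a small convolutional autoencoder and clusters the resulting embeddings with K-means. It is a minimal toy example: the architecture, patch size, and number of clusters are assumptions, not the paper's configuration, and random tensors stand in for echocardiographic patches.

```python
import torch, torch.nn as nn
from sklearn.cluster import KMeans

class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder; layer sizes are assumptions, not the paper's."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

patches = torch.rand(256, 1, 64, 64)          # stand-in for echocardiographic patches
model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                            # a few reconstruction epochs for illustration
    recon, _ = model(patches)
    loss = nn.functional.mse_loss(recon, patches)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, embeddings = model(patches)
views = KMeans(n_clusters=4, n_init=10).fit_predict(embeddings.numpy())
print(views[:20])                             # cluster labels used as view groups
```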
Procedia PDF Downloads 32
2768 The Population Death Model and Influencing Factors from the Data of The "Sixth Census": Zhangwan District Case Study
Authors: Zhou Shangcheng, Yi Sicen
Abstract:
Objective: To understand the mortality patterns of Zhangwan District in 2010 and provide a basis for the development of scientific and rational health policy. Methods: Data were collected from the Sixth Census of Zhangwan District and the disease surveillance system. The statistical analysis covered differences in mortality by age, gender, region, and time, as well as related factors. Methods developed for the Global Burden of Disease (GBD) Study by the World Bank and the World Health Organization (WHO) were adapted and applied to the population health data of Zhangwan District. The DALY rate per 1,000 was calculated for various causes of death. SPSS 16 was used for the statistical analysis. Results: The death data of Zhangwan District show that the crude mortality rate was 6.03‰. There was a significant difference in mortality between the male and female populations, at 7.37‰ and 4.68‰, respectively. Life expectancy at birth in Zhangwan District in 2010 was 78.40 years (male 75.93, female 81.03). The five leading causes of YLL in descending order were cardiovascular diseases (42.63 DALY/1000), malignant neoplasm (23.73 DALY/1000), unintentional injuries (5.84 DALY/1000), respiratory diseases (5.43 DALY/1000), and respiratory infections (2.44 DALY/1000). In addition, marital status and educational level were related to mortality to a certain extent. Conclusion: Zhangwan District, at the city level, has a relatively low mortality level. The mortality of the total population of Zhangwan District shows a downward trend, and life expectancy is rising.
Keywords: sixth census, Zhangwan district, death level differences, influencing factors, cause of death
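To illustrate the burden-of-disease calculation referred to above, the sketch below computes a simplified YLL rate per 1,000 (deaths multiplied by the remaining standard life expectancy at the age of death, without the GBD discounting or age weighting); the numbers are placeholders, not the census data.

```python
# Simplified YLL-per-1,000 sketch (no discounting or age weighting);
# deaths and life expectancies are illustrative, not the census data.
population = 350_000
deaths_by_cause = {   # cause: list of (number of deaths, standard remaining life expectancy at age of death)
    "cardiovascular diseases": [(120, 12.0), (80, 20.0)],
    "malignant neoplasm":      [(60, 18.0), (40, 25.0)],
}

def yll_rate_per_1000(groups, pop):
    yll = sum(n * remaining_life for n, remaining_life in groups)
    return 1000.0 * yll / pop

for cause, groups in deaths_by_cause.items():
    print(cause, round(yll_rate_per_1000(groups, population), 2), "YLL/1000")
```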
Procedia PDF Downloads 270
2767 Fractal Nature of Granular Mixtures of Different Concretes Formulated with Different Methods of Formulation
Authors: Fatima Achouri, Kaddour Chouicha, Abdelwahab Khatir
Abstract:
It is clear that quality concrete must be made with selected materials chosen in optimum proportions so that, after placement, a minimum of voids remains in the produced material. The different formulation methods in use are mostly based on a granular curve that describes an 'optimal granularity'. Many authors have engaged in fundamental research on granular arrangements. Comparing mathematical models that reproduce these granular arrangements with experimental measurements of compactness shows that the minimum porosity over a given granular extent follows a power law. Thus, the best compactness in a finite medium is obtained with power laws, such as those of Furnas, Fuller, or Talbot, each favoring a particular exponent between 0.20 and 0.50. These considerations converge on the assumption that Caquot's optimal granularity can be approximated by a power law. By analogy, the mixture can then be analyzed as a fractal-type granular structure, since the internal similarity properties that characterize fractal objects are also expressed by a power law. Optimized mixtures may be described as a succession of granular classes filling the container according to a regular hierarchical distribution, which would give the mix, by cascading effects, the same structure at different scales. This model is likely appropriate over the entire extent of the size distribution of the components, from correctly deflocculated cement particles (and silica fume) of micrometric dimensions to chippings of sometimes several tens of millimeters. As part of this research, the aim is to illustrate the application of fractal analysis to characterize optimized granular concrete mixtures through a so-called fractal dimension; different concretes were studied, and a fractal structure of their granular mixtures was demonstrated regardless of the formulation method or the type of concrete.
Keywords: concrete formulation, fractal character, granular packing, method of formulation
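As a concrete illustration of the power-law grading curves mentioned above (Fuller, Talbot), the sketch below evaluates the cumulative percent passing P(d) = 100·(d/D)^n for a set of sieve sizes; the sizes and the exponent are illustrative assumptions, not data from the study.

```python
import numpy as np

def power_law_passing(d, d_max, n=0.5):
    """Power-law grading curve: cumulative percent passing P(d) = 100*(d/D)^n.
    n = 0.5 corresponds to the classical Fuller curve; the abstract notes exponents
    between 0.20 and 0.50 depending on the model (Furnas, Fuller, Talbot)."""
    return 100.0 * (np.asarray(d, dtype=float) / d_max) ** n

sizes_mm = [0.01, 0.1, 1.0, 5.0, 20.0]       # illustrative sieve sizes
print(power_law_passing(sizes_mm, d_max=20.0, n=0.45).round(1))
```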
Procedia PDF Downloads 259
2766 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures
Authors: Milad Abbasi
Abstract:
Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their lightweight and specific energy absorption capabilities. Although it is impossible to predict composite mechanical properties directly using theoretical methods, various studies have been conducted in the literature to accurately simulate the energy-absorbing behavior of CFRP structures. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing costly experimental efforts. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Interestingly, the proposed method considerably condensed the numerical and experimental efforts in the simulation and calibration of CFRP composite tubes subjected to compressive loading.
Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network
Procedia PDF Downloads 153
2765 Body Mass Components in Young Soccer Players
Authors: Elizabeta Sivevska, Sunchica Petrovska, Vaska Antevska, Lidija Todorovska, Sanja Manchevska, Beti Dejanova, Ivanka Karagjozova, Jasmina Pluncevic Gligoroska
Abstract:
Introduction: Body composition plays an important role in the selection of young soccer players and is associated with their successful performance. The most commonly used model of body composition divides the body into two compartments: the fat component and fat-free mass (muscular and bone components). The aims of the study were to determine the body composition parameters of young male soccer players and to show the differences between age groups. Material and methods: A sample of 52 young male soccer players, aged 9 to 14 years, was divided into two groups according to age (group 1 aged 9 to 12 years and group 2 aged 12 to 14 years). Anthropometric measurements were taken according to the method of Mateigka. The following measurements were made: body weight, body height, circumferences (arm, forearm, thigh, and calf), diameters (elbow, knee, wrist, ankle), and skinfold thickness (biceps, triceps, thigh, leg, chest, abdomen). The measurements were used in Mateigka's equations. Results: Body mass components were analyzed as absolute values (in kilograms) and as percentages: the muscular component (MM kg and MM%), the bone component (BC kg and BC%), and the body fat (BF kg and BF%). The group up to 12 years showed the following mean values of the analyzed parameters: MM=21.5 kg; MM%=46.3%; BC=8.1 kg; BC%=19.1%; BF=6.3 kg; BF%=15.7%. The second group, aged 12-14 years, had the following mean values: MM=25.6 kg; MM%=48.2%; BC=11.4 kg; BC%=21.6%; BF=8.5 kg; BF%=14.7%. Conclusions: The young soccer players aged 12 to 14 years, who are in the pre-pubertal phase of growth and development, had a higher bone component (p<0.05) compared to the younger players. There was no significant difference in the muscular and fat body components between the two groups of young soccer players.
Keywords: body composition, young soccer players, body fat, fat-free mass
Procedia PDF Downloads 458
2764 A Study of Body Weight and Type Traits Recorded on Hairy Goat in Punjab, Pakistan
Authors: A. Qayyum, G. Bilal, H. M. Waheed
Abstract:
The objectives of the study were to determine phenotypic variation in Hairy goats for quantitative and qualitative traits and to analyze the relationship between different body measurements and body weight in Hairy goats. Data were collected from the Barani Livestock Production Research Institute (BLPRI) at Kherimurat, Attock, and from potential farmers who were raising Hairy goats in the Potohar region. Twelve (12) phenotypic parameters were measured on 99 adult Hairy goats (18 male and 81 female). Four qualitative and 8 quantitative traits were investigated. Qualitative traits were visually observed and expressed as percentages. Descriptive analysis was done on the quantitative variables. The Hairy goats had a predominately black body coat color (72%), whereas white (11%) and brown (11%) body coat colors were also observed. Both pigmented (45.5%) and non-pigmented (54.5%) body skin types were observed in the breed. Horns were present in the majority (91%) of animals. Most of the animals (83%) had straight facial head profiles. The analysis was performed in SAS OnDemand for Academics using the PROC MIXED procedure. Overall means ± SD of body weight (BW), body length (BL), height at wither (HAW), ear length (EL), head length (HL), heart girth (HG), tail length (TL), and muzzle circumference (MC) were 41.44 ± 12.21 kg, 66.40 ± 7.87 cm, 75.17 ± 7.83 cm, 22.99 ± 6.75 cm, 15.07 ± 3.44 cm, 76.54 ± 8.80 cm, 18.28 ± 4.18 cm, and 26.24 ± 5.192 cm, respectively. Sex had a significant effect on BL and HG (P < 0.05), whereas BW, HAW, EL, HL, TL, and MC were not significantly affected (P > 0.05). Herd had a significant effect on BW, BL, HAW, HL, HG, and TL (P < 0.05), but not on EL and MC (P > 0.05). Hairy goats appear to have potential for selection as a mutton breed in the Potohar region of Punjab. The findings of the present study would help in the characterization and conservation of Hairy goats using genetic and genomic tools in the future.
Keywords: body weight, Hairy goat, type traits, Punjab, Pakistan
Procedia PDF Downloads 66
2763 The Effect of an Infill on the Bearing Capacity and Stiffness of Infilled Frames
Authors: Goran Baloevic, Jure Radnic, Nikola Grgic
Abstract:
The application of frames with masonry or panel infill is common in engineering practice. In these cases, the frame is often considered to be the primary structure, while the infill is considered to be a secondary structure. In past calculations, the infill was rarely included in the design of frame structures in terms of their bearing capacity and safety. Recent calculations of such structures necessarily include the effect of the infill, since it contributes to the stiffness and bearing capacity of the overall system, especially under horizontal loads. In certain cases, if the infill is not included in the seismic design of frame structures, the result can be lower design safety. However, since different configurations of the infill through the building's height can be used, it is possible that the contribution of such infill to the overall bearing capacity is lower and that the seismic forces on the building are increased due to the greater stiffness of the structure. So far, many experimental and numerical studies on the behavior of infilled frames under horizontal static forces and earthquakes have been performed. In this paper, several masonry-infilled concrete and steel frames under horizontal static forces and earthquake loading are analysed. The experimental shake-table results and the numerical results are compared in terms of the bearing capacity of bare and infilled frames. The stiffness of the frames and the infill were varied, along with the position of the infill and the type of openings. Cases with positive and negative effects of the infill on the bearing capacity of the frames were considered. Finally, the main conclusions and recommendations for the practical application and design of masonry-infilled concrete and steel frames are given.
Keywords: bearing capacity, infilled frame, numerical model, shake table
Procedia PDF Downloads 464
2762 Modified Clusterwise Regression for Pavement Management
Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella
Abstract:
Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. The prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters based on both characteristics and a performance measure. This grouping is not always possible due to limited information. It is impractical to include all the potentially significant factors because some of them are unobserved or difficult to measure. The historical performance of pavement segments could be used as a proxy to incorporate the effect of the missing significant factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously. CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. The results illustrate the advantage of using CLR. Previous studies have used CLR along with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
Keywords: clusterwise regression, pavement management system, performance model, optimization
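To make the idea concrete, the sketch below implements a simple alternating heuristic for clusterwise linear regression: fit one linear model per cluster, reassign each observation to the cluster whose model predicts it best, and repeat. It is only an illustrative approximation of the mathematical-programming formulation used in the paper, with synthetic data standing in for the Nevada pavement segments.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def clusterwise_regression(X, y, k=2, n_iter=20, seed=0):
    """Greedy CLR heuristic: alternate between fitting one linear model per cluster
    and reassigning each observation to the cluster with the smallest squared error."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(y))
    models = [LinearRegression() for _ in range(k)]
    for _ in range(n_iter):
        for j in range(k):
            mask = labels == j
            if mask.sum() > X.shape[1]:          # need enough points to fit
                models[j].fit(X[mask], y[mask])
        errors = np.column_stack([(y - m.predict(X)) ** 2 for m in models])
        labels = errors.argmin(axis=1)
    return labels, models

# Synthetic pavement segments: IRI driven by two different regimes.
rng = np.random.default_rng(1)
X = rng.random((300, 2))                         # e.g. age and traffic loading (assumed features)
y = np.where(X[:, 0] > 0.5, 1.0 + 3.0 * X[:, 1], 4.0 - 2.0 * X[:, 1]) + rng.normal(0, 0.1, 300)
labels, models = clusterwise_regression(X, y, k=2)
print(np.bincount(labels))                       # segments per recovered cluster
```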
Procedia PDF Downloads 251
2761 The Essence of Culture and Religion in Creating Disaster Resilient Societies through Corporate Social Responsibility
Authors: Repaul Kanji, Rajat Agrawal
Abstract:
In this era, when issues like climate change and disasters are topics of discussion at national and international forums, the causative role of corporations in such events is often questioned. It is beyond doubt that rapid industrialisation and development have taken a toll in the form of climate change and, in some cases, even disasters. Thus, the demand that corporations fulfil their responsibilities, in the form of rescue and relief in times of disaster, rehabilitation, and even mitigation and preparedness to adapt to the oncoming changes, is obvious. But how can the responsibilities of corporations be channelised to ensure all this, i.e., to develop a resilient society? More than that, which factors, when emphasised, can lead to the holistic development of society? To answer this query, an extensive literature review was done to identify several enablers, such as the legislation of a nation, the role of brand and reputation, the ease of doing Corporate Social Responsibility, the mission and vision of an organisation, and religion and culture, as tools for building disaster resilience. A questionnaire survey, interviews with experts and academicians, and interpretive structural modelling (ISM) were then used to construct a multi-hierarchy model depicting the contextual relationships among the identified enablers. The study revealed that culture and religion are the most powerful driver, affecting the other enablers either directly or indirectly. Taking cognisance of the fact that the idea of separation between religion and the workplace (business) resides subconsciously within society, the study tries to interpret the outcome of the ISM through the lenses of past research (The Integrating Box) and explores how it can be leveraged to build a resilient society.
Keywords: corporate social responsibility, interpretive structural modelling, disaster resilience and risk reduction, the integration box (TIB)
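For readers unfamiliar with ISM, its core computational step is turning a binary influence (adjacency) matrix elicited from experts into a final reachability matrix via transitive closure, from which driving power and dependence are read off. The sketch below shows that step on a small illustrative matrix; the enabler relationships are assumptions, not the study's expert data.

```python
import numpy as np

def final_reachability(adjacency):
    """ISM core step: reachability matrix = transitive closure of (A + I) over Boolean algebra."""
    n = adjacency.shape[0]
    reach = ((adjacency + np.eye(n, dtype=int)) > 0).astype(int)
    for k in range(n):                            # Warshall-style closure
        reach = ((reach + reach[:, [k]] * reach[[k], :]) > 0).astype(int)
    return reach

# Illustrative 4-enabler example (1 = row enabler influences column enabler).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
R = final_reachability(A)
print("driving power:", R.sum(axis=1))            # row sums
print("dependence   :", R.sum(axis=0))            # column sums
```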
Procedia PDF Downloads 209
2760 Research Progress of the Relationship between Urban Rail Transit and Residents' Travel Behavior during 1999-2019: A Scientific Knowledge Mapping Based on CiteSpace and VOSviewer
Authors: Zheng Yi
Abstract:
Among the attempts made worldwide to foster urban and transport sustainability, transit-oriented development is certainly one of the most successful. Residents' travel behavior is a central concern in research on the impacts of transit-oriented development. This study takes 620 English-language journal papers from the Web of Science Core Collection as its objects and maps out the scientific knowledge of the field using co-citation analysis, co-word analysis, citation network analysis, and visualization techniques. The paper traces the research hotspots and the evolution of research on the relationship between urban rail transit and residents' travel behavior from 1999 to 2019. According to the results of the time-zone view and burst-detection analyses, the paper discusses the trends of the next stage of international research. The results show that over the past 20 years the research has focused on the following keywords: land use, behavior, model, built environment, impact, travel behavior, walking, physical activity, smart card, big data, simulation, and perception. According to the different research contents, the key literature is further divided into the following topics: attributes of the built environment, land use, transportation networks, and transportation policies. The results of this paper can help readers understand the related research and achievements systematically. These results can also provide a reference for identifying the main challenges that relevant research needs to address in the future.
Keywords: urban rail transit, travel behavior, knowledge map, evolution of researches
Procedia PDF Downloads 110
2759 Role of Transient Receptor Potential Vanilloid 1 in Electroacupuncture Analgesia on Chronic Inflammatory Pain in Mice
Authors: Jun Yang, Ching-Liang Hsieh, Yi-Wen Lin
Abstract:
Chronic inflammatory pain results from peripheral tissue injury or local inflammation, which increases the release of protons, histamines, adenosine triphosphate, and several proinflammatory cytokines. Transient receptor potential vanilloid 1 (TRPV1) is involved in fibromyalgia, neuropathic pain, and inflammatory pain; however, its exact mechanisms in chronic inflammatory pain are still unclear. We investigated the analgesic effect of electroacupuncture (EA) by injecting complete Freund's adjuvant (CFA) into the hind paw of mice to induce chronic inflammatory pain ( > 14 d). Our results showed that EA significantly reduced chronic mechanical and thermal hyperalgesia in the chronic inflammatory pain model. Chronic mechanical and thermal hyperalgesia was also abolished in TRPV1−/− mice. TRPV1 increased in the dorsal root ganglion (DRG) and spinal cord (SC) at 2 weeks after CFA injection. The expression levels of downstream molecules such as pPKA, pPI3K, and pPKC increased, as did those of pERK, pp38, and pJNK. Transcription factors (pCREB and pNFκB) and nociceptive ion channels (Nav1.7 and Nav1.8) were involved in this process. Inflammatory mediators such as GFAP (glial fibrillary acidic protein), S100B, and RAGE (receptor for advanced glycation endproducts) were also involved. The expression levels of these molecules were reduced in EA-treated and TRPV1−/− mice but not in the sham EA group. The present study demonstrated that EA or TRPV1 gene deletion reduced chronic inflammatory pain through TRPV1 and related molecules. In addition, our data provide evidence to support the clinical use of EA for treating chronic inflammatory pain.
Keywords: auricular electric-stimulation, epileptic seizures, anti-inflammation, electroacupuncture
Procedia PDF Downloads 176
2758 Political Deprivations, Political Risk and the Extent of Skilled Labor Migration from Pakistan: Findings of a Time-Series Analysis
Authors: Syed Toqueer Akhter, Hussain Hamid
Abstract:
Over the last few decades, an upward trend has been observed in labor migration from Pakistan. The emigrants are not only economically motivated but also in search of a safe living environment in more developed countries in Europe, North America, and the Middle East. The opportunity cost of migration comes in the form of brain drain, that is, the loss of qualified and skilled human capital. Throughout the history of Pakistan, situations of political instability have emerged, ranging from violations of political rights and political disappearances to political assassinations. Providing security to citizens is a major issue in Pakistan due to the increase in crime and terrorist activities. The aim of the study is to test the impact of political instability, appearing in the form of political terror, violation of political rights, and restriction of civil liberty, on the skilled migration of labor. Three proxies are used to measure political instability: the political terror scale (a scale of 1-5 capturing the political terror and violence that a country encounters in a particular year), political rights (a rating of 1-7 that describes the ability of people to participate without restraint in the political process), and civil liberty (a rating of 1-7, defined as freedom of expression and rights without government intervention). Using time series data from 1980-2011, distributed lag models were used for estimation, because migration is not a one-time process; previous events and earlier migration can lead to more migration. Our research clearly shows that political instability, appearing in the form of political terror, political rights, and civil liberty, is significant in explaining the extent of skilled migration from Pakistan.
Keywords: skilled labor migration, political terror, political rights, civil liberty, distributed lag model
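As an illustration of the estimation approach, the sketch below fits a finite distributed lag specification by OLS, regressing a migration series on current and lagged values of a political terror indicator. The series is synthetic and the variable names are assumptions; it only mirrors the structure of the model described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic annual series standing in for 1980-2011 style data.
rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({"political_terror": rng.integers(1, 6, n).astype(float)})
df["migration"] = (10 + 2 * df["political_terror"]
                   + 1.5 * df["political_terror"].shift(1).fillna(3)
                   + rng.normal(0, 1, n))

# Finite distributed lag model: current and lagged values of the regressor.
df["terror_lag1"] = df["political_terror"].shift(1)
df["terror_lag2"] = df["political_terror"].shift(2)
data = df.dropna()
X = sm.add_constant(data[["political_terror", "terror_lag1", "terror_lag2"]])
model = sm.OLS(data["migration"], X).fit()
print(model.params.round(2))
```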
Procedia PDF Downloads 1029
2757 DNA Damage and Apoptosis Induced in Drosophila melanogaster Exposed to Different Duration of 2400 MHz Radio Frequency-Electromagnetic Fields Radiation
Authors: Neha Singh, Anuj Ranjan, Tanu Jindal
Abstract:
Over the last decade, the exponential growth of mobile communication has been accompanied by a parallel increase in the density of electromagnetic fields (EMF). The continued expansion of mobile phone usage raises important questions, as EMF, especially radio frequency (RF), has long been suspected of having biological effects. In the present experiments, we studied the effects of RF-EMF on cell death (apoptosis) and DNA damage in a well-tested biological model, Drosophila melanogaster, exposed to a 2400 MHz frequency for different durations, i.e., 2 hrs, 4 hrs, 6 hrs, 8 hrs, 10 hrs, and 12 hrs each day for five continuous days, at ambient temperature and humidity inside an exposure chamber. The flies were divided into control, sham-exposed, and exposed groups, with 100 flies in each group. In this study, the well-known Comet Assay and TUNEL (Terminal deoxynucleotide transferase dUTP Nick End Labeling) Assay were used to detect DNA damage and apoptosis, respectively. The experimental results showed DNA damage in the brain cells of Drosophila that increased with the duration of exposure, as observed when the results of the control, sham-exposed, and exposed groups were compared; this indicates that EMF radiation induced stress in the organism, leading to DNA damage and cell death. The processes of apoptosis and mutation follow similar pathways in all eukaryotic cells; therefore, studying apoptosis and genotoxicity in Drosophila is similarly relevant for human beings.
Keywords: cell death, apoptosis, Comet Assay, DNA damage, Drosophila, electromagnetic fields, EMF, radio frequency, RF, TUNEL assay
Procedia PDF Downloads 169
2756 Typical Characteristics and Compositions of Solvent System in Application of Maceration Technology to Isolate Antioxidative Activated Extract of Natural Products
Authors: Yohanes Buang, Suwari
Abstract:
The increasing interest of society in the use and creation of herbal medicines has encouraged scientists and researchers to establish an ideal method to produce pharmaceutical extracts of the best quality and quantity. To obtain the most antioxidative extracts, the method used must operate under optimum conditions. Hence, the best method not only provides the highest quantity and quality of the isolated pharmaceutical extracts but is also easy to perform, simple, fast, and cheap. In the present study, the characterization of solvents in the maceration technique involved variables influencing the quantity and quality of the pharmaceutical extracts, such as the solvent's optimum acidity-alkalinity (pH), temperature, concentration, and contact time. Shifting the polarity of the solvent by combining water with ethanol (70:30 and 50:50) was also examined, in order to fully map the best solvent system for applying the maceration technology. Among the three solvents tested on Myrmecodia pendens, as a model natural product, the results showed that the water solvent system, under conditions of alkaline pH and optimum temperature, concentration, and contact time, is the best system for obtaining the highest yield of antioxidative active extracts. The optimum conditions of the water solvent are pH 9 and above, a concentration of 30 mg/mL, a contact time of 40 min, a temperature of 100 °C, and no ethanol replacing part of the water solvent. The present study strongly recommends these solvent system conditions for isolating the pharmaceutical extracts of natural products using the maceration technology.
Keywords: extracts, herbal medicine, natural product, maceration technique
Procedia PDF Downloads 299
2755 Lithium Ion Supported on TiO2 Mixed Metal Oxides as a Heterogeneous Catalyst for Biodiesel Production from Canola Oil
Authors: Mariam Alsharifi, Hussein Znad, Ming Ang
Abstract:
Considering environmental issues and the shortage of conventional fossil fuel sources, biodiesel has emerged as a promising way to shift away from fossil-based fuel towards sustainable and renewable energy. It is synthesized by transesterification of vegetable oils or animal fats with alcohol (methanol or ethanol) in the presence of a catalyst. This study focuses on synthesizing a highly efficient Li/TiO2 heterogeneous catalyst for biodiesel production from canola oil. In this work, lithium was immobilized onto TiO2 by a simple impregnation method. The catalyst was evaluated by the transesterification reaction in a batch reactor under moderate reaction conditions. To study the effect of Li concentration, a series of LiNO3 loadings (20, 30, 40 wt.%) at different calcination temperatures (450, 600, 750 ºC) were evaluated. The Li/TiO2 catalysts were characterized by several spectroscopic and analytical techniques, such as XRD, FT-IR, BET, TG-DSC, and FESEM. The optimum values of lithium nitrate loading on TiO2 and calcination temperature are 30 wt.% and 600 ºC, respectively, giving a high conversion of 98%. The XRD study revealed that the insertion of Li improved the catalyst efficiency without any alteration in the structure of TiO2. The best catalyst performance was achieved when using a methanol to oil ratio of 24:1 and 5 wt.% catalyst loading at a reaction temperature of 65 °C for 3 hours of reaction time. Moreover, the experimental kinetic data were compatible with the pseudo-first order model, and the activation energy was 39.366 kJ/mol. The synthesized Li/TiO2 catalyst was also applied to transesterify used cooking oil and exhibited a conversion of 91.73%. The prepared catalyst showed high catalytic activity for producing biodiesel from fresh and used oil under mild reaction conditions.
Keywords: biodiesel, canola oil, environment, heterogeneous catalyst, impregnation method, renewable energy, transesterification
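To illustrate the kinetic treatment mentioned above, the sketch below fits the pseudo-first order model -ln(1 - X) = k·t at several temperatures and then estimates the activation energy from the Arrhenius relation ln k = ln A - Ea/(R·T). The conversion data are illustrative placeholders, not the paper's measurements.

```python
import numpy as np

# Pseudo-first-order model: -ln(1 - X) = k * t, fitted at several temperatures,
# then Ea from the Arrhenius relation ln k = ln A - Ea / (R T).
# Conversion data below are illustrative placeholders, not the measured values.
R_GAS = 8.314  # J/(mol K)
runs = {                       # temperature [K]: (time [min], conversion X)
    318.0: ([30, 60, 120, 180], [0.35, 0.55, 0.78, 0.88]),
    328.0: ([30, 60, 120, 180], [0.45, 0.68, 0.88, 0.95]),
    338.0: ([30, 60, 120, 180], [0.55, 0.80, 0.95, 0.98]),
}

ln_k, inv_T = [], []
for T, (t, X) in runs.items():
    t, X = np.array(t, float), np.array(X, float)
    k = np.polyfit(t, -np.log(1.0 - X), 1)[0]    # slope of -ln(1-X) vs t gives k
    ln_k.append(np.log(k))
    inv_T.append(1.0 / T)

slope = np.polyfit(inv_T, ln_k, 1)[0]            # slope of the Arrhenius plot = -Ea / R
Ea = -slope * R_GAS / 1000.0                     # kJ/mol
print(f"Estimated activation energy: {Ea:.1f} kJ/mol")
```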
Procedia PDF Downloads 176
2754 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces
Authors: Shweta Singh, Sudaman Katti
Abstract:
The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research instead makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders that use self-attention and operate on a memory are employed. In this research work, results were obtained for various 3D visual and non-visual reinforcement learning tasks designed in the Unity software. Convolutional neural networks, more specifically the Nature CNN architecture, are used for input processing in the visual tasks, and a comparison with the standard long short-term memory (LSTM) architecture is performed both for visual tasks based on CNNs and for non-visual tasks based on coordinate inputs. This work combines the transformer architecture with the proximal policy optimization technique, used widely in reinforcement learning for stability and better policy updates during training, especially for the continuous action spaces employed here. Certain tasks in this paper are long horizon tasks that run for a longer duration and require extensive use of memory-based functionality, such as storing experiences and choosing appropriate actions based on recall. The transformer, which uses memory and a self-attention mechanism in an encoder-decoder configuration, proved to perform better than the LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in the fields of cognitive robotics and reinforcement learning.
Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity
Procedia PDF Downloads 136
2753 An Assessment into Impact of Regional Conflicts upon Socio-Political Sustainability in Pakistan
Authors: Syed Toqueer Akhter, Muhammad Muzaffar Abbas
Abstract:
Conflicts in Pakistan are the result of a configuration of factors that are directly related to the system of the state, the unstable regional setting, and the geo-strategic location of Pakistan at large. This paper examines the impact of regional conflict on the socio-political sustainability of Pakistan. The magnitude of the spillover from a conflict-affected region is similar in size to the equivalent increase in domestic conflict. Pakistan has gone to war three times with India, and the border with India is named among the tensest borderlines in the world. Disagreements with India and the lack of dispute settlement mechanisms have negatively affected peace in the region, while the influx of illegal weapons and refugees from Afghanistan in the aftermath of the 9/11 incident has exacerbated the level of internal conflict in Pakistan. Our empirical findings are based on data collected on regional conflict levels, regional trade, global trade, the comparative defence capabilities of the region in contrast to Pakistan, and the government regime (autocratic, democratic) over 1972-2007. It is proposed in this paper that the intensity of domestic conflict is associated with conflict in the region, regional trade, global trade, and the government regime of Pakistan. The estimated model (OLS) implies that domestic conflict is affected positively and significantly by the long-term impact of conflict in the region. Also, if the defence capabilities of the region are better than those of Pakistan, domestic conflict is affected positively and significantly. Conflict in neighbouring countries is found to be a source of domestic conflict in Pakistan, whereas regional trade as well as the type of government regime in Pakistan lowered the intensity of domestic conflict significantly, while globalized trade implies a reduced, though not significant, risk of domestic conflict.
Keywords: conflict, regional trade, socio-political instability
Procedia PDF Downloads 321
2752 Research on Level Adjusting Mechanism System of Large Space Environment Simulator
Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng
Abstract:
A space environment simulator is a device for spacecraft testing. The KM8 large space environment simulator, built in Tianjin Space City, is the largest as well as the most advanced space environment simulator in China. A large deviation in spacecraft level will lead to abnormal operation of the spacecraft's thermal control devices during the thermal vacuum test. To avoid thermal vacuum test failure, a level adjusting mechanism system was developed for the KM8 large space environment simulator as one of its most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model is established. Using data collected from level instruments and displacement sensors, stepping motors controlled by a PLC drive the simultaneous movement of four supporting legs. In addition, a PID algorithm is used to control the temperature of the supporting legs and level instruments, which operate for long periods in the cold, dark vacuum environment of the KM8 large space environment simulator during thermal vacuum tests. Based on the above methods, data acquisition and processing, analysis and calculation, real-time adjustment, and fault alarming of the level adjusting mechanism system are implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Debugging showed that the level adjusting mechanism system of the KM8 large space environment simulator can meet the thermal vacuum test requirements of new-generation spacecraft. The performance and technical indicators of the level adjusting mechanism system, which provides important support for the development of spacecraft in China, are ahead of similar equipment worldwide.
Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism
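As an illustration of the PID control mentioned above, the sketch below runs a discrete PID loop against a toy first-order thermal plant representing a heated supporting leg in a cold vacuum environment. The gains, plant constants, and setpoint are assumptions for the example, not the KM8 tuning.

```python
class PID:
    """Discrete PID controller; gains are illustrative, not the KM8 tuning."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order thermal plant for a supporting leg in a cold vacuum environment.
pid, temp, ambient = PID(kp=8.0, ki=0.5, kd=1.0, dt=1.0), -50.0, -180.0
for _ in range(300):
    heater_power = max(0.0, pid.update(20.0, temp))         # heating only, no active cooling
    temp += 0.01 * (ambient - temp) + 0.02 * heater_power   # crude heat balance per step
print(round(temp, 1))                                       # should settle near the 20 degC setpoint
```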
Procedia PDF Downloads 247
2751 Application of a Compact Wastewater Treatment Unit in a Rural Area
Authors: Mohamed El-Khateeb
Abstract:
Encompassing inventory, warehousing, and transportation management, logistics is a crucial predictor of firm performance. This has been extensively proven by extant literature in business and operations management. Logistics is also a fundamental determinant of a country's ability to access international markets. Available studies in international and transport economics have shown that limited transport infrastructure and underperforming transport services can severely affect international competitiveness. However, the evidence lacks the overall impact of logistics performance (encompassing all inventory, warehousing, and transport components) on global trade. In order to fill this knowledge gap, the paper uses a gravitational trade model with 155 countries from all geographical regions between 2007 and 2018. Data on logistics performance is obtained from the World Bank's Logistics Performance Index (LPI). First, the relationship between logistics performance and a country's total trade is estimated, followed by a breakdown by the economic sector. Then, the analysis is disaggregated according to the level of technological intensity of traded goods. Finally, after evaluating the intensive margin of trade, the relevance of logistics infrastructure and services for the extensive trade margin is assessed. Results suggest that: (i) improvements in both logistics infrastructure and services are associated with export growth; (ii) manufactured goods can significantly benefit from these improvements, especially when both exporting and importing countries increase their logistics performance; (iii) the quality of logistics infrastructure and services becomes more important as traded goods are technology-intensive; and (iv) improving the exporting country's logistics performance is essential in the intensive margin of trade while enhancing the importing country's logistics performance is more relevant in the extensive margin.
Keywords: low-cost, recycling, reuse, solid waste, wastewater treatment
Procedia PDF Downloads 197
2750 Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Network: Case of Kinshasa City
Authors: Christian Kapuku, Seung-Young Kho
Abstract:
An accurate representation of the transportation system serving a region is one of the important aspects of transportation modeling. Such a representation often requires developing an abstract model of the system elements, which in turn requires a significant amount of data, surveys, and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such an accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of open source data on the Internet, especially in mapping technologies, as well as advances in Geographic Information Systems, opportunities to tackle these issues have arisen. Therefore, the objective of this paper is to demonstrate such an application through a practical case: the development of the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct the digitized map of Transportation Analysis Zones from available scanned images. Centroids were then dynamically placed at the centers of activity using an activity density map. Next, the road network with its characteristics was built using OpenStreet data and other official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show a high degree of agreement between the built network and the satellite images, which can mostly be attributed to the use of open source data.
Keywords: geographic information system (GIS), network construction, transportation database, open source data
Procedia PDF Downloads 167
2749 The Effect of Corporate Governance on Financial Stability and Solvency Margin for Insurance Companies in Jordan
Authors: Ghadeer A.Al-Jabaree, Husam Aldeen Al-Khadash, M. Nassar
Abstract:
This study aimed to investigate the effect of a well-designed corporate governance system on the financial stability of insurance companies listed on the ASE. Further, this study provides a comprehensive model for evaluating and analyzing insurance companies' financial position and prospects, and for comparing the degree of application of corporate governance provisions among Jordanian insurance companies. To achieve the goals of the study, the whole population, consisting of 27 listed insurance companies, was examined through the variables of board of directors, audit committee, internal and external auditors, board and management ownership, and blockholders' identities. Statistical analysis was carried out in SPSS: descriptive statistics such as means and standard deviations were used to describe the variables, while the F-test and analysis of variance (ANOVA) were used to test the hypotheses of the study. The study revealed a significant effect of the corporate governance variables, except for local companies not listed on the ASE, on financial stability, controlling for other variables, especially the debt ratio (leverage). It also showed that concentration in motor third-party insurance does not have a significant effect on insurance companies' financial stability during the study period. Moreover, the study concludes that the global financial crisis affected the investment side of insurance companies, with an insignificant effect on the technical side. Finally, some recommendations were presented, such as enhancing the laws and regulations that support the appropriate application of corporate governance, activating transparency in financial statement disclosures, and focusing on supporting the companies' technical provisions rather than only the profit side.
Keywords: corporate governance, financial stability and solvency margin, insurance companies, Jordan
Procedia PDF Downloads 489
2748 Experimental Monitoring of the Parameters of the Ionosphere in the Local Area Using the Results of Multifrequency GNSS-Measurements
Authors: Andrey Kupriyanov
Abstract:
In recent years, much attention has been paid to the problems of ionospheric disturbances and their influence on the signals of global navigation satellite systems (GNSS) around the world. This is due to the increase in solar activity, the expansion of the scope of GNSS, the emergence of new satellite systems, the introduction of new frequencies, and many other factors. The influence of the Earth's ionosphere on the propagation of radio signals is an important factor in many applied fields of science and technology. The paper considers the application of the transionospheric sounding method, using measurements of GNSS signals, to determine the TEC distribution and the scintillations of the ionospheric layers. To calculate these parameters, the International Reference Ionosphere (IRI) model, refined over the local area, is used. Operational monitoring of ionospheric parameters is organized using several NovAtel GPStation6 base stations; this allows primary processing of GNSS measurement data, calculation of TEC, detection of scintillation events, modeling of the ionosphere using the obtained data, data storage, and ionospheric correction of the measurements. The study showed that the transionospheric sounding method can reconstruct the altitude distribution of electron concentration over different altitude ranges and provide operational information about the ionosphere, which is necessary for solving a number of practical problems in many application areas. In addition, the use of multi-frequency, multi-system GNSS equipment and special software makes it possible to achieve the required accuracy and volume of measurements.
Keywords: global navigation satellite systems (GNSS), GPstation6, international reference ionosphere (IRI), ionosphere, scintillations, total electron content (TEC)
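To illustrate the dual-frequency principle behind TEC estimation, the sketch below applies the standard geometry-free combination of L1/L2 code pseudoranges to obtain slant TEC in TEC units. The pseudorange values are illustrative only, receiver and satellite biases and multipath are ignored, and this is not the GPStation6 processing chain.

```python
# Geometry-free (dual-frequency) slant TEC estimate from code pseudoranges.
# Standard textbook relation; pseudorange values below are illustrative only.
F1, F2 = 1575.42e6, 1227.60e6          # GPS L1 and L2 frequencies [Hz]
K = 40.3                               # ionospheric constant [m^3/s^2]

def slant_tec_tecu(p1_m, p2_m):
    """Slant TEC [TECU] from the L1/L2 code pseudorange difference."""
    stec = (F1**2 * F2**2) / (K * (F1**2 - F2**2)) * (p2_m - p1_m)   # electrons/m^2
    return stec / 1e16                                               # 1 TECU = 1e16 el/m^2

print(round(slant_tec_tecu(22_000_000.0, 22_000_003.2), 1), "TECU")
```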
Procedia PDF Downloads 181
2747 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years. It has become more important to understand customers' needs in this competitive telecom market, especially the needs of those who are looking to switch service providers. Churn prediction is therefore now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish this. Churn prediction has become a very important machine learning classification topic in the telecommunications industry. Understanding the factors behind customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers' churn based on their past service usage history. To this end, the study makes use of feature selection, normalization, and feature engineering. The study then compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results. The results showed that Gradient Boosting with the feature selection technique performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score
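The sketch below illustrates the kind of pipeline described: normalization, feature selection, and a gradient boosting classifier evaluated with F1 and ROC-AUC. A synthetic imbalanced dataset stands in for the Orange churn data, and the pipeline settings are assumptions rather than the study's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, imbalanced stand-in for the Orange churn dataset.
X, y = make_classification(n_samples=3000, n_features=40, n_informative=10,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = make_pipeline(StandardScaler(),                      # normalization
                     SelectKBest(f_classif, k=15),          # feature selection
                     GradientBoostingClassifier(random_state=0))
pipe.fit(X_tr, y_tr)

pred = pipe.predict(X_te)
proba = pipe.predict_proba(X_te)[:, 1]
print("F1 :", round(f1_score(y_te, pred), 3))
print("AUC:", round(roc_auc_score(y_te, proba), 3))
```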
Procedia PDF Downloads 134
2746 A Network Optimization Study of Logistics for Enhancing Emergency Preparedness in Asia-Pacific
Authors: Giuseppe Timperio, Robert De Souza
Abstract:
The combination of factors such as erratic climate change, rampant urbanization of risk-exposed areas, and political and social instabilities is creating an alarming basis for further growth in the number and magnitude of humanitarian crises worldwide. Given the unique features of the humanitarian supply chain, such as the unpredictability of demand in space, time, and geography, the spike in the number of requests for relief items in the first days after a calamity, the uncertain state of logistics infrastructure, and large volumes of unsolicited low-priority items, a proactive approach towards the design of disaster response operations is needed to achieve high agility in the mobilization of emergency supplies in the immediate aftermath of an event. This paper is an attempt in that direction, and it provides decision makers with crucial strategic insights for a more effective network design for disaster response. Decision sciences and ICT are integrated to analyse the robustness and resilience of a prepositioned network of emergency strategic stockpiles for a real-life case concerning Indonesia, one of the most vulnerable countries in Asia-Pacific, with the model being built upon a rich set of quantitative data. To this end, a network optimization approach was implemented, with several what-if scenarios being carefully developed and tested. The findings of this study can support decision makers facing challenges related to disaster relief chain resilience, particularly the optimal configuration of supply chain facilities and optimal flows across the nodes, while considering the network structure from an end-to-end in-country distribution perspective.
Keywords: disaster preparedness, humanitarian logistics, network optimization, resilience
Procedia PDF Downloads 176
2745 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as the decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study for a range of closed-loop gain values, and the corresponding impact angle error and miss distance values are generated. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is presented in this paper.
Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection
Procedia PDF Downloads 427
2744 Polymeric Microspheres for Bone Tissue Engineering
Authors: Yamina Boukari, Nashiru Billa, Andrew Morris, Stephen Doughty, Kevin Shakesheff
Abstract:
Poly(lactic-co-glycolic) acid (PLGA) is a synthetic polymer that can be used in bone tissue engineering with the aim of creating a scaffold to support the growth of cells. The formation of microspheres from this polymer is an attractive strategy that would allow for the development of an injectable system, hence avoiding invasive surgical procedures. The aim of this study was to develop a microsphere delivery system for use as an injectable scaffold in bone tissue engineering and to evaluate the effect of various formulation parameters on its properties. Porous and lysozyme-containing PLGA microspheres were prepared using the double emulsion solvent evaporation method from PLGA of various molecular weights (MW). Scaffolds were formed by sintering to contain 1-3 mg of lysozyme per gram of scaffold. The mechanical and physical properties of the scaffolds were assessed, along with the release of lysozyme, which was used as a model protein. The MW of PLGA was found to influence microsphere size during fabrication, with increased MW leading to an increased microsphere diameter. An inversely proportional relationship was displayed between PLGA MW and the mechanical strength of the formed scaffolds across loadings for the low, intermediate, and high MW, respectively. Lysozyme release from both microspheres and formed scaffolds showed an initial burst release phase, with both the microspheres and scaffolds fabricated using high MW PLGA showing the lowest protein release. Following the initial burst phase, the profiles for each MW followed a similar slow release over 30 days. Overall, the results of this study demonstrate that lysozyme can be successfully incorporated into porous PLGA scaffolds and released over 30 days in vitro, and that varying the MW of the PLGA can be used as a method of altering the physical properties of the resulting scaffolds.
Keywords: bone, microspheres, PLGA, tissue engineering
Procedia PDF Downloads 425