Search results for: certificateless aggregate signature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 731

191 Effect of Permeability Reducing Admixture Utilization on Sulfate Resistance of Self-Consolidating Concrete Mixture

Authors: Ali Mardani-Aghabaglou, Zia Ahmad Faqiri, Semsi Yazici

Abstract:

In this study, the effect of permeability reducing admixture (PRA) utilization on the fresh properties, compressive strength, and sulfate resistance of self-consolidating concrete (SCC) was investigated. For this aim, two different commercial PRAs were used at two utilization ratios, 0.1% and 0.2% by weight. CEM I 42.5 R type cement and crushed limestone aggregate having a Dmax of 15 mm were used to prepare the SCC mixtures. In all mixtures, the cement content, water/cement ratio, and flow value were kept constant at 450 kg, 0.40, and 65 ± 2 cm, respectively. In order to obtain the desired flow value, a polycarboxylate ether-based high range water reducing admixture was used at different contents. The T50 flow time, flow value, L-box, and U-funnel results of the SCC mixtures were measured as fresh properties. The 1, 3, 7, and 28-day compressive strengths of the SCC mixtures were obtained on 150 mm cubic specimens. To investigate the sulfate resistance of the SCC mixtures, 75x75x285 mm prismatic specimens were produced. After 28 days of water curing, the specimens were immersed in a 5% sodium sulfate solution for 210 days. The length change of the specimens was measured at 5-day intervals up to 210 days. According to the test results, all fresh properties of the SCC mixtures were in accordance with the European Federation of Specialist Construction Chemicals and Concrete Systems (EFNARC) criteria for SCC mixtures. The utilization of PRA had no significant effect on the compressive strength and fresh properties of the SCC mixtures. Regardless of PRA type, the sulfate resistance of the SCC mixtures increased with the addition of PRA. The length changes of the SCC mixtures containing 1% and 2% PRA were measured as 8% and 14% less than that of the control mixture containing no PRA, respectively.

Keywords: permeability reducing admixture, self-consolidating concrete, fresh properties, sulfate resistance

Procedia PDF Downloads 138
190 Biogeography Based CO2 and Cost Optimization of RC Cantilever Retaining Walls

Authors: Ibrahim Aydogdu, Alper Akin

Abstract:

In this study, cost and CO2 emission minimization of RC cantilever retaining wall designs has been performed using the Biogeography Based Optimization (BBO) algorithm. This has been achieved by developing computer programs utilizing the BBO algorithm which minimize the cost and the CO2 emission of RC retaining walls. The objective functions of the optimization problem are defined as the cost, the CO2 emission, and a weighted aggregate of the cost and CO2 functions of the RC retaining walls. In the formulation of the optimum design problem, the height and thickness of the stem, the length of the toe projection, the thickness of the stem at base level, the length and thickness of the base, the depth and thickness of the key, the distance from the toe to the key, and the number and diameter of the reinforcement bars are treated as design variables. In the formulation of the optimization problem, flexural and shear strength constraints and minimum/maximum limitations for the reinforcement bar areas are derived from the American Concrete Institute (ACI 318-14) design code. Moreover, the development length conditions for suitable detailing of reinforcement are treated as a constraint. The obtained optimum designs must satisfy the factors of safety for failure modes (overturning, sliding, and bearing), strength, serviceability, and other required limitations to attain practically acceptable shapes. To demonstrate the efficiency and robustness of the presented BBO algorithm, an optimum design example for retaining walls is presented and the results are compared to previously obtained results available in the literature.
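
A minimal sketch of the weighted aggregate objective described above is given below; the design-vector layout, the unit prices, the emission factors, and the penalty scheme are hypothetical placeholders for illustration, not the values used in the study.

```python
import numpy as np

def weighted_aggregate_objective(x, w_cost=0.5, w_co2=0.5):
    """Hypothetical weighted-sum objective for one BBO habitat (design vector x).

    x = [stem_height, stem_thickness, toe_length, base_length] (assumed layout).
    Unit prices and emission factors below are illustrative placeholders only.
    """
    concrete_volume = x[0] * x[1] + x[3] * 0.5            # simplified volume estimate (m3)
    steel_weight = 0.02 * concrete_volume * 7850           # assumed reinforcement ratio (kg)
    cost = 100.0 * concrete_volume + 1.2 * steel_weight    # $/m3 and $/kg placeholders
    co2 = 250.0 * concrete_volume + 1.9 * steel_weight     # kgCO2 factors, placeholders

    # Normalise the two terms, combine them, and penalise violated constraints
    penalty = 0.0
    if x[1] < 0.2:                                          # e.g. minimum stem thickness (assumed)
        penalty += 1e6
    return w_cost * cost / 1e4 + w_co2 * co2 / 1e4 + penalty

# Example habitat (design vector) evaluated once
print(weighted_aggregate_objective(np.array([6.0, 0.4, 1.5, 4.0])))
```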

Keywords: bio geography, meta-heuristic search, optimization, retaining wall

Procedia PDF Downloads 379
189 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences in tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints specifically measuring a mechanism is relatively straightforward, the interpretation of big data is more complex and would benefit from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow benchmarking methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest does not have a gold standard, but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases, and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 262
188 Islamophobia, Years After 9/11: An Assessment of the American Media

Authors: Nasa'i Muhammad Gwadabe

Abstract:

This study seeks to find the extent to which the old Islamophobic prejudice was tilted towards a more negative direction in the United States following the 9/11 terrorist attacks. It is hypothesized that the 9/11 attacks in the United States reshaped the old Islamophobic prejudice through the reinforcement of a strong social identity construction of Muslims as an "out-group". The "social identity" and "discourse representation" theories are used as the framework for analysis. To test the hypothesis, two categories were created: the prejudice (out-group) category and the tolerance (in-group) category. The prejudice (out-group) against Muslims category was coded to include six attributes (terrorist, threat, women's rights violation, undemocratic, backward, and intolerant), while the tolerance (in-group) for Muslims category was also coded to include six attributes (peaceful, civilized, educated, partners, trustworthy, and honest). Data were generated from the archives of three American newspapers, the Los Angeles Times, the New York Times, and USA Today, using specific search terms and a specific date range, from 9/11/1996 to 9/11/2006, that is, five years before and five years after 9/11. An aggregate of 20,595 articles was generated from the search of the three newspapers throughout the search periods. Conclusively, for both the pre- and post-9/11 periods, the articles generated under the category of prejudice (out-group) against Muslims showed a higher frequency than those under the category of tolerance (in-group) for them. Finally, the comparison between the pre- and post-9/11 periods showed that the increased prejudice (out-group) against Muslims was most strongly driven by labeling Muslims as terrorists, which showed a sharp increase from the pre- to the post-9/11 period.
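
A minimal sketch of the frequency comparison described above follows; the article counts and the attribute subsets are invented placeholders used only to show how per-category, per-period tallies and their relative frequencies could be computed.

```python
# Hypothetical article counts per coded attribute; the real counts came from the
# archive searches of the three newspapers described in the abstract.
counts = {
    "pre_9_11":  {"terrorist": 800, "threat": 300, "peaceful": 200, "civilized": 150},
    "post_9_11": {"terrorist": 2600, "threat": 700, "peaceful": 350, "civilized": 200},
}

prejudice = {"terrorist", "threat"}    # out-group attributes (subset, for illustration)
tolerance = {"peaceful", "civilized"}  # in-group attributes (subset, for illustration)

for period, attrs in counts.items():
    total = sum(attrs.values())
    out_group = sum(v for k, v in attrs.items() if k in prejudice)
    in_group = sum(v for k, v in attrs.items() if k in tolerance)
    print(period,
          f"prejudice share = {out_group / total:.2%}",
          f"tolerance share = {in_group / total:.2%}")
```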

Keywords: in-group, Islam, Islamophobia, Muslims, out-group, prejudice, terrorism, the 9/11 and tolerance

Procedia PDF Downloads 281
187 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing involves SQL queries with aggregation, join, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A proper metric was defined to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on executing three kinds of typical SQL query tasks. Tests were conducted with respect to factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
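
The empirical performance formula itself is not given in the abstract; the sketch below only illustrates how such a formula could be fitted by least squares from benchmark runs, with invented feature names (CPU benchmark score, memory size, number of physical hosts) and invented data.

```python
import numpy as np

# Hypothetical benchmark records: [cpu_benchmark, memory_GB, n_hosts] -> query_time_s
X = np.array([
    [120, 64, 4], [120, 128, 4], [150, 64, 6],
    [150, 128, 6], [180, 256, 8], [180, 128, 8],
], dtype=float)
y = np.array([95.0, 80.0, 70.0, 60.0, 42.0, 48.0])   # measured SQL task times (invented)

# Fit a linear empirical formula  t ≈ b0 + b1*cpu + b2*mem + b3*hosts
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", coef)

# Use the fitted formula to compare candidate cluster configurations at a fixed fund
candidates = np.array([[150, 256, 6], [180, 64, 10]], dtype=float)
pred = np.column_stack([np.ones(len(candidates)), candidates]) @ coef
print("predicted query times:", pred)
```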

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks

Procedia PDF Downloads 212
186 Organic Carbon Pools Fractionation of Lacustrine Sediment with a Stepwise Chemical Procedure

Authors: Xiaoqing Liu, Kurt Friese, Karsten Rinke

Abstract:

Lacustrine sediments archive rich paleoenvironmental information about a lake and its surrounding environment. Additionally, modern sediment is used as an effective medium for lake monitoring. Organic carbon in sediment is a heterogeneous mixture with varying turnover times and qualities, which results from the different biogeochemical processes involved in the deposition of organic material. Therefore, the isolation of different carbon pools is important for research on lacustrine conditions in a lake. However, the numerous available fractionation procedures can hardly yield homogeneous carbon pools in terms of stability and age. In this work, a multi-step fractionation protocol that treated sediment with hot water, HCl, H2O2, and Na2S2O8 in sequence was adopted; the treated sediment from each step was analyzed for isotopic and structural composition with an isotope ratio mass spectrometer coupled with an element analyzer (IRMS-EA) and solid-state 13C nuclear magnetic resonance (NMR), respectively. The sequential extractions with hot water, HCl, and H2O2 yielded a more homogeneous and C3 plant-originating OC fraction, which was characterized by an atomic C/N ratio shift from 12.0 to 20.8, and 13C and 15N isotopic signatures that were 0.9‰ and 1.9‰ more depleted than the original bulk sediment, respectively. Additionally, the H2O2-resistant residue was dominated by stable components, such as lignins, waxes, cutans, tannins, steroids, aliphatic proteins, and complex carbohydrates. 6M HCl in the acid hydrolysis step was much more effective than 1M HCl at isolating a sedimentary OC fraction with a higher degree of homogeneity. Owing to its extremely high removal rate of organic matter, the Na2S2O8 oxidation step is only suggested if the isolation of the most refractory OC pool is mandatory. We conclude that this multi-step chemical fractionation procedure is effective at isolating more homogeneous OC pools in terms of stability and functional structure, and it can be used as a promising method for OC pool fractionation of sediment or soil in future lake research.

Keywords: 13C-CPMAS-NMR, 13C signature, lake sediment, OC fractionation

Procedia PDF Downloads 279
185 Hydraulic Conductivity Prediction of Cement Stabilized Pavement Base Incorporating Recycled Plastics and Recycled Aggregates

Authors: Md. Shams Razi Shopnil, Tanvir Imtiaz, Sabrina Mahjabin, Md. Sahadat Hossain

Abstract:

Saturated hydraulic conductivity is one of the most significant attributes of a pavement base course. Determination of hydraulic conductivity is a routine procedure for regular aggregate base courses. However, in many cases, a cement-stabilized base course is used with compromised drainage ability. The traditional hydraulic conductivity testing procedure is a readily available option, but it has two consequential drawbacks, i.e., the time required for the specimen to become saturated and the need to extrude the sample after completion of the laboratory test. To overcome these complications, this study aims at formulating an empirical approach to predicting hydraulic conductivity based on Unconfined Compressive Strength test results. To do so, this study comprises two separate experiments (the constant head permeability test and the unconfined compressive strength test) conducted concurrently on specimens having the same physical characteristics. Data obtained from the two experiments were then used to devise a correlation between hydraulic conductivity and unconfined compressive strength. This correlation, in the form of a polynomial equation, helps to predict the hydraulic conductivity of cement-treated pavement base courses, bypassing the cumbersome process of the traditional permeability test and the less commonly used horizontal permeability tests. The correlation was further corroborated by a different set of data, and it has been found that the derived polynomial equation is a viable tool for predicting hydraulic conductivity.
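
The study's polynomial correlation itself is not reproduced in the abstract; the sketch below, with invented paired measurements and an assumed functional form, shows how a polynomial relating unconfined compressive strength (UCS) to saturated hydraulic conductivity could be fitted and then used for prediction.

```python
import numpy as np

# Hypothetical paired data: UCS (MPa) from UCS tests, k (cm/s) from constant-head tests
ucs = np.array([1.5, 2.0, 2.8, 3.5, 4.2, 5.0])
k = np.array([5e-4, 3e-4, 1.5e-4, 8e-5, 4e-5, 2e-5])

# Fit a second-order polynomial to log10(k) vs UCS (form assumed for illustration)
coeffs = np.polyfit(ucs, np.log10(k), deg=2)
predict_k = lambda u: 10 ** np.polyval(coeffs, u)

print("fitted coefficients:", coeffs)
print("predicted k at UCS = 3.0 MPa:", predict_k(3.0), "cm/s")
```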

Keywords: hydraulic conductivity, unconfined compressive strength, recycled plastics, recycled concrete aggregates

Procedia PDF Downloads 66
184 3G or 4G: A Predilection for Millennial Generation of Indian Society

Authors: Rishi Prajapati

Abstract:

3G is the abbreviation for the third generation of wireless mobile telecommunication technologies. 3G finds application in wireless voice telephony, mobile internet access, fixed wireless internet access, video calls, and mobile TV. It also provides mobile broadband access to smartphones and mobile modems in laptops and computers. The first 3G networks were introduced in 1998, followed by 4G networks in 2008. 4G is the abbreviation for the fourth generation of wireless mobile telecommunication technologies and is termed the advanced form of 3G. 4G was first introduced in South Korea in 2007. Many studies have depicted the diversity and similarity between the third and the fourth generation of wireless mobile telecommunication technology, whereas this study focuses on analyzing the preference between 3G and 4G expressed by an elite group of Indian society known as adolescents or the Millennial Generation, aged 18 to 25 years. The Millennial Generation was chosen for this study as they have the easiest access to the latest technology. A sample size of 200 adolescents was selected, and a structured survey containing several closed-ended as well as open-ended questions was carried out to aggregate the results of this study. It was ensured that the effect of environmental factors on the subjects was as minimal as possible. The data analysis was based on primary data collection, making this a quantitative study. The rationale behind this research is to give a brief idea of how 3G and 4G are accepted by the Millennial Generation in India. The findings of this research would materialize a framework which depicts whether the Millennial Generation would prefer 4G over 3G or vice versa.

Keywords: fourth generation, wireless telecommunication technology, Indian society, millennial generation, market research, third generation

Procedia PDF Downloads 241
183 Households’ Willingness to Pay for Watershed Management Practices in Lake Hawassa Watershed, Southern Ethiopia

Authors: Mulugeta Fola, Mengistu Ketema, Kumilachew Alamerie

Abstract:

Watersheds provide vast economic benefits within and beyond the management area of interest. But most watersheds in Ethiopia are increasingly facing the threat of degradation due to both natural and man-made causes. To reverse these problems, community participation in sustainable management programs is among the necessary measures. Hence, this study assessed households' willingness to pay for watershed management practices through a contingent valuation approach. A double bounded dichotomous choice format with open-ended follow-up was used to elicit the households' willingness to pay. Based on data collected from 275 randomly selected households, descriptive statistics results indicated that most households (79.64%) were willing to pay for watershed management practices. A bivariate probit model was employed to identify the determinants of households' willingness to pay and to estimate mean willingness to pay. Its results show that age, gender, income, livestock size, perception of watershed degradation, social position, and offered bids were important variables affecting willingness to pay for watershed management practices. The study also revealed that the mean willingness to pay for watershed management practices was 58.41 Birr and 47.27 Birr per year from the double bounded and open-ended formats, respectively. The study further revealed that the aggregate welfare gains from watershed management practices were 931,581.09 Birr and 753,909.23 Birr per year from the double bounded dichotomous choice and open-ended formats, respectively. Therefore, policymakers should have households pay for the services of watershed management practices in the study area.
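
As a simple check on the arithmetic linking the mean and aggregate figures above, the sketch below scales each reported mean WTP by the number of paying households implied by the reported aggregates (about 15,949); that household count is back-calculated here for illustration and is not stated in the abstract.

```python
# Mean WTP per household per year (from the abstract) and the implied household count
mean_wtp = {"double_bounded": 58.41, "open_ended": 47.27}   # Birr/year
n_paying_households = 15_949                                # back-calculated, not stated in the abstract

for fmt, wtp in mean_wtp.items():
    aggregate_welfare = wtp * n_paying_households
    print(f"{fmt}: aggregate welfare ≈ {aggregate_welfare:,.2f} Birr/year")
```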

Keywords: bivariate probit model, contingent valuation, watershed management practices, willingness to pay

Procedia PDF Downloads 198
182 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique

Authors: Mohammad A. Khasawneh

Abstract:

Asphalt concrete pavements gradually lose their skid resistance, causing safety problems, especially under wet conditions and at high driving speeds. In order to replicate the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices were developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure. The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices, in an attempt to shorten and simplify the polishing procedure in the lab. Promising findings showed the possibility of using image analysis in lieu of friction and texture measurements, which are labor-intensive and variable in nature. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates produced solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Number (BPN), Polish Value (PV), and Mean Texture Depth (MTD) values.
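
A minimal sketch of this kind of image-analysis measure is shown below: it thresholds a grayscale surface image to estimate the exposed-aggregate area fraction and then correlates that fraction with friction readings. The threshold, the synthetic images, and the BPN values are illustrative assumptions, not the study's data or method.

```python
import numpy as np

def exposed_aggregate_fraction(gray_image, threshold=128):
    """Fraction of pixels brighter than a threshold, taken as exposed aggregate area."""
    return np.mean(gray_image > threshold)

rng = np.random.default_rng(0)
# Synthetic 8-bit surface images standing in for specimen photographs
images = [rng.integers(0, 256, size=(200, 200)) for _ in range(5)]
area_fractions = np.array([exposed_aggregate_fraction(img) for img in images])

# Hypothetical British Pendulum Numbers measured on the same specimens
bpn = np.array([55.0, 52.0, 49.5, 51.0, 53.5])

# Pearson correlation between the image-based area fraction and BPN
r = np.corrcoef(area_fractions, bpn)[0, 1]
print("exposed-area fractions:", np.round(area_fractions, 3))
print("Pearson r with BPN:", round(r, 3))
```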

Keywords: friction, image analysis, polishing, statistical analysis, texture

Procedia PDF Downloads 288
181 Gene Expression Signature-Based Chemical Genomic to Identify Potential Therapeutic Compounds for Colorectal Cancer

Authors: Yen-Hao Su, Wan-Chun Tang, Ya-Wen Cheng, Peik Sia, Chi-Chen Huang, Yi-Chao Lee, Hsin-Yi Jiang, Ming-Heng Wu, I-Lu Lai, Jun-Wei Lee, Kuen-Haur Lee

Abstract:

There is a wide range of drugs and combinations under investigation and/or approved over the last decade to treat colorectal cancer (CRC), but the 5-year survival rate remains poor at stages II–IV. Therefore, new, more efficient drugs still need to be developed that will hopefully be included in first-line therapy or overcome resistance when it appears, as part of second- or third-line treatments in the near future. In this study, we revealed that heat shock protein 90 (Hsp90) inhibitors have high therapeutic potential in CRC according to a combinative analysis of NCBI's Gene Expression Omnibus (GEO) repository and the chemical genomic database of the Connectivity Map (CMap). We found that the second-generation Hsp90 inhibitor NVP-AUY922 significantly downregulated the activities of a broad spectrum of kinases involved in regulating cell growth arrest and death of NVP-AUY922-sensitive CRC cells. To overcome NVP-AUY922-induced upregulation of survivin expression, which causes drug insensitivity, we found that combining berberine (BBR), a herbal medicine with potency in inhibiting survivin expression, with NVP-AUY922 resulted in synergistic antiproliferative effects for NVP-AUY922-sensitive and -insensitive CRC cells. Furthermore, we demonstrated that treatment of NVP-AUY922-insensitive CRC cells with the combination of NVP-AUY922 and BBR caused cell growth arrest through inhibiting CDK4 expression and induction of microRNA-296-5p (miR-296-5p)-mediated suppression of the Pin1–β-catenin–cyclin D1 signaling pathway. Finally, we found that the expression level of Hsp90 in tumor tissues of CRC was positively correlated with CDK4 and Pin1 expression levels. Taken together, these results indicate that combined NVP-AUY922 and BBR therapy can inhibit multiple oncogenic signaling pathways of CRC.

Keywords: berberine, colorectal cancer, connectivity map, heat shock protein 90 inhibitor

Procedia PDF Downloads 289
180 Functionalized Titanium Dioxide Nanoparticles for Targeting and Disrupting Amyloid Fibrils

Authors: Elad Arad, Raz Jelinek, Hanna Rapaport

Abstract:

Amyloidoses are a family of diseases characterized by abnormal protein folding that leads to aggregation. The misfolded proteins accumulate to form fibrillar plaques which are implicated in the pathogenesis of Alzheimer's disease, prion diseases, type II diabetes, and other diseases. To the best of our knowledge, despite extensive research efforts devoted to inhibiting plaque aggregation, there is as yet no cure for this phenomenon. Titanium and its alloys attract growing interest for biomedical applications. A variety of surface modifications enable porous, adhesive, bioactive coatings on their surfaces. Titanium oxides (titania) are also being developed for photothermal and photodynamic treatments. Inspired by this, we set out to explore the effect of functionalized titania nanoparticles, in combination with external stimuli, as potential photothermal ablating agents against amyloids. Titania nanoparticles were coated with bi-functional catechol derivatives (dihydroxy-phenylalanine propanoic acid, denoted DPA) to gain targeting properties. In conjunction with UV radiation, these nanoparticles may selectively destroy the vicinity of their target. The DPA-coated 5 nm titania nanoparticles were further conjugated to the amyloid-targeting dye Congo Red (CR). These titania-DPA-CR nanoparticles were found to target mature amyloid fibrils of amyloid-β (Aβ 1-42 a.a.). Moreover, irradiation of the peptides in the presence of the modified nanoparticles decreased the aggregate content and oligomer fraction. This work provides insights into the use of modified titania nanoparticles for amyloid plaque targeting and photothermal destruction. It may shed light on future modifications and functionalization of titania nanoparticles for different applications.

Keywords: titanium dioxide, amyloids, photothermal treatment, catechol, Congo-red

Procedia PDF Downloads 126
179 Co-Alignment of Comfort and Energy Saving Objectives for U.S. Office Buildings and Restaurants

Authors: Lourdes Gutierrez, Eric Williams

Abstract:

Post-occupancy research shows that only 11% of commercial buildings meet the ASHRAE thermal comfort standard. Many buildings are too warm in winter and/or too cool in summer, wasting energy and not providing comfort. In this paper, potential energy savings in U.S. offices and restaurants are estimated for the case in which thermostat settings are calculated according to the updated ASHRAE 55-2013 comfort model, which accounts for outdoor temperature and clothing choice, for different climate zones. eQUEST building models are calibrated to reproduce aggregate energy consumption as reported in the U.S. Commercial Building Energy Consumption Survey. Changes in energy consumption due to the new settings are analyzed for 14 cities in different climate zones, and then the results are extrapolated to estimate potential national savings. It is found that, depending on the climate zone, each degree increase in the summer setting saves 0.6 to 1.0% of total building electricity consumption. Each degree the winter setting is lowered saves 1.2% to 8.7% of total building natural gas consumption. With the new thermostat settings, national savings are 2.5% of the total consumed in all office buildings and restaurants, summing to national savings of 69.6 million GJ annually, comparable to total U.S. solar PV generation in 2015. The goals of improved comfort and energy/economic savings are thus co-aligned, raising the importance of thermostat management as an energy efficiency strategy.
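
The extrapolation logic can be illustrated with a short calculation; the per-degree savings percentages come from the abstract, while the baseline consumption figures, the temperature unit, and the number of degrees adjusted are placeholders (the actual stock-level baselines come from the CBECS calibration described above).

```python
# Per-degree savings reported in the abstract (fractions of building consumption)
elec_saving_per_deg_summer = (0.006, 0.010)   # 0.6%–1.0% of electricity per +1 degree (unit assumed)
gas_saving_per_deg_winter = (0.012, 0.087)    # 1.2%–8.7% of natural gas per -1 degree

# Hypothetical national baselines for offices and restaurants (placeholders, in GJ/year)
baseline_electricity_GJ = 1.8e9
baseline_gas_GJ = 1.0e9
degrees_adjusted = 2   # assumed setpoint change in each season

low = degrees_adjusted * (elec_saving_per_deg_summer[0] * baseline_electricity_GJ
                          + gas_saving_per_deg_winter[0] * baseline_gas_GJ)
high = degrees_adjusted * (elec_saving_per_deg_summer[1] * baseline_electricity_GJ
                           + gas_saving_per_deg_winter[1] * baseline_gas_GJ)
print(f"estimated national savings: {low/1e6:.1f}–{high/1e6:.1f} million GJ/year")
```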

Keywords: energy savings quantifications, commercial building stocks, dynamic clothing insulation model, operation-focused interventions, energy management, thermal comfort, thermostat settings

Procedia PDF Downloads 285
178 Multi-Stream Graph Attention Network for Recommendation with Knowledge Graph

Authors: Zhifei Hu, Feng Xia

Abstract:

In recent years, graph neural networks have been widely used in knowledge graph-based recommendation. Existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be an efficient way of extracting information. In order to better supply useful entity information from the knowledge graph for the current recommendation task, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from the perspectives of both users and items. Specifically, we use the attention mechanism from the user's perspective to distill the neighborhood node information of the predicted item in the knowledge graph, to enhance the user's information on items, and to generate the feature representation of the predicted item. Since a user's historically clicked items reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relations and the similarity between the items to be predicted and entities, aggregates the neighborhood entity information of the user's historically clicked items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: MovieLens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with state-of-the-art models, our proposed model better captures the entity information in the knowledge graph, which demonstrates the validity and accuracy of the model.
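
The core operation, attention-weighted aggregation of a predicted item's knowledge-graph neighborhood conditioned on the user, can be sketched with plain numpy as below; the embedding dimension, the dot-product scoring function, and the random vectors are illustrative assumptions, not the MSGAT architecture itself.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(42)
d = 16                                  # embedding dimension (assumed)
user = rng.normal(size=d)               # user embedding
neighbors = rng.normal(size=(5, d))     # entity embeddings of the item's KG neighborhood
relations = rng.normal(size=(5, d))     # embeddings of the connecting relations

# Attention score: user preference for each (relation, entity) pair (simple dot-product form)
scores = (neighbors + relations) @ user
weights = softmax(scores)

# Aggregate neighborhood information into an enhanced item representation
item = rng.normal(size=d)
item_enhanced = item + weights @ neighbors
print("attention weights:", np.round(weights, 3))
print("enhanced item vector shape:", item_enhanced.shape)
```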

Keywords: graph attention network, knowledge graph, recommendation, information propagation

Procedia PDF Downloads 92
177 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signal under various operating conditions. The extracted features were then given as input to the algorithms for fault classification. Supervised machine learning algorithms such as Support Vector Machines (SVM), along with Deep Feed Forward Neural Network (DFFNN) and Deep Belief Network (DBN) algorithms, are used for fault classification. A fusion of the DBN and DFFNN classifiers was designed to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the better classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
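
A minimal sketch of the statistical feature extraction step is given below; the feature list (RMS, kurtosis, etc.), the synthetic signal, and the sampling rate are assumptions for illustration, as the abstract does not list the exact features used.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def vibration_features(signal):
    """Common time-domain statistical features used for gearbox fault classification."""
    rms = np.sqrt(np.mean(signal ** 2))
    return {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "rms": float(rms),
        "peak": float(np.max(np.abs(signal))),
        "crest_factor": float(np.max(np.abs(signal)) / rms),
        "kurtosis": float(kurtosis(signal)),
        "skewness": float(skew(signal)),
    }

# Synthetic vibration record standing in for a DAQ acquisition (tone + sparse impulses + noise)
fs = 10_000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
signal = (np.sin(2 * np.pi * 300 * t)
          + 0.5 * (np.random.rand(len(t)) < 0.002)
          + 0.1 * np.random.randn(len(t)))
print(vibration_features(signal))
```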

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 91
176 Soil Degradation Processes in Marginal Uplands of Samar Island, Philippines

Authors: Dernie Taganna Olguera

Abstract:

Marginal uplands are fragile ecosystems in the tropics that need to be evaluated for sustainable utilization and land degradation mitigation. Thus, this study evaluated the dominant soil degradation processes in selected marginal uplands of Samar Island, Philippines; evaluated the important factors influencing soil degradation in the selected sites; and identified the indicators of soil degradation in marginal uplands of the tropical landscape of Samar Island, Philippines. Two sites were selected (Sta. Rita, Samar and Salcedo, Eastern Samar), representing the western and eastern sides of Samar Island, respectively. These marginal uplands represent different agro-climatic zones suitable for the study. Soil erosion is the major soil degradation process in the marginal uplands studied. It resulted not only in considerable soil losses but in nutrient losses as well. Soil erosion varied with vegetation cover and site. It was much higher under sweet potato, cassava, and gabi crops than under natural vegetation. In addition, soil erosion was higher in Salcedo than in Sta. Rita, which is related to climatic and soil characteristics. Bulk density, porosity, aggregate stability, soil pH, organic matter, and carbon dioxide evolution are good indicators of soil degradation. The dominance of Saccharum spontaneum Linn., Imperata cylindrica Linn., Melastoma malabathricum Linn., and Psidium guajava Linn. indicated degraded soil conditions. Farmers' practices, particularly clean culture and organic fertilizer application, influenced the degree of soil degradation in the marginal uplands of Samar Island, Philippines.

Keywords: soil degradation, soil erosion, marginal uplands, Samar island, Philippines

Procedia PDF Downloads 389
175 HIV and STIs among Female Sex Workers in Ba Ria – Vung Tau, Vietnam

Authors: Tri Nguyen

Abstract:

Background: Female sex workers (FSWs) are at heightened risk of sexually transmitted infections (STIs) and HIV. The purpose of the research is to study FSWs' socio-demographic and behavioral characteristics and their HIV/STI prevalence in Ba Ria – Vung Tau, Vietnam. Methods: A cross-sectional survey of 420 direct and indirect FSWs was conducted between January and May 2014 in 2 cities and 6 districts in Ba Ria – Vung Tau. FSWs were interviewed using a structured questionnaire, and biological samples were tested for syphilis, gonorrhoea, chlamydia, and HIV. A database was constructed and entered using Epidata 3.1 software and analyzed using the SPSS 20.0 statistical package. Results: A total of 420 FSWs participated in the survey, with a median age of 27 years within a range of 18–43 years. Education levels were low, with only 33.1% completing all seven major indicators. The median age of first sexual intercourse of the FSWs was 18 years. Most (88.8%) of the FSWs had had oral sex. A high proportion (82.4%) of the FSWs had consumed alcohol, and among them, 51.2% had had sex while drunk. Of the 177 FSWs who reported having had sex while drunk, almost half of the direct FSWs had had unprotected sex. Among the 420 FSWs who participated in the survey, 2.6% (11/420) were found to be HIV positive. There was a difference in HIV prevalence between direct and indirect FSWs (4.8% and 1.2%, p = 0.022). Nearly 7 percent of the FSWs were infected with syphilis. Similarly, 7.9% of the FSWs were found with gonorrhoea, and roughly twice as many (16.4%) were suffering from chlamydia. In aggregate, the percentage of FSWs with any one of these STIs (gonorrhoea or chlamydia) was 21.4%. Conclusions: FSWs in Ba Ria – Vung Tau, Vietnam are highly vulnerable to STIs, and the existing STI treatment and intervention programs need to be enhanced in order to reduce the risk of HIV infection. HIV/STI health education and a 100% condom use program should be implemented, and the sex industry should be regulated by law.

Keywords: sex workers, female sex workers, HIV, STIs

Procedia PDF Downloads 238
174 Effect of Iron Ore Tailings on the Properties of Fly-ash Cement Concrete

Authors: Sikiru F. Oritola, Abd Latif Saleh, Abd Rahman Mohd Sam, Rozana Zakaria, Mushairry Mustaffar

Abstract:

The strength of concrete varies with the types of materials used; improper selection of components can also result in different strengths. Each material brings a different aspect to the concrete. This work studied the effect of using iron ore tailings (IOTs) as a partial replacement for sand on some properties of concrete using fly ash cement as the binder. The sieve analysis and some other basic properties of the materials used in producing the concrete samples were first determined. Two brands of fly ash cement were studied. For each brand of fly ash cement, five different types of concrete samples, denoted HCT0, HCT10, HCT20, HCT30, and HCT40 for the first brand and PCT0, PCT10, PCT20, PCT30, and PCT40 for the second brand, were produced. The percentage of tailings as a partial replacement for sand in the samples was varied from 0% to 40% at 10% intervals. For each concrete sample, the average of three cube, three cylinder, and three prism specimen results was used for the determination of the compressive strength, splitting tensile strength, and flexural strength, respectively. A water/cement ratio of 0.54 with a fly ash cement content of 463 kg/m3 was used in preparing the fresh concrete. The slump values for the HCT brand concrete ranged from 152 mm to 75 mm, while those of the PCT brand ranged from 149 mm to 70 mm. The concrete sample PCT30 recorded the highest 28-day compressive strength of 28.12 N/mm2, the highest splitting tensile strength of 2.99 N/mm2, as well as the highest flexural strength of 4.99 N/mm2. The texture of the iron ore tailings is rough and angular and was therefore able to improve the strength of the fly ash cement concrete. Also, due to the fineness of the IOTs, more voids in the concrete can be filled, but this reaches an optimum at the 30% replacement level, hence the drop in strength at 40% replacement.

Keywords: concrete strength, fine aggregate, fly ash cement, iron ore tailings

Procedia PDF Downloads 646
173 Microstructures and Chemical Compositions of Quarry Dust As Alternative Building Material in Malaysia

Authors: Abdul Murad Zainal Abidin, Tuan Suhaimi Salleh, Siti Nor Azila Khalid, Noryati Mustapa

Abstract:

Quarry dust is a quarry end product from rock crushing processes and a concentrated material used as an alternative to fine aggregates for concreting purposes. In quarrying activities, the rocks are crushed into aggregates of varying sizes, from 75 mm down to less than 4.5 mm, the finest of which is categorized as quarry dust. The quarry dust is usually considered waste and not utilized as a recycled aggregate product. The dumping of quarry dust at the quarry plant poses the risk of environmental pollution and health hazards. Therefore, this research is an attempt to identify the potential of quarry dust as an alternative building material that would reduce material and construction costs, as well as help mitigate the depletion of natural resources. The objectives are to conduct material characterization and evaluate the properties of fresh and hardened engineering bricks with quarry dust in the mix proportion. The microstructures of the quarry dust and the bricks were investigated using scanning electron microscopy (SEM), and the results suggest that the shape and surface texture of quarry dust are a combination of hard and angular formations. The chemical composition of the quarry dust was also evaluated using X-ray fluorescence (XRF) and compared against sand and concrete. The quarry dust was found to have a higher presence of alumina (Al₂O₃), indicating the possibility of an early strength effect for the brick. Utilizing quarry dust waste as a replacement material has the potential to conserve non-renewable resources as well as provide a viable alternative to the disposal of current quarry waste.

Keywords: building materials, cement replacement, quarry microstructure, quarry product, sustainable materials

Procedia PDF Downloads 156
172 Efficient Human Motion Detection Feature Set by Using Local Phase Quantization Method

Authors: Arwa Alzughaibi

Abstract:

Human motion detection is a challenging task due to a number of factors including variable appearance, posture, and a wide range of illumination conditions and backgrounds. So, the first need of such a model is a reliable feature set that can discriminate between a human and a non-human form with a fair amount of confidence even under difficult conditions. By having richer representations, the classification task becomes easier and improved results can be achieved. The aim of this paper is to investigate reliable and accurate human motion detection models that are able to detect human motion accurately under varying illumination levels and backgrounds. Different sets of features are tried and tested, including Histogram of Oriented Gradients (HOG), Deformable Parts Model (DPM), Local Decorrelated Channel Features (LDCF), and Aggregate Channel Features (ACF). We propose an efficient and reliable human motion detection approach by combining the histogram of oriented gradients (HOG) and local phase quantization (LPQ) as the feature set, and by implementing a search pruning algorithm based on optical flow to reduce the number of false positives. Experimental results show that combining the local phase quantization descriptor and the histogram of oriented gradients performs well over a large range of illumination conditions and backgrounds compared with state-of-the-art human detectors. The area under the ROC curve (AUC) of the proposed method reached 0.781 for the UCF dataset and 0.826 for the CDW dataset, which indicates that it performs better than the HOG, DPM, LDCF, and ACF methods.
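
A minimal sketch of a simplified LPQ descriptor is given below: it takes the signs of four low-frequency local Fourier coefficients, packs them into 8-bit codes, and histograms the codes over the image. The window size is assumed, the standard decorrelation/whitening step of full LPQ is omitted, and in the combined detector this histogram would be concatenated with HOG features; none of this is the authors' exact implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_histogram(image, win=7):
    """Simplified Local Phase Quantization: 8-bit codes from the signs of four
    low-frequency STFT coefficients, histogrammed over the image."""
    r = (win - 1) // 2
    x = np.arange(-r, r + 1)
    a = 1.0 / win
    w0 = np.ones_like(x, dtype=complex)          # constant basis
    w1 = np.exp(-2j * np.pi * a * x)             # 1-D complex exponential basis
    img = image.astype(float)

    def local_fourier(row_kernel, col_kernel):
        # Separable 2-D convolution with the complex window functions
        tmp = convolve2d(img, col_kernel[:, None], mode="same")
        return convolve2d(tmp, row_kernel[None, :], mode="same")

    freqs = [local_fourier(w1, w0), local_fourier(w0, w1),
             local_fourier(w1, w1), local_fourier(w1, np.conj(w1))]

    # Quantize the sign of the real and imaginary parts of each coefficient (8 bits)
    codes = np.zeros(img.shape, dtype=int)
    bit = 0
    for f in freqs:
        codes += (np.real(f) > 0).astype(int) << bit
        codes += (np.imag(f) > 0).astype(int) << (bit + 1)
        bit += 2

    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64))      # synthetic grayscale frame
descriptor = lpq_histogram(frame)
print("LPQ descriptor length:", descriptor.shape[0], "sum:", round(descriptor.sum(), 3))
```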

Keywords: human motion detection, histograms of oriented gradient, local phase quantization, local phase quantization

Procedia PDF Downloads 235
171 Government Final Consumption Expenditure and Household Consumption Expenditure NPISHS in Nigeria

Authors: Usman A. Usman

Abstract:

Undeniably, unlike the Classical view, the Keynesian perspective on the aggregate demand side holds a significant position in the policy, growth, and welfare of Nigeria due to government involvement and the ineffective demand of a population living on poor per capita income. This study seeks to investigate the effect of government final consumption expenditure and financial deepening on households' and NPISHs' final consumption expenditure using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model. The results of the study revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and gross fixed capital formation posit a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), which is another proxy for financial deepening, and the coefficient of FDI have a positive effect on household final consumption expenditure in the long run. Since gross fixed capital formation stimulates household consumption expenditure, this study recommends a legal framework to support investment as a panacea for increasing household income and consumption and reducing poverty in Nigeria. This should therefore be a key central component of policy.
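
A minimal sketch of the estimation pipeline named above (unit-root test, Johansen cointegration test, then a VECM) using statsmodels is shown below; the simulated series, lag order, and cointegration rank are placeholders, not the study's Nigerian data or specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Simulated annual series standing in for household consumption, government final
# consumption expenditure, and a financial-deepening proxy (1981-2019 length)
rng = np.random.default_rng(0)
n = 39
trend = np.cumsum(rng.normal(0.5, 1.0, n))
data = pd.DataFrame({
    "hh_consumption": trend + rng.normal(0, 0.5, n),
    "gov_consumption": 0.6 * trend + rng.normal(0, 0.5, n),
    "cpsgdp": 0.3 * trend + rng.normal(0, 0.5, n),
})

# ADF unit-root test on one series (repeat per variable in practice)
print("ADF p-value (hh_consumption):", adfuller(data["hh_consumption"])[1])

# Johansen cointegration test, then a VECM with one cointegrating relation (assumed)
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("Johansen trace statistics:", joh.lr1)

vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("long-run (beta) coefficients:\n", vecm.beta)
```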

Keywords: government final consumption expenditure, household consumption expenditure, vector error correction model, cointegration

Procedia PDF Downloads 29
170 Contextual Enablers and Behaviour Outputs for Action of Knowledge Workers

Authors: Juan-Gabriel Cegarra-Navarro, Alexeis Garcia-Perez, Denise Bedford

Abstract:

This paper provides guidelines for what constitutes a knowledge worker. Many graduates from non-managerial domains adopt, at some point in their professional careers, management roles at different levels, ranging from team leaders through to executive leadership. This is particularly relevant for professionals from an engineering background. Moving from a technical to an executive level requires an understanding of those behaviour management techniques that can motivate and support individuals and their performance. Further, the transition to management also demands a shift of contextual enablers from tangible to intangible resources, which allows individuals to create new capacities, competencies, and capabilities. In this dynamic process, the knowledge worker becomes the key individual who can help members of the management board to transform information into relevant knowledge. However, despite its relevance in shaping the future of the organization in its transition to the knowledge economy, the role of the knowledge worker has not yet been studied to an appropriate level in the current literature. In this study, the authors review both the contextual enablers and the behaviour outputs related to the role of the knowledge worker and relate these to the knowledge worker's ability to deal with everyday management issues such as knowledge heterogeneity, varying motivations, information overload, or outdated information. This study highlights that the aggregate of capacities, competences, and capabilities (CCCs) can be defined as knowledge structures, and it proposes several contextual enablers and behaviour outputs that knowledge workers can use to work cooperatively and to acquire and distribute knowledge. Therefore, this study contributes to a better comprehension of how CCCs can be managed at different levels through their contextual enablers and behaviour outputs.

Keywords: knowledge workers, capabilities, capacities, competences, knowledge structures

Procedia PDF Downloads 133
169 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of utmost importance for reliable, profitable, and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus their efforts on minimising trips and failed starts. The performance of these CTQs is measured with two metrics, MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through: 1. real-time machine operational parameters available remotely, capturing the signature of a malfunction including the related boundary conditions; 2. a real-time alerting system based on analytics, available remotely; 3. remote access to trip logs and alarms from the control system to identify the cause of events; 4. continuous support to field engineers through remote connection with subject matter experts; 5. live tracking of key CTQs; 6. benchmarking against the fleet; 7. breaking down the cause of failure to component level; 8. investigating top contributors and identifying design and operational root causes; 9. implementing corrective and preventive actions; 10. assessing the effectiveness of implemented solutions using reliability growth models; and 11. developing analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with large cost savings for plant operators. This presentation explains this approach and provides successful case studies, in particular where 12 LNG and pipeline operators with about 140 gas compression line-ups have adopted these techniques, significantly reduced the number of trips, and improved MTBT.
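
The two metrics named above can be illustrated with a short calculation; the trip timestamps and start counts below are invented examples of the kind of data pulled from the remote trip logs, and the whole observation window is treated as operating time for simplicity.

```python
from datetime import datetime

# Hypothetical trip log for one unit (timestamps of trips) and start statistics
trips = [datetime(2023, 1, 10), datetime(2023, 3, 2), datetime(2023, 7, 18)]
observation_start = datetime(2023, 1, 1)
observation_end = datetime(2023, 12, 31)

attempted_starts = 40
successful_starts = 37

# MTBT: operating hours in the observation window divided by the number of trips
operating_hours = (observation_end - observation_start).total_seconds() / 3600
mtbt_hours = operating_hours / len(trips)

# SR: fraction of start attempts that succeeded
starting_reliability = successful_starts / attempted_starts

print(f"MTBT ≈ {mtbt_hours:.0f} h, SR = {starting_reliability:.1%}")
```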

Keywords: reliability, availability, sustainability, digital infrastructure, weibull, effectiveness, automation, trips, fail start

Procedia PDF Downloads 53
168 Investigation of the Mechanical Performance of Hot Mix Asphalt Modified with Crushed Waste Glass

Authors: Ayman Othman, Tallat Ali

Abstract:

The successive increase of generated waste materials like glass has led to many environmental problems. Using crushed waste glass in hot mix asphalt paving has been considered as an alternative to landfill disposal and recycling. This paper discusses the possibility of utilizing crushed waste glass as a part of the fine aggregate in hot mix asphalt in Egypt. This is done through evaluating the mechanical properties of asphalt concrete mixtures mixed with waste glass and determining the appropriate glass content that can be adopted in asphalt pavement. Four asphalt concrete mixtures with various glass contents, namely 0%, 4%, 8%, and 12% by weight of the total mixture, were studied. Evaluation of the mechanical properties included performing Marshall stability, indirect tensile strength, fracture energy, and unconfined compressive strength tests. Laboratory testing revealed an enhancement in both compressive strength and Marshall stability test parameters when crushed glass was added to the asphalt concrete mixtures. This enhancement was accompanied by a very slight reduction in both indirect tensile strength and fracture energy when a glass content of up to 8% was used. Adding more than 8% glass caused a sharp reduction in both indirect tensile strength and fracture energy. Testing results also showed a reduction in the optimum asphalt content when waste glass was used. Measurements of the heat loss rate of asphalt concrete mixtures mixed with glass revealed their ability to hold heat longer than conventional mixtures. This can have useful application in asphalt paving during cold weather or when a long period of post-mix transportation is needed.

Keywords: waste glass, hot mix asphalt, mechanical performance, indirect tensile strength, fracture energy, compressive strength

Procedia PDF Downloads 292
167 A Practice Model for Quality Improvement in Concrete Block Mini Plants Based on Merapi Volcanic Sand

Authors: Setya Winarno

Abstract:

Due to the abundant Merapi volcanic sand in Yogyakarta City, many local people have utilized it for mass production of concrete blocks through mini plants, although their products are low in quality. This paper presents a practice model for quality improvement in this situation in order to meet current customer interest in good-quality construction materials. The method of this research was a techno-economic evaluation through laboratory tests and interviews. Samples of twenty existing concrete blocks made by local people had an average compressive strength of only 19.4 kg/cm2, which was lower than the minimum Indonesian standard of 25 kg/cm2. Repeated laboratory testing showed that, to fulfil the standard, the water-cement ratio of the concrete mix design should not be more than 0.64 on a weight basis. The proportion of sand as aggregate should not be more than 9 parts to 1 part of Portland cement by volume. Considering the production cost, the basic price was Rp 1,820 for each concrete block, compared to Rp 2,000 as the normal competitive market price. Finally, the model describes (a) a maximum water-cement ratio of 0.64, (b) a maximum sand-to-cement proportion of 9:1, (c) a basic price of about Rp 1,820, and (d) strategies to win the competitive market in mass production of concrete blocks: focus on quality, building relationships with consumers, rapid response to customer needs, continuous innovation through product diversification, promotion on social media, and strict financial management.

Keywords: concrete block, good quality, improvement model, diversification

Procedia PDF Downloads 498
166 A Molecular-Level Study of Combining the Waste Polymer and High-Concentration Waste Cooking Oil as an Additive on Reclamation of Aged Asphalt Pavement

Authors: Qiuhao Chang, Liangliang Huang, Xingru Wu

Abstract:

In the United States, over 90% of roads are paved with asphalt. The aging of asphalt is the most serious problem that causes the deterioration of asphalt pavement. Waste cooking oils (WCOs) have been found to restore the properties of aged asphalt and to promote the reuse of aged asphalt pavement. In our previous study, it was found that the optimal WCO concentration to restore an aged asphalt sample should be in the range of 10–15 wt% of the aged asphalt sample. Once the WCO concentration exceeds 15 wt%, further increases in WCO concentration can weaken some important properties of the asphalt sample, such as cohesive energy density, surface free energy density, bulk modulus, and shear modulus. However, maximizing the utilization of WCO can create environmental and economic benefits. Therefore, in this study, a new idea of using waste polymer as a second additive to restore WCO-modified asphalt containing a high concentration of WCO (15–25 wt%) is proposed, which has never been reported before. In this way, both waste polymer and WCO can be utilized. Molecular dynamics simulation is used to study the effect of the waste polymer on the properties of WCO-modified asphalt and to understand the corresponding mechanism at the molecular level. The radial distribution function, self-diffusion, cohesive energy density, surface free energy density, bulk modulus, shear modulus, and the adhesion energy between asphalt and aggregate are analyzed to validate the feasibility of combining the waste polymer and WCO to restore aged asphalt. Finally, the optimal concentrations of waste polymer and WCO are determined.
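
One of the listed properties, self-diffusion, is typically obtained from the mean squared displacement via the Einstein relation; the sketch below shows that post-processing step on a synthetic random-walk trajectory (the trajectory, units, and time step are placeholders, not output from the study's simulations).

```python
import numpy as np

rng = np.random.default_rng(3)
dt_ps = 0.1                                   # time between stored frames (assumed, ps)
steps, n_atoms = 2000, 50
# Synthetic 3-D trajectory: cumulative random displacements standing in for MD output (nm)
traj = np.cumsum(rng.normal(0, 0.01, size=(steps, n_atoms, 3)), axis=0)

# Mean squared displacement relative to the initial frame, averaged over atoms
msd = np.mean(np.sum((traj - traj[0]) ** 2, axis=2), axis=1)
time_ps = np.arange(steps) * dt_ps

# Einstein relation in 3-D: MSD ≈ 6 D t, so D is the slope of the linear regime divided by 6
slope = np.polyfit(time_ps[steps // 2:], msd[steps // 2:], 1)[0]
D = slope / 6.0
print(f"estimated self-diffusion coefficient ≈ {D:.3e} nm^2/ps")
```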

Keywords: reclaim aged asphalt pavement, waste cooking oil, waste polymer, molecular dynamics simulation

Procedia PDF Downloads 187
165 Coronin 1C and miR-128A as Potential Diagnostic Biomarkers for Glioblastoma Multiforme

Authors: Denis Mustafov, Emmanouil Karteris, Maria Braoudaki

Abstract:

Glioblastoma multiforme (GBM) is a heterogeneous primary brain tumour that kills most affected patients. To the authors' best knowledge, despite all research efforts there is no early diagnostic biomarker for GBM. MicroRNAs (miRNAs) are short non-coding RNA molecules which are deregulated in many cancers. The aim of this research was to determine miRNAs with a diagnostic impact and to potentially identify promising therapeutic targets for glioblastoma multiforme. In silico analysis was performed to identify deregulated miRNAs with diagnostic relevance for glioblastoma. The expression profiles of the chosen miRNAs were then validated in vitro in the human glioblastoma cell lines A172 and U-87MG. Briefly, RNA extraction was carried out using the Trizol method, whilst miRNA extraction was performed using the mirVana miRNA isolation kit. Quantitative real-time polymerase chain reaction was performed to verify their expression. The presence of five target proteins within the A172 cell line was evaluated by Western blotting. The expression of the CORO1C protein within 32 GBM cases was examined via immunohistochemistry. The miRNAs identified in silico included miR-21-5p, miR-34a, and miR-128a. These miRNAs were shown to target deregulated GBM genes, such as CDK6, E2F3, BMI1, JAG1, and CORO1C. miR-34a and miR-128a showed low expression profiles in comparison to the control RNU-44 in both GBM cell lines, suggesting tumour suppressor properties. In contrast, miR-21-5p demonstrated greater expression, indicating that it could potentially function as an oncomiR. Western blotting revealed expression of all five proteins within the A172 cell line. In silico analysis also suggested that CORO1C is a target of miR-128a and miR-34a. Immunohistochemistry demonstrated that 75% of the GBM cases showed moderate to high expression of the CORO1C protein. Greater understanding of the deregulated expression of miR-128a and the upregulation of CORO1C in GBM could potentially lead to the identification of a promising diagnostic biomarker signature for glioblastomas.

Keywords: non-coding RNAs, gene expression, brain tumours, immunohistochemistry

Procedia PDF Downloads 62
164 Government Final Consumption Expenditure Financial Deepening and Household Consumption Expenditure NPISHs in Nigeria

Authors: Usman A. Usman

Abstract:

Undeniably, unlike the Classical view, the Keynesian perspective on the aggregate demand side holds a significant position in the policy, growth, and welfare of Nigeria due to government involvement and the ineffective demand of a population living on poor per capita income. This study seeks to investigate the effect of government final consumption expenditure and financial deepening on households' and NPISHs' final consumption expenditure using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model. The results of the study revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and gross fixed capital formation posit a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), which is another proxy for financial deepening, and the coefficient of FDI have a positive effect on household final consumption expenditure in the long run. Since gross fixed capital formation stimulates household consumption expenditure, this study recommends a legal framework to support investment as a panacea for increasing household income and consumption and reducing poverty in Nigeria. This should therefore be a key central component of policy.

Keywords: household, government expenditures, vector error correction model, johansen test

Procedia PDF Downloads 36
163 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling

Authors: Erfan Niazi, Marianne Fenech

Abstract:

Red blood cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental in analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break in two-phase flow systems in order to find flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to find the RBC aggregation rate in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the velocity of sedimentation are extracted using image processing techniques. Based on the data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.
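
A minimal sketch of how an aggregation rate constant could be extracted from the image-processing output is shown below, assuming second-order (Smoluchowski-type) aggregation kinetics so that the inverse aggregate count grows linearly in time; the counts and frame times are invented, and the study's full population balance model is more detailed than this.

```python
import numpy as np

# Hypothetical number of aggregates detected per video frame during sedimentation
time_s = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
n_aggregates = np.array([500, 360, 280, 230, 195, 170], dtype=float)

# Second-order aggregation kinetics: dN/dt = -k N^2  =>  1/N(t) = 1/N0 + k t
inv_n = 1.0 / n_aggregates
k, intercept = np.polyfit(time_s, inv_n, 1)

print(f"fitted aggregation rate constant k ≈ {k:.2e} 1/(aggregate·s)")
print(f"implied initial aggregate count ≈ {1 / intercept:.0f}")
```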

Keywords: red blood cell, rouleaux, microfluidics, image processing, population balance modeling

Procedia PDF Downloads 333
162 Policy Effectiveness in the Situation of Economic Recession

Authors: S. K. Ashiquer Rahman

Abstract:

Proper policy handling might not be able to attain its target during some recessions, e.g., pandemic-led crises, owing to shocks to economic variables. In this situation, the central bank implements monetary policy, choosing to increase exogenous expenditure and the level of money supply consecutively to boost economic growth; the question is whether monetary policy is relatively more effective than fiscal policy in altering the real output growth of a country, or whether both are relatively effective in driving the output growth of a country. The dispute regarding the relationship between monetary policy and fiscal policy is centered on the inflationary penalty of deficit financing by the fiscal authority. Facing the latest shocks to economic variables as well as the pandemic-led crises, central banks around the world confronted a general dilemma between increasing rates and decreasing rates to sustain economic activity. While prices remained fundamentally unaffected, aggregate demand was also significantly and negatively affected by the outbreak of the COVID-19 pandemic. To empirically investigate the effects of the economic shocks associated with the COVID-19 pandemic, the paper considers the effectiveness of monetary policy and fiscal policy as linked to the adjustment mechanism of different economic variables. To examine the effects of these shocks on the effectiveness of monetary policy and fiscal policy in driving the output growth of a country, this paper uses a simultaneous equations model estimated with the Two-Stage Least Squares (2SLS) and Ordinary Least Squares (OLS) methods.
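
The 2SLS estimator used for the simultaneous equations model can be written out directly; the sketch below implements the two stages with plain numpy on simulated data (the variable names and the single-equation setup are illustrative, not the paper's IS-LM system).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Simulated system: y depends on an endogenous regressor x, instrumented by z
z = rng.normal(size=(n, 2))                    # instruments (e.g., exogenous policy variables)
u = rng.normal(size=n)
x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)   # endogenous regressor
y = 2.0 + 1.5 * x + u + rng.normal(size=n)     # structural equation of interest

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress the endogenous regressor on the instruments (plus a constant)
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)

# Stage 2: regress y on the fitted values of x
X2 = np.column_stack([np.ones(n), x_hat])
beta_2sls = ols(X2, y)

# Naive OLS for comparison (biased because x is correlated with the error u)
beta_ols = ols(np.column_stack([np.ones(n), x]), y)
print("2SLS estimate of slope:", round(beta_2sls[1], 3))
print("OLS estimate of slope:", round(beta_ols[1], 3))
```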

Keywords: IS-LM framework, pandemic. Economics variables shocks, simultaneous equations model, output growth

Procedia PDF Downloads 64