Search results for: small states
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7567

1777 Determinants of Budget Performance in an Oil-Based Economy

Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi

Abstract:

Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation reports. A critical review of these documents shows significant variations in the five macroeconomic variables that serve as inputs to each Presidential budget: oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%), and inflation rate (%). This results in underperformance of the Federal budget's expected output in terms of non-oil and oil revenue aggregates. This paper evaluates, first, the existing variance between budgeted and actual figures; second, the relationship and causality between the determinants of the Federal fiscal budget assumptions; and finally, the determinants of the FGN's gross oil revenue. The paper employs descriptive statistics, the autoregressive distributed lag (ARDL) model, and a profit-oil probabilistic model to achieve these objectives. The ARDL model allows for both static and dynamic effects of the independent variables on the dependent variable, unlike a static model that accounts for fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series that are integrated of the same order. Finally, the ARDL model is known to generate valid results even with a small sample size. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, produced oil quantity, and foreign exchange rate. There is also a short-run relationship between oil revenue and the same determinants. There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate, and foreign exchange rate. The Granger causality test results show a unidirectional causality between oil revenue and its determinants, and likewise between non-oil revenue and its determinants. The Federal budget assumptions explain only 68% of oil revenue and 62% of non-oil revenue. The profit-oil model identifies production sharing contracts, joint ventures, and modified carry arrangements as the greatest contributors to the FGN's gross oil revenue. This provides empirical justification for the selected macroeconomic variables used in Federal budget design and performance evaluation. The research recommends that further variables, debt and money supply, be included in the Federal budget design to explain Federal budget revenue performance more fully.
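
The ARDL estimation described above can be sketched in a few lines. Below is a minimal illustration using the ARDL implementation in Python's statsmodels on simulated series; the variable names, lag orders, and data are assumptions for illustration, not the paper's dataset or specification.

```python
# Hedged sketch: an ARDL(1; 1,1,1) of oil revenue on assumed determinants.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(0)
n = 60  # deliberately small sample; ARDL remains valid here, as the abstract notes
oil_price = pd.Series(60 + 0.1 * rng.normal(0, 5, n).cumsum(), name="oil_price")
quantity = pd.Series(2.0 + rng.normal(0, 0.05, n), name="oil_quantity")
fx_rate = pd.Series(300 + 0.05 * rng.normal(0, 10, n).cumsum(), name="fx_rate")
revenue = 0.5 * oil_price + 50 * quantity + 0.02 * fx_rate + rng.normal(0, 2, n)

exog = pd.concat([oil_price, quantity, fx_rate], axis=1)
res = ARDL(revenue, lags=1, exog=exog, order=1).fit()  # one lag of y and of each x
print(res.summary())
```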

Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue

Procedia PDF Downloads 172
1776 Power-Sharing Politics: A Panacea to Conflict Resolution and Stability in Africa

Authors: Emmanuel Dangana Monday

Abstract:

Africa as a continent has been ravaged and bedeviled by a series of political conflicts associated with politics and power-sharing maneuvering. As a result, it has become the most unstable continent in the world in terms of power distribution and stable political culture. This paper examines the efficacy of conscious and deliberate power-sharing strategies in settling or resolving political conflicts in Africa through the creation of states, revenue and resource allocation, and office distribution systems. The study is concerned with the spatial impact of conflicts generated in some prominent African countries in which power-sharing could have been a solution. Ethno-regional elite groups are identified as the major actors in the struggles for the distribution of territorial, economic, and political power in Africa. The struggle for power has become so intense that it has degenerated into conflicts and wars between and within political classes and parties. Secondary data and deductive techniques were used in data collection and analysis. It was found that power-sharing has become an indispensable tool to curb the incessant political and power crises in Africa. It is one of the most tolerable modalities for mediating elite competition, since it reflects the interests of both the dominant and the perceived marginalized groups. The study recommends that countries and regions of political, ethnic, and religious differences in Africa employ power-sharing strategies in order to avoid unnecessary political tension and the resultant crises. Interest groups should always come to the negotiating table to reach realistic and durable compromises that secure a peaceful and stable Africa.

Keywords: Africa, power-sharing, conflicts, politics and political stability

Procedia PDF Downloads 325
1775 Monitoring Potential Temblor Localities as a Supplemental Risk Control System

Authors: Mikhail Zimin, Svetlana Zimina, Maxim Zimin

Abstract:

Without question, the basic method of preventing human and material losses is to provide adequate strength in constructions. At the same time, seismic load has a stochastic character, so there is always some danger of earthquake forces exceeding the selected design load. This risk is very low, but the consequences of such events may be extremely serious. Also dangerous are occasional mistakes in seismic zoning, soil conditions changing before temblors, and failure to take into account hazardous natural phenomena caused by earthquakes. Besides, it is known that temblors detrimentally affect the environmental situation in the regions where they occur, causing panic and worsening the course of various diseases. This may lead to mistakes by the personnel of hazardous production facilities, such as gas and oil production and distribution, which may provoke severe accidents. In addition, gas and oil pipelines often run long distances and cross many perilous zones, in contrast with buildings, which increases the risk of heavy accidents. In such cases, complex monitoring of potential earthquake localities would be relevant. Even though the number of successful real-time earthquake forecasts is not great, it is well in excess of what would be expected under random guessing. The time-lapse study and analysis performed consist of searching for seismic, biological, meteorological, and light earthquake precursors, processing these data with the help of fuzzy sets, collecting weather information, utilizing a terrain database, and computing the risk of slope processes under a temblor in a given setting. The work was done in a real-time environment, and broadly acceptable results were obtained. Observations from seismic recording systems already in place are used. Furthermore, a look-back study of the precursors of known earthquakes was done: the situations before the Ashkhabad, Tashkent, and Haicheng seismic events were analyzed, and reasonably good findings were obtained. The results of earthquake forecasts can be used for predicting dangerous natural phenomena caused by temblors, such as avalanches and mudslides. They may also be utilized for the prophylaxis of some diseases and their complications. Relevant software has been developed as well. It should be emphasized that such monitoring does not require serious financial expense and can be performed by a small group of professionals. Thus, complex monitoring of potential earthquake localities, including short-term earthquake forecasts and analysis of the possible hazardous consequences of temblors, may further the safety of pipeline facilities.
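
As a rough illustration of how fuzzy sets can fuse heterogeneous precursor readings into a single alarm degree, consider the following sketch. The membership shapes, thresholds, and the averaging rule are assumptions for illustration, not the authors' actual system.

```python
# Hypothetical fuzzy fusion of normalized precursor indicators.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Assumed normalized readings: seismicity anomaly, animal behavior reports,
# meteorological anomaly, light phenomena.
readings = {"seismic": 0.70, "biological": 0.40, "meteo": 0.55, "light": 0.20}
grades = {k: trapezoid(v, 0.1, 0.5, 0.9, 1.0) for k, v in readings.items()}

alarm = sum(grades.values()) / len(grades)  # simple averaging as the fusion rule
print(f"memberships: {grades}")
print(f"alarm degree: {alarm:.2f}")  # 0 = quiet, 1 = strong combined precursor signal
```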

Keywords: risk, earthquake, monitoring, forecast, precursor

Procedia PDF Downloads 22
1774 Epigenetics and Archeology: A Quest to Re-Read Humanity

Authors: Salma A. Mahmoud

Abstract:

Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas to address gaps in our current knowledge of the patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress, paving the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and by utilizing these valuable remains, we can interpret past events, cultures, and populations. In addition to their archeological, historical, and anthropological importance, studying bones has great implications in other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, societal characteristics, and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors, which can explain only a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status, and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as in novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigeneticists, using a plethora of techniques (biological, computational, and statistical). Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector for studying human beings in several contexts (social, cultural, and environmental) and strengthen its essential role as a model system for investigating and reconstructing various cultural, political, and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone materials (modern and ancient), and discuss the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology and will highlight its essentiality as an interdisciplinary vector and a key material in archeological research.

Keywords: epigenetics, archeology, bones, chromatin, methylome

Procedia PDF Downloads 108
1773 Semi-Autonomous Surgical Robot for Pedicle Screw Insertion on ex vivo Bovine Bone: Improved Workflow and Real-Time Process Monitoring

Authors: Robnier Reyes, Andrew J. P. Marques, Joel Ramjist, Chris R. Pasarikovski, Victor X. D. Yang

Abstract:

Over the past three decades, surgical robotic systems have demonstrated their ability to improve surgical outcomes. The LBR Med is a collaborative robotic arm that is meant to work with a surgeon to streamline surgical workflow. It has 7 degrees of freedom and thus can be easily oriented. Position and torque sensors at each joint allow it to maintain a position accuracy of 150 µm with real-time force and torque feedback, making it ideal for complex surgical procedures. Spinal fusion procedures involve the placement of as many as 20 pedicle screws, requiring a great deal of accuracy due to their proximity to the spinal canal and surrounding vessels. Any deviation from the intended path can lead to major surgical complications. Assistive surgical robotic systems are meant to serve as collaborative devices easing the workload of the surgeon, thereby improving pedicle screw placement by mitigating fatigue-related inaccuracies. Moreover, robotic spinal systems have shown marked improvements over conventional freehand techniques in both screw placement accuracy and fusion quality and have greatly reduced the need for screw revision, both intraoperatively and post-operatively. However, current assistive spinal fusion robots, such as the ROSA Spine, are limited in functionality to positioning surgical instruments. While they offer a small degree of improvement in pedicle screw placement accuracy, they do not alleviate surgeon fatigue, nor do they provide real-time force and torque feedback during screw insertion. We propose a semi-autonomous surgical robot workflow for spinal fusion in which the surgeon guides the robot to its initial position and orientation, and the robot drives the pedicle screw accurately into the vertebra. Here, we demonstrate feasibility by inserting pedicle screws into ex vivo bovine rib bone. The robot monitors position, force, and torque with respect to predefined values selected by the surgeon to ensure the highest possible spinal fusion quality. The workflow alleviates the strain on the surgeon by having the robot perform the screw placement, while the ability to monitor the process in real time keeps the surgeon in the system loop. The level of autonomy we have chosen for the robot reflects its ability to safely collaborate with the surgeon in the operating room without external navigation systems.

Keywords: ex vivo bovine bone, pedicle screw, surgical robot, surgical workflow

Procedia PDF Downloads 168
1772 Formation of an Empire in the 21st Century: Theoretical Approach in International Relations and a Worldview of the New World Order

Authors: Rami Georg Johann

Abstract:

Against the background of the current geopolitical constellations, the author examines various empire models, which are discussed and compared with each other with regard to their stability and functioning. The focus is on the fifth concept as a possible new world order in the 21st century. All the empires to be designed are conceptualised on the basis of one, two, three, four, or five worlds, each world made up of a different constellation of states and related coalitions. All systems are discussed in detail. The one-world system, the "Western Empire," is presented as a possible solution for a new world order in the 21st century (the fifth concept). The term "Western" in "Western Empire" refers to the Western concept that emerged after World War II, itself the result of two horrible world wars in the 20th century. With this in mind, the fifth concept forms a stable empire system, the "Western Empire," through political measures tied to two issues. This world order thus provides significantly higher long-term stability than all the other empire models (comprising five, four, three, or two worlds). Confrontations and threats of war are reduced to a minimum. The two issues mentioned are "merger" and "competition"; these are the main differences in forming an empire compared with all empires and realms in the history of mankind. The fifth concept of this theory, the "Western Empire," acts explicitly as a counter-model: it is formed by the merger of world powers without war, creating a world order without competition. This merged entity secures long-term peace, stability, democratic values, freedom, human rights, equality, and justice in the new world order.

Keywords: empire formation, theory of international relations, Western Empire, world order

Procedia PDF Downloads 150
1771 Brand Resonance Strategy for Long-Term Market Survival: Does Brand Resonance Matter for SMEs? An Investigation into SMEs' Digital Branding (Facebook, Twitter, Instagram and Blog) Activities and Strong Brand Development

Authors: Noor Hasmini Abd Ghani

Abstract:

Brand resonance is among the newer strategies receiving growing attention from larger companies concerned with long-term market survival. Brand resonance emphasizes two main characteristics, intensity and activity, that are able to generate a psychological bond and an enduring relationship between a brand and a consumer. This strong attachment relationship links brand resonance with the concept of the consumer-brand relationship (CBR), which confers competitive advantage for long-term market survival. The brand resonance approach applies not only in the context of larger companies but can be adapted to Small and Medium Enterprises (SMEs) as well. SMEs are recognized as a vital pillar of the world economy in both developed and emerging countries, owing to their contributions to economic growth, such as employment opportunities, wealth creation, and poverty reduction. In particular, SMEs are clearly pivotal to the well-being of the Malaysian economy and society: they provide jobs for 66% of the workforce and contribute 40% of GDP. Among its several sectors, the SME services category covering the Food & Beverage (F&B) sector is one of the high-potential industries in Malaysia. For these reasons, a strong brand, or brand equity, is vital to SMEs' long-term market survival. However, there are still few appropriate strategies for developing their brand equity. The difficulties had never been so evident until COVID-19 swept across the globe in 2020. Since the pandemic began, more than 150,000 SMEs in Malaysia have shut down, leaving more than 1.2 million people jobless. As SMEs are a pillar of every economy in the world, and given the negative effect of COVID-19 on their economic growth, their protection has become more important than ever. Therefore, a focus on strategies able to develop strong SME brands is compulsory; hence, the strategy of brand resonance is introduced in this study. This study mainly aims to investigate the impact of CBR as a predictor and mediator in the context of social media marketing (SMM) activities on SMEs' e-brand equity (or strong brand) building. The study employed a quantitative research design based on an electronic survey method with 300 valid responses. Interestingly, the results revealed the important role of CBR both as predictor and as mediator in the context of SME SMM and brand equity development. Further, the study provides several theoretical and practical implications that can benefit SMEs in enhancing their strategic marketing decisions.

Keywords: SME brand equity, SME social media marketing, SME consumer brand relationship, SME brand resonance

Procedia PDF Downloads 60
1770 Identifying Areas on the Pavement Where Rain Water Runoff Affects Motorcycle Behavior

Authors: Panagiotis Lemonakis, Theodoros Alimonakis, George Kaliabetsos, Nikos Eliou

Abstract:

It is very well known that certain vertical and longitudinal slopes have to be ensured in order to achieve adequate rainwater runoff from the pavement. Selecting longitudinal slopes between the turning points of the vertical curves that meet the afore-mentioned requirement does not by itself ensure adequate drainage, because the same condition must also be satisfied along the transition curves. In this way, none of the pavement edge slopes (nor any other spot on the pavement) will run counter to the longitudinal slope of the rotation axis. Horizontal and vertical alignment must be properly combined so that the resultant slope of the road does not take small values; hence, checks must be performed at every cross section and every chainage of the road. The present research investigates rainwater runoff from the road surface in order to identify the conditions under which areas of inadequate drainage are created, to analyze rainwater behavior in such areas, to provide design examples of good and bad drainage zones, and to identify certain motorcycle types that might encounter hazardous situations due to the presence of a water film between the pavement and both of their tires, resulting in loss of traction. Moreover, it investigates the combination of longitudinal and cross slope values in critical pavement areas. It should be pointed out that the drainage gradient is analytically calculated for the whole road width and not just as one oblique slope per chainage (the combination of longitudinal grade and cross slope). Lastly, various combinations of horizontal and vertical design are presented, indicating the crucial zones of poor pavement drainage. The key conclusion of the study is that any type of motorcycle will travel inside the area of improper runoff for a certain time frame, which depends on the speed and the trajectory that the rider chooses along the transition curve. Taking into account that on this section the rider has to lean the motorcycle and hence reduce the contact area of the tire with the pavement, it is apparent that any variation in the friction value due to the presence of a water film may lead to serious safety problems. Water runoff from the road pavement is improved when, between reverse longitudinal slopes, a crest rather than a sag curve is chosen, particularly when its edges coincide with the edges of the horizontal curve. Lastly, the results of the investigation show that varying the longitudinal slope shifts the center of the poor-runoff area vertically, and that the magnitude of this area increases as the length of the transition curve increases.
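
The resultant drainage gradient mentioned above is the vector combination of longitudinal grade and cross slope, i.e. s = sqrt(s_long^2 + s_cross^2). The sketch below flags poor-drainage zones along an assumed superelevation transition; all geometry values are invented for illustration, not taken from the paper.

```python
# Hedged sketch: resultant drainage gradient along a superelevation transition.
import numpy as np

length = 60.0                    # transition curve length [m], assumed
chainage = np.linspace(0.0, length, 121)
g_long = 0.003                   # longitudinal grade 0.3%, assumed
cross = np.interp(chainage, [0, length], [-0.025, 0.07])  # cross slope -2.5% -> +7%

resultant = np.hypot(g_long, cross)   # drainage gradient at each chainage
poor = resultant < 0.005              # assumed 0.5% minimum-drainage criterion
if poor.any():
    print(f"poor drainage between {chainage[poor].min():.1f} m "
          f"and {chainage[poor].max():.1f} m along the transition")
```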

Keywords: drainage, motorcycle safety, superelevation, transition curves, vertical grade

Procedia PDF Downloads 100
1769 Stigma Associated with Invisible Disabilities and Its Effect on Intended Disclosure in the Workplace

Authors: Jessica Lynne Hicksted

Abstract:

Disability discrimination is a long-standing issue that, despite protections, continues to result in unemployment, underemployment, and lack of advancement for disabled persons. Visible stigma has been researched substantially; however, less is known about the impact of stigma associated with identities that can be concealed. Although researchers have investigated this issue, there is currently no tool to measure this phenomenon. The purpose of this quantitative study was to create and validate a new tool to measure stigma associated with invisible disabilities. The study is grounded in Roberts' conceptual model of professional image construction, integrating social identity, impression management, and organizational behavior; Meisenbach's stigma management communication theory, which addresses vulnerability and resilience to stigma communication by focusing on how individuals encounter and react to perceived stigmas; and Kelley and Michela's causal attribution theory. Participants included 1,412 adults in the United States aged 18 years or older who were currently employed or had been employed within the last 5 years. Confirmatory factor analysis of the new Workplace Invisible Disabilities Experience scale showed excellent fit of the factor structure to the data, χ²/df = 1.855, CFI = .955, RMSEA = .045, p = .0001. The scale has three subscales, Ableism, Advocacy, and Acceptance, with excellent internal consistency reliability. The total score, Advocacy, and Acceptance were associated with the intention to disclose. Implications for positive social change include helping organizations understand the extent of invisible disability stigma, which can help improve workplace performance and satisfaction.
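
A hedged sketch of the confirmatory factor analysis workflow reported above, written with the semopy library on simulated item data. The three-factor structure follows the abstract, but the item names, loadings, and sample are invented, and semopy's fit statistics stand in for whatever software the author actually used.

```python
# Hypothetical three-factor CFA on simulated item responses.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 500
f = rng.normal(size=(n, 3))  # three latent factors
items = {}
for j, factor in enumerate(["abl", "adv", "acc"]):
    for i in range(3):
        items[f"{factor}{i + 1}"] = 0.8 * f[:, j] + rng.normal(0, 0.6, n)
data = pd.DataFrame(items)

desc = """
Ableism    =~ abl1 + abl2 + abl3
Advocacy   =~ adv1 + adv2 + adv3
Acceptance =~ acc1 + acc2 + acc3
"""
model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model).T)  # reports chi2, CFI, RMSEA, etc.
```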

Keywords: invisible disabilities, accommodations, acceptance, social change, workplace inclusion

Procedia PDF Downloads 70
1768 The Role of Demographics and Service Quality in the Adoption and Diffusion of E-Government Services: A Study in India

Authors: Sayantan Khanra, Rojers P. Joseph

Abstract:

Background and Significance: This study analyzes the role of demographic and service quality variables in the adoption and diffusion of e-government services among users in India. The study examines users' perceptions of e-government services and investigates the key variables that are most salient to the Indian populace. Description of the Basic Methodologies: The methodology adopted in this study is hierarchical regression analysis, which helps explore the impact of the demographic variables and the quality dimensions on the willingness to use e-government services in two steps. First, the impact of demographic variables on the willingness to use e-government services is examined. In the second step, quality dimensions are used as inputs to the model to explain variance in excess of the prior contribution of the demographic variables. Present Status: Our study is in the data collection stage, in collaboration with a highly reliable, authentic, and adequate source of user data. Assuming that the population of the study comprises all Internet users in India, a large sample of more than 10,000 random respondents is being approached. Data are being collected using an online survey questionnaire. A pilot survey has already been carried out to refine the questionnaire, with inputs from an expert in management information systems and a small group of users of e-government services in India. The first three questions in the survey pertain to the Internet usage pattern of a respondent and probe whether the person has used e-government services. If the respondent confirms that he/she has used e-government services, then an aggregate of 15 indicators is used to measure the quality dimensions under consideration and the respondent's willingness to use e-government services, on a five-point Likert scale. If the respondent reports that he/she has not used e-government services, then a few optional questions are asked to understand the reason(s) behind this. The last four questions in the survey collect data related to the demographic variables. An Indication of the Major Findings: Based on the extensive literature review carried out to develop several propositions, a preliminary research model is proposed. A major outcome expected at the completion of the study is a research model that helps explain the relationship between the demographic variables, the service quality dimensions, and the willingness to adopt e-government services, particularly in an emerging economy like India. Concluding Statement: Governments of emerging economies and other relevant agencies can use the findings from the study in designing, updating, and promoting e-government services to enhance public participation, which, in turn, would help to improve efficiency, convenience, engagement, and transparency in implementing these services.
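
The two-step hierarchical regression described here can be sketched with statsmodels on simulated data, judging step 2 by the R² it adds over the demographic-only model; all variable names and effect sizes below are placeholders, not the study's actual measures.

```python
# Hedged sketch: two-step hierarchical regression with an R-squared change check.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "education": rng.integers(1, 5, n),          # placeholder demographic variables
    "reliability": rng.normal(3.5, 0.8, n),      # placeholder quality dimensions
    "ease_of_use": rng.normal(3.2, 0.9, n),
})
df["willingness"] = (0.01 * df.age + 0.1 * df.education
                     + 0.4 * df.reliability + 0.3 * df.ease_of_use
                     + rng.normal(0, 0.5, n))

step1 = smf.ols("willingness ~ age + education", df).fit()
step2 = smf.ols("willingness ~ age + education + reliability + ease_of_use", df).fit()
print(f"R2 step1 = {step1.rsquared:.3f}, step2 = {step2.rsquared:.3f}, "
      f"delta R2 = {step2.rsquared - step1.rsquared:.3f}")
```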

Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions

Procedia PDF Downloads 267
1767 Highly Efficient Iron Oxide-Sulfonated Graphene Oxide Catalyst for Esterification and Trans-Esterification Reactions

Authors: Reena D. Souza, Tripti Vats, Prem F. Siril

Abstract:

Esterification of free fatty acid (oleic acid) and transesterification of waste cooking oil (WCO) with ethanol over graphene oxide (GO), GO-Fe₂O₃, sulfonated GO (GO-SO₃H), and Fe₂O₃/GO-SO₃H catalysts were examined in the present study. The iron oxide supported graphene-based acid catalyst (Fe₂O₃/GO-SO₃H) exhibited the highest catalytic activity. GO was prepared by a modified Hummers' method. The GO-Fe₂O₃ nanocomposites were prepared by the addition of NaOH to a solution containing GO and FeCl₃. Sulfonation was done using concentrated sulfuric acid. Transmission electron microscopy (TEM) and atomic force microscopy (AFM) imaging revealed the presence of Fe₂O₃ particles with sizes in the range of 50-200 nm. The crystal structure was analyzed by XRD, and the defect states of graphene were characterized using Raman spectroscopy. The effects of the reaction variables, such as catalyst loading, ethanol-to-acid ratio, reaction time, and temperature, on the conversion of fatty acids were studied. The optimum conditions for the esterification process were a 12:1 molar ratio of alcohol to oleic acid with 5 wt% of Fe₂O₃/GO-SO₃H at 100 °C and a reaction time of 4 h, yielding 99% ethyl oleate. This is because metal oxide supported solid acid catalysts have the advantage of possessing both strong Brønsted and Lewis acid sites. The biodiesel obtained by transesterification of WCO was characterized by ¹H NMR and gas chromatography techniques. XRD patterns of the recycled catalyst showed that the catalyst structure was unchanged up to the 5th cycle, indicating the long life of the catalyst.

Keywords: Fe₂O₃/GO-SO₃H, Graphene Oxide, GO-Fe₂O₃, GO-SO₃H, WCO

Procedia PDF Downloads 277
1766 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is evidently a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools, before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used as well as deeper discussions on each of the topics, allowing teams to adapt the process to their skills, preferences, and product type. Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants.

Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer

Procedia PDF Downloads 108
1765 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage, and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which hearth coke and the deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and stable BF performance. It is not possible to measure any of the above directly, due to the hostile conditions in the hearth, with its chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal and slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and the heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag entering the hearth. A small drainage opening (taphole) is situated at the bottom of the hearth, and the flow rate of liquids through the taphole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the taphole itself. Heat transfer equations describe the exchange of heat between the various layers of liquid metal and slag and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts the timing of critical events during tapping, and estimates the expected tapping temperatures of metal and slag at preset time intervals. The model is in use at JSPL, India BF-II, and its output is regularly cross-checked with actual tapping data, with which it is in good agreement.
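
As a toy illustration of the accumulation-drainage mass balance such a model solves, the sketch below tracks the hot metal inventory with an assumed constant production rate and an intermittently open taphole. All rates and timings are invented, and the real model additionally covers slag, gas pressure, taphole erosion, and heat transfer.

```python
# Toy hearth mass balance: inventory rises at the production rate and falls
# at the drainage rate whenever the taphole is open.
dt = 60.0                 # time step [s]
t_end = 8 * 3600          # simulate 8 hours
production = 7.0 / 3600   # hot metal production [t/s] (~7 t/h, assumed)
drainage = 12.0 / 3600    # taphole drainage rate [t/s] (assumed)

def tap_open(t):
    """Assumed tapping schedule: taphole open during every other 2 h block."""
    return (int(t) // 7200) % 2 == 1

mass, history = 0.0, []
for step in range(int(t_end / dt)):
    t = step * dt
    mass += production * dt
    if tap_open(t):
        mass = max(0.0, mass - drainage * dt)
    history.append(mass)

print(f"peak hot metal inventory: {max(history):.1f} t")
```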

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 184
1764 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort

Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri

Abstract:

Developmental stuttering affects about 10% of preschool children; despite the high percentage of natural recovery, a quarter of them will become adults who stutter. An effective early intervention should help those children with a high persistence risk. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication, modeling their own fluency. In this study, we give a preliminary measure of the long-term effectiveness of the Psicodizione method on stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected 8 children with at least 3 high-risk persistence factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for the selection were: one parent who stutters (1 pt for the mother; 1.5 pt for the father), male gender, ≥ 4 years old at onset, and ≥ 12 months from onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3-26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency across a wide range of the children's life situations. The minimum worthwhile outcome was set at "mild evidence" on a 5-point Likert scale (1 = mild evidence, 5 = high-severity evidence). A second group of 6 children treated with the Psicodizione method was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all of the following criteria: female gender, symptoms for less than 12 months before treatment, age at onset < 4 years, and neither parent with persistent stuttering. At the time of this follow-up, the children in the high persistence risk group were aged 6-9 years, with a mean of 15 months post-treatment; 2 of them (25%) no longer stuttered, and 3 (37.5%) stuttered mildly, based on parental reports. In the low persistence risk group, the children were aged 4-6 years, with a mean of 14 months post-treatment, and 5 (84%) no longer stuttered (for the past 16 months on average). Thus, 62.5% of children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents confirmed better fluency than before the treatment. The low persistence risk group appears representative of spontaneous recovery. This study's design could help to better evaluate the success of proposed interventions for stuttering preschool children and provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.

Keywords: early treatment, fluency, preschool children, stuttering

Procedia PDF Downloads 216
1763 A Model for a Continuous Professional Development Program for Early Childhood Teachers in Villages: Insights from the Coaching Pilot in Indonesia

Authors: Ellen Patricia, Marilou Hyson

Abstract:

Coaching has shown great potential for strengthening the impact of brief group trainings and helping early childhood teachers solve specific problems at work, with the goal of raising the quality of early childhood services. However, there have been some doubts about the benefits that village teachers can receive from coaching. It is perceived that village teachers may struggle with the thinking skills needed to make coaching beneficial. Furthermore, there are reservations about whether principals and supervisors in villages are open to coaching's facilitative approach, as opposed to the directive approach they have been using. As such, the use of coaching to develop the professionalism of early childhood teachers in villages needs to be examined. The Coaching Pilot for early childhood teachers in Indonesian villages provides insights into the above issues. The Coaching Pilot is part of the ECED Frontline Pilot, a collaboration between the Government of Indonesia and the World Bank with support from the Australian Government (DFAT). The Pilot started with coordinated efforts with the local governments in two districts to select principals and supervisors who had been equipped with basic knowledge about early childhood education to take part in a 2-day coaching training. Afterwards, the participants were asked to complete 25 hours of coaching early childhood teachers who had participated in the Enhanced Basic Training for village teachers. The participants who completed this requirement were then invited to an assessment of their coaching skills. Following that, a qualitative evaluation was conducted using in-depth interviews and Focus Group Discussion techniques. The evaluation focused on the impact of the Coaching Pilot in helping the village teachers develop their professionalism, as well as on the sustainability of the intervention. Results from the evaluation indicated that, although their limited education may constrain their thinking skills, village teachers benefited from the coaching that they received. Moreover, the evaluation results also suggested that, with enough training and support, principals and supervisors in the villages were able to provide an adequate coaching service for the teachers. On top of that, beyond this small start, interest is growing, both within the pilot districts and beyond, due to word of mouth about the benefits that the Coaching Pilot has created. The districts where coaching was piloted have planned to continue the coaching program, since a number of early childhood teachers have asked to be coached, and a number of principals and supervisors have asked to be trained as coaches. Furthermore, the Association for Early Childhood Educators in Indonesia has started to adopt coaching into its programs. Although further research is needed, the Coaching Pilot suggests that coaching can positively impact early childhood teachers in villages, and that village principals and supervisors can become a promising source of future coaches. As such, coaching has significant potential to become a sustainable model for a continuous professional development program for early childhood teachers in villages.

Keywords: coaching, coaching pilot, early childhood teachers, principals and supervisors, village teachers

Procedia PDF Downloads 240
1762 Association between Maternal Personality and Postnatal Mother-to-Infant Bonding

Authors: Tessa Sellis, Marike A. Wierda, Elke Tichelman, Mirjam T. Van Lohuizen, Marjolein Berger, François Schellevis, Claudi Bockting, Lilian Peters, Huib Burger

Abstract:

Introduction: Most women develop a healthy bond with their children; however, adequate mother-to-infant bonding cannot be taken for granted. Mother-to-infant bonding refers to the feelings and emotions experienced by the mother towards her child. It is an ongoing process that starts during pregnancy and develops during the first year postpartum, and likely throughout early childhood. The prevalence of inadequate bonding ranges from 7 to 11% in the first weeks postpartum. An impaired mother-to-infant bond can cause long-term complications for both mother and child. Very little research has been conducted on the direct relationship between the personality of the mother and mother-to-infant bonding. This study explores the associations between maternal personality and postnatal mother-to-infant bonding. The main hypothesis is that there is a relationship between neuroticism and mother-to-infant bonding. Methods: Data for this study came from the Pregnancy Anxiety and Depression Study (2010-2014), which examined symptoms of and risk factors for anxiety and depression during pregnancy and the first year postpartum in 6,220 pregnant women who received primary, secondary, or tertiary care in the Netherlands. The study was expanded in 2015 to investigate postnatal mother-to-infant bonding. For the current research, 3,836 participants were included. During the first trimester of gestation, baseline characteristics as well as personality were measured through online questionnaires. Personality was measured by the NEO Five-Factor Inventory (NEO-FFI), which covers the big five personality traits (neuroticism, extraversion, openness, altruism, and conscientiousness). Mother-to-infant bonding was measured postpartum by the Postpartum Bonding Questionnaire (PBQ). Univariate linear regression analysis was performed to estimate the associations. Results: 5% of the PBQ respondents reported impaired bonding. A statistically significant association was found between neuroticism and mother-to-infant bonding (p < .001): mothers scoring higher on neuroticism reported lower scores on mother-to-infant bonding. In addition, positive correlations were found between mother-to-infant bonding and the personality traits extraversion (b: -.081), openness (b: -.014), altruism (b: -.067), and conscientiousness (b: -.060). Discussion: This study is one of the first to demonstrate a direct association between the personality of the mother and mother-to-infant bonding. A statistically significant relationship was found between neuroticism and mother-to-infant bonding; however, the percentage of variance predictable by a personality dimension is very small. This study has examined one part of the multi-factorial topic of mother-to-infant bonding and offers more insight into this rarely investigated and complex matter. For midwives, it is important to recognize the risks of impaired bonding and subsequently improve policy for women at risk.

Keywords: mother-to-infant bonding, personality, postpartum, pregnancy

Procedia PDF Downloads 364
1761 Analysis of Road Accidents in India 2016 to 2021

Authors: Ajin Frank J., Shridevi Jeevan Kamble

Abstract:

The primary objective of this research paper is to identify significant patterns and insights in road accident data in India spanning from 2016 to 2021. The study reveals that the frequency of accidents, injuries, and fatalities varies depending on numerous factors, such as the type of vehicle, the time of the accident, the age of the vehicle, and the age and gender of the driver, among others. Notably, the COVID-19 pandemic and subsequent lockdown measures significantly impacted these figures. One of the key findings of the analysis is the rise in the number of accidents and deaths involving two-wheeler vehicles, particularly among younger individuals, in major states across India. This trend is of concern, and there is a need for increased awareness and precautions to prevent these types of accidents. Additionally, with the imminent rise of electric vehicles in the coming years, ensuring their safety on the road is a critical matter. Another significant factor contributing to road accidents is the age of vehicles. As vehicles age, their handling becomes more challenging compared to new ones, increasing the risk of accidents. Thus, it is imperative for the government to impose stringent regulations and laws to reduce these accident-causing factors and to raise awareness among individuals about taking the necessary precautions to avoid accidents. This study highlights the importance of understanding the underlying patterns and factors contributing to road accidents in India. With this knowledge, policymakers and stakeholders can develop effective strategies to address these challenges and promote road safety, ultimately reducing the number of accidents, injuries, and fatalities on Indian roads.

Keywords: road accidents, India, road safety, accident deaths

Procedia PDF Downloads 84
1760 Combination of Silver-Curcumin Nanoparticle for the Treatment of Root Canal Infection

Authors: M. Gowri, E. K. Girija, V. Ganesh

Abstract:

Background and Significance: Among dental infections, inflammation and infection of the root canal are common in all age groups. Currently, the management of root canal infections involves cleaning the canal with powerful irrigants, followed by the application of an intracanal medicament. Though these treatments have been in vogue for a long time, root canal failures do occur. Treatment of root canal infections is limited by the anatomical complexity of the canal, in terms of its small micrometer-scale volume, and by the poor penetration of drugs. Thus, infections of the root canal are a challenge that demands the development of new agents that can eradicate C. albicans. Methodology: In the present study, we synthesized silver-curcumin nanoparticles and screened them against Candida albicans. Detailed molecular studies of the effect of silver-curcumin nanoparticles on C. albicans pathogenicity were carried out. Morphological cell damage and the antibiofilm activity of silver-curcumin nanoparticles against C. albicans were studied using scanning electron microscopy (SEM). Biochemical evidence for membrane damage was obtained using flow cytometry. Further, the antifungal activity of the silver-curcumin nanoparticles was evaluated in an ex vivo dentinal tubule infection model. Results: Screening data showed that the silver-curcumin nanoparticles were active against C. albicans. The nanoparticles exerted a time-kill effect and a post-antifungal effect. When used in combination with fluconazole or nystatin, the silver-curcumin nanoparticles produced a decrease in the minimum inhibitory concentration (MIC) of both drugs. In-depth molecular studies showed that the silver-curcumin nanoparticles inhibited yeast-to-hyphae (Y-H) conversion in C. albicans. Further, SEM images of C. albicans showed that the silver-curcumin nanoparticles caused membrane damage and inhibited biofilm formation. Biochemical evidence for membrane damage was confirmed by increased propidium iodide (PI) uptake in flow cytometry. The antifungal activity was then evaluated in an ex vivo dentinal tubule infection model, which mimics human tooth root canal infection. Confocal laser scanning microscopy studies showed the eradication of C. albicans and a reduction in colony-forming units (CFU) after 24 h of treatment in the infected tooth samples in this model. Conclusion: The results of this study can pave the way for developing new antifungal agents with well-deciphered mechanisms of action, and silver-curcumin nanoparticles can be a promising antifungal agent or medicament against root canal infection.

Keywords: C. albicans, ex vivo dentine model, inhibition of biofilm formation, root canal infection, yeast to hyphae conversion inhibition

Procedia PDF Downloads 208
1759 Assessment of the Efficacy of Routine Medical Tests in Screening Medical Radiation Staff in Shiraz University of Medical Sciences Educational Centers

Authors: Z. Razi, S. M. J. Mortazavi, N. Shokrpour, Z. Shayan, F. Amiri

Abstract:

Long-term exposure to low doses of ionizing radiation occurs in radiation health care workplaces. Although doses in the health professions are generally very low, there are still matters of concern. The radiation safety program promotes occupational radiation safety through accurate and reliable monitoring of radiation workers in order to manage radiation protection effectively. To achieve this goal, it has become mandatory to implement periodic health examinations, whereby working populations with a common occupational radiation history are screened on the basis of hematological alterations. This paper calls into question the effectiveness of blood component analysis as a screening program, which is mandatory for medical radiation workers in some countries. This study details the distribution and trends of changes in blood components, including white blood cells (WBCs), red blood cells (RBCs), and platelets, as well as the cumulative doses received from occupational radiation exposure. The study was conducted among 199 participants and 100 control subjects at the medical imaging departments of the central hospital of Shiraz University of Medical Sciences during the years 2006–2010. Descriptive and analytical statistics were used for data analysis, with P < 0.05 considered statistically significant. The results show that there is no significant difference between the radiation workers and controls regarding WBC and platelet counts over the 4 years. We also found no statistically significant difference between the two groups with respect to RBCs; RBCs were analyzed separately by gender, because of the lower reference range for normal RBC levels in women compared to men, and again no statistically significant difference was observed. Moreover, in a separate evaluation of WBC count against the personnel's work experience and annual exposure dose, the findings showed no linear correlation between the three variables. Since the hematological findings were within the range of control levels, it can be concluded that the radiation dose (which did not exceed 7.58 mSv in this study) was too small to stimulate any quantifiable change in medical radiation workers' blood counts. Thus, the use of a more accurate screening method, based on the working profile of each radiation worker and his or her accumulated dose, is suggested. In addition, the complexity of radiation-induced effects and the influence of various factors on blood count alterations should be taken into account.
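
The group comparison behind these results can be made concrete with a small sketch: a Welch t-test on simulated WBC counts for workers and controls, judged against the 0.05 significance level the authors used. The numbers are invented, and the authors' exact test may have differed.

```python
# Hedged sketch: two-group comparison of a blood index at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
workers = rng.normal(6.8, 1.5, 199)   # simulated WBC counts [10^3/uL], n matches the study
controls = rng.normal(6.9, 1.4, 100)

t, p = stats.ttest_ind(workers, controls, equal_var=False)  # Welch t-test
verdict = "significant difference" if p < 0.05 else "no significant difference"
print(f"Welch t = {t:.2f}, p = {p:.3f} -> {verdict}")
```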

Keywords: blood cell count, mandatory testing, occupational exposure, radiation

Procedia PDF Downloads 461
1758 Tracing the Developmental Repertoire of the Progressive: Evidence from L2 Construction Learning

Authors: Tianqi Wu, Min Wang

Abstract:

Research investigating language acquisition from a constructionist perspective has demonstrated that language is learned as constructions at various linguistic levels, a process related to factors of frequency, semantic prototypicality, and form-meaning contingency. However, previous research on construction learning has tended to focus on clause-level constructions, such as verb argument constructions; few attempts have been made to study morpheme-level constructions such as the progressive construction, which is regarded as a source of acquisition problems for English learners from diverse L1 backgrounds, especially those whose L1 lacks an equivalent construction, as is the case for German and Chinese. To trace the developmental trajectory of Chinese EFL learners' use of the progressive with respect to verb frequency, verb-progressive contingency, and verbal prototypicality and generality, a learner corpus consisting of three sub-corpora representing three different English proficiency levels was extracted from the Chinese Learners of English Corpora (CLEC). As the reference point, a native-speaker corpus extracted from the Louvain Corpus of Native English Essays was also established. All the texts were annotated with the C7 tagset by part-of-speech tagging software. After annotation, all valid progressive hits were retrieved with AntConc 3.4.3, followed by a manual check. The frequency-related data showed that, from the lowest to the highest proficiency level, (1) the type-token ratio increased steadily from 23.5% to 35.6%, getting closer to the 36.4% of the native-speaker corpus and indicating a wider use of verbs in the progressive; (2) the normalized entropy value rose from 0.776 to 0.876, moving towards the 0.886 of the native-speaker corpus and revealing that upper-intermediate learners exhibited a more even distribution and more productive use of verbs in the progressive; and (3) activity verbs (i.e., verbs with prototypical progressive meanings, like running and singing) dropped from 59% to 34%, while non-prototypical verbs such as state verbs (e.g., being and living) and achievement verbs (e.g., dying and finishing) were increasingly used in the progressive. Apart from the raw frequency analyses, collostructional analyses were conducted to quantify verb-progressive contingency and to determine which verbs were distinctively associated with the progressive construction. The results were in line with the raw frequency findings: the contingency between the progressive and non-prototypical verbs, represented by light verbs (e.g., going, doing, making, and coming), increased as English proficiency proceeded. These findings altogether suggest that beginning Chinese EFL learners are less productive in using the progressive construction: they are constrained by a small set of verbs with concrete and typical progressive meanings (e.g., the activity verbs). With increasing English proficiency, however, their use of the progressive begins to spread to marginal members such as the light verbs.
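
Two of the distributional measures reported here, the type-token ratio and normalized entropy, are easy to make concrete. The sketch below computes both for an invented list of verbs retrieved from progressive constructions.

```python
# Type-token ratio and normalized entropy of verbs in the progressive (toy data).
from collections import Counter
from math import log2

tokens = ["running", "singing", "going", "going", "doing", "making",
          "running", "living", "going", "being"]
counts = Counter(tokens)

ttr = len(counts) / len(tokens)                     # types / tokens
probs = [c / len(tokens) for c in counts.values()]
entropy = -sum(p * log2(p) for p in probs)
norm_entropy = entropy / log2(len(counts))          # 1.0 = perfectly even distribution
print(f"TTR = {ttr:.3f}, normalized entropy = {norm_entropy:.3f}")
```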

Keywords: construction learning, corpus-based, progressives, prototype

Procedia PDF Downloads 128
1757 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research answers the question of which transit network structure is most suitable to serve specific demand requirements under an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way, by assuming that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to determine the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
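
Of the two dispersion measures used in the second step, the Gini coefficient of concentration has a compact closed form. The sketch below computes it for an invented distribution of trip ends per zone (0 = trips spread perfectly evenly, values near 1 = trips concentrated in one zone).

```python
# Gini coefficient of trip-end concentration across zones (toy data).
import numpy as np

trips = np.array([120, 80, 60, 30, 10, 5, 5])   # trip ends per zone, invented
x = np.sort(trips)                              # ascending order
n = x.size
gini = 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n
print(f"Gini = {gini:.3f}")
```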

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 230
1756 Spatial Temporal Change of COVID-19 Vaccination Condition in the US: An Exploration Based on Space Time Cube

Authors: Yue Hao

Abstract:

COVID-19 vaccines protect not only individuals but also society as a whole. Understanding the change and trend of vaccination conditions may therefore shed some light on revising and updating policies for large-scale public health promotion, in order to lead and encourage the adoption of COVID-19 vaccines. However, vaccination status changes over time and varies from place to place, hiding patterns that were not fully explored in previous research. In our research, we took advantage of the spatial-temporal analytical methods of geographic information science and captured the spatial-temporal changes in COVID-19 vaccination status in the United States during 2020 and 2021. After conducting emerging hot spot analysis on both state-level data for the US and county-level data for California, we found that: (1) at the macroscopic level, there is a continuously increasing trend in the vaccination rate in the US, but the spatial clusters vary at the county level; (2) spatial hotspots and clusters with high vaccination amounts over time were concentrated around the west and east coasts, in regions such as California and New York City that are densely populated and economically strong; (3) in terms of the growing trend of the daily vaccination amount, Los Angeles County alone shows very high counts and dramatic increases over time. We hope that our findings can provide valuable guidance for future decision-making on vaccination policies as well as direct new research on related topics.
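
Emerging hot spot analysis typically classifies each bin of a space time cube by combining a local hot spot statistic with a Mann-Kendall trend test across time steps. The snippet below is a minimal sketch of the Mann-Kendall component only, applied to a hypothetical county series; it is not the authors' full workflow.

```python
import math

def mann_kendall_z(series):
    """Standardized Mann-Kendall statistic for a time series: positive
    z indicates an increasing trend across the space-time bins of one
    location. Assumes no tied values for the variance formula."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

# Hypothetical weekly vaccination rates for one county bin.
print(round(mann_kendall_z([0.05, 0.08, 0.12, 0.18, 0.25, 0.31]), 2))
```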

Keywords: COVID-19 vaccine, GIS, space time cube, spatial-temporal analysis

Procedia PDF Downloads 79
1755 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer while remaining robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground, at a height corresponding to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when these are applied at heights close to the measuring plane. The roughness elements also cause strong fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
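
The integral boundary layer parameters named above follow directly from a measured velocity profile: the displacement thickness is δ* = ∫(1 − u/U) dy, the momentum thickness is θ = ∫(u/U)(1 − u/U) dy, and the form factor is H = δ*/θ. The sketch below evaluates these for a hypothetical 1/7th-power-law profile standing in for PIV data.

```python
import numpy as np

def _trapz(f, x):
    """Trapezoidal integration of samples f over positions x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def boundary_layer_parameters(y, u, u_inf):
    """Integral parameters of a profile u(y): displacement thickness,
    momentum thickness, and form factor H = delta*/theta."""
    r = u / u_inf
    delta_star = _trapz(1.0 - r, y)       # mass-flow deficit thickness
    theta = _trapz(r * (1.0 - r), y)      # momentum deficit thickness
    return delta_star, theta, delta_star / theta

# Hypothetical 1/7th-power-law profile as a stand-in for PIV data.
delta = 0.05                       # boundary layer thickness, m
y = np.linspace(0.0, delta, 400)   # wall-normal positions, m
u_inf = 60.0                       # free-stream relative speed, m/s
u = u_inf * (y / delta) ** (1.0 / 7.0)
d_star, theta, H = boundary_layer_parameters(y, u, u_inf)
print(f"delta*={d_star*1e3:.2f} mm, theta={theta*1e3:.2f} mm, H={H:.2f}")
# Expected for this profile: delta*/delta = 1/8, theta/delta = 7/72, H ~ 1.29
```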

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 305
1754 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to two issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology makes it possible to run several processes on the same machine in an isolated manner, solving both the incompatibility of runtime dependencies and the security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
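
The a1/b1 pairing above amounts to zipping the instance lists of two communicating services onto shared machines. The snippet below is a minimal sketch of that rule using the hypothetical instance names from the example; it is not an API of the i2kit prototype.

```python
from itertools import zip_longest

def embed(service_a, service_b):
    """Pair up instances of two communicating microservices so that
    each pair shares a machine and talks over localhost, avoiding a
    load balancer between them. Assumes both services run in
    containers, so runtime dependencies cannot conflict."""
    machines = {}
    for i, (a, b) in enumerate(zip_longest(service_a, service_b), start=1):
        machines[f"m{i}"] = [x for x in (a, b) if x is not None]
    return machines

# Instances of services A and B from the example above.
print(embed(["a1", "a2"], ["b1", "b2"]))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```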

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 203
1753 Impact of Extension Services on Pastoralists’ Vulnerability to Climate Change in Northern Guinea Savannah of Nigeria

Authors: Sidiqat A. Aderinoye-Abdulwahab, Lateef L. Adefalu, Jubril O. Animashaun

Abstract:

Pastoralists in Nigeria are situated in dry regions where water and pasture for livestock are particularly scarce, as well as in areas with poor availability of social amenities and infrastructure. This study therefore explored how extension services could be used to reduce the exposure of nomads to the effects of seasonality, climate change, and poor environmental conditions. The study was carried out in the Northern Guinea Savannah region of Nigeria because pastoralists have settled there in large numbers due to desertification and low rainfall in the arid regions. A multi-stage sampling procedure was used to select two states (Kwara and Nassarawa) in the region, and a total of 63 respondents were chosen by simple random sampling. Focus group discussions and a questionnaire were used to gather information, while the data were analysed using content analysis. The facilities required by the sampled households are a milking machine, a cheese-making machine, and preservatives to increase the shelf life of cheese, while the extension services required are demonstrations on cheese making as well as training and seminars on animal husbandry. Additionally, pastoralists’ livestock often encroach on farmers’ plots, which usually results in pastoralist-farmer conflicts. The study thus recommends diversification of economic activity from livestock to non-livestock related activities as well as the creation of grazing routes to reduce pastoralist-farmer conflict.

Keywords: arid region, coping strategies, livestock, livelihood

Procedia PDF Downloads 390
1752 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N

Authors: Oindrila Nath, S. Sridharan

Abstract:

Long-term (2002-2012) temperature and ozone measurements by the Sounding of Atmosphere by Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study their long-term changes and their responses to the solar cycle, the quasi-biennial oscillation (QBO) and the El Niño Southern Oscillation (ENSO). The region is selected to provide more accurate long-term trends and variabilities than were previously possible with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights, whereas continuous data sets of SABER temperature and ozone are available. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and of 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km; otherwise, no significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU) and a negative response to ENSO (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is more strongly positive (3.74 K/100 SFU) in the upper mesosphere, its response to ENSO is negative around 80 km and positive around 90-100 km, and its response to the QBO is insignificant at most heights. The composite monthly mean of the ozone volume mixing ratio shows maximum values, around 10 ppmv, during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km). The composite monthly mean of temperature shows a semi-annual variation with large values (~250-260 K) in equinox months and lower values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the SAO becomes weaker above 55 km. The semi-annual variation reappears at 80-90 km, with large values in the spring equinox and winter months. In the upper mesosphere (90-100 km), low temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere around 60-85 km. The phase profiles of both the SAO and AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the results obtained will be presented during the meeting.
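
A regression analysis of the kind described above can be sketched as an ordinary least-squares fit of each monthly series to a linear trend plus solar, QBO, and ENSO indices. The snippet below uses synthetic inputs purely for illustration; actual inputs would come from the SABER retrievals and standard index time series.

```python
import numpy as np

def regress(t_years, series, f107, qbo, soi):
    """Least-squares fit of a monthly series to a constant, a linear
    trend, and solar (F10.7), QBO, and ENSO (SOI) indices."""
    X = np.column_stack([np.ones_like(t_years), t_years, f107, qbo, soi])
    coef, *_ = np.linalg.lstsq(X, series, rcond=None)
    return {"trend_per_decade": 10.0 * coef[1],   # coef[1] is per year
            "solar_per_100sfu": 100.0 * coef[2],  # assuming f107 in SFU
            "qbo_response": coef[3],
            "enso_per_soi": coef[4]}

# Synthetic 11-year monthly inputs, loosely mimicking the magnitudes
# reported above (-0.5 K/decade trend, ~3.7 K/100 SFU solar response).
rng = np.random.default_rng(1)
t = np.arange(132) / 12.0
f107 = 120 + 40 * np.sin(2 * np.pi * t / 11) + rng.normal(0, 5, 132)
qbo = 10 * np.sin(2 * np.pi * t / 2.3) + rng.normal(0, 2, 132)
soi = rng.normal(0, 1, 132)
temp = 250 - 0.05 * t + 0.037 * f107 - 0.1 * soi + rng.normal(0, 0.5, 132)
print(regress(t, temp, f107, qbo, soi))
```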

Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature

Procedia PDF Downloads 410
1751 Testing the Impact of the Nature of Services Offered on Travel Sites and Links on Traffic Generated: A Longitudinal Survey

Authors: Rania S. Hussein

Abstract:

Background: This study aims to determine the evolution of service provision by Egyptian travel sites and how these services changed in their level of sophistication over the ten-year period of the study. To the author’s best knowledge, this is the first longitudinal study that focuses on such an extended time frame. Additionally, the study attempts to determine the popularity of these websites through the number of links to them; links may be viewed as the equivalent of a referral or word of mouth in an online context. Both popularity and the nature of the services provided are used to explain the traffic on these sites. In examining the nature of services provided, the website itself is viewed as an overall service offering composed of different travel products and services. Method: This study uses content analysis in the form of a small-scale survey of 30 Egyptian travel agents’ websites to examine whether Egyptian travel websites are static or dynamic in the services they provide and whether those services are simple or sophisticated. To determine the level of sophistication of these travel sites, the nature and composition of the products and services offered were first examined using a framework adapted from Kotler’s (1997) ‘five levels of a product’. The target group for this study consists of companies that handle inbound tourism. Four rounds of data collection were conducted over a period of 10 years: two rounds in 2004 and two rounds in 2014. Data from the travel agents’ sites were collected over a two-week period in each of the four rounds. Besides data on website features, data were also collected on the popularity of these websites through Alexa, a software service that reports the traffic rank and number of links for each site. Regression analysis was used to test the effect of links and services, as independent variables, on traffic, the dependent variable of this study. Findings: Results indicate that as companies moved from having simple websites with basic travel information to being more interactive, the number of visitors, reflected in traffic, and the popularity of those sites increased, as shown by the number of links. Results also show that travel companies use the web much more for promotion than for distribution, since most travel agents use it primarily for information provision. This content analysis study taps an unexplored area and provides useful insights for marketers on how they can generate more traffic to their websites, both by developing distinctive content on these sites and by focusing on their visibility, that is, the links to their sites.

Keywords: levels of a product, popularity, travel, website evolution

Procedia PDF Downloads 321
1750 Structural and Biochemical Characterization of Red and Green Emitting Luciferase Enzymes

Authors: Wael M. Rabeh, Cesar Carrasco-Lopez, Juliana C. Ferreira, Pance Naumov

Abstract:

Bioluminescence, the emission of light from a biological process, is found in various living organisms including bacteria, fireflies, beetles, fungi and different marine organisms. Luciferase is an enzyme that catalyzes a two-step oxidation of luciferin in the presence of Mg2+ and ATP to produce oxyluciferin, releasing energy in the form of light. The luciferase assay is used in biological research and clinical applications for in vivo imaging, cell proliferation assays, and protein folding and secretion analysis. The luciferase enzyme consists of two domains: a large N-terminal domain (residues 1-436) connected to a small C-terminal domain (residues 440-544) by a flexible loop that functions as a hinge for opening and closing the active site. The two domains are separated by a large cleft housing the active site, which closes after binding the substrates luciferin and ATP. Even though all insect luciferases catalyze the same chemical reaction and share 50% to 90% sequence homology and high structural similarity, they emit light of different colors, from green at 560 nm to red at 640 nm. Currently, the majority of structural and biochemical studies have been conducted on green-emitting firefly luciferases. To address the color emission mechanism, we expressed and purified two luciferase enzymes with blue-shifted green and red emission from the indigenous Brazilian species Amydetes fanestratus and Phrixothrix, respectively. The two enzymes naturally emit light of different colors, making them an excellent system for studying the color-emission mechanism of luciferases, as the currently proposed mechanisms are based on mutagenesis studies. Using a vapor-diffusion method and a high-throughput approach, we crystallized both enzymes and solved their crystal structures by X-ray crystallography, at 1.7 Å and 3.1 Å resolution respectively. The free enzyme adopted two open conformations in the crystallographic unit cell that differ from the previously characterized firefly luciferase. The blue-shifted green luciferase crystallized as a monomer, similar to other luciferases reported in the literature, while the red luciferase crystallized as an octamer and was also purified as an octamer in solution. The octamer conformation is the first of its kind for any insect luciferase and might be related to the red color emission. Structurally designed mutations confirmed the importance of the transition between the open and closed conformations in fine-tuning the color, and the characterization of other interesting mutants is underway.

Keywords: bioluminescence, enzymology, structural biology, x-ray crystallography

Procedia PDF Downloads 326
1749 Separate Collection System of Recyclables and Biowaste Treatment and Utilization in Metropolitan Area Finland

Authors: Petri Kouvo, Aino Kainulainen, Kimmo Koivunen

Abstract:

The separate collection system for recyclable wastes in the Helsinki region was ranked second best among European capitals. The collection system covers paper, cardboard, glass, metals and biowaste; residual waste is collected and used in energy production. The collection system, excluding paper, is managed by the Helsinki Region Environmental Services HSY, a public organization owned by four municipalities (Helsinki, Espoo, Kauniainen and Vantaa), while paper collection is handled by the producer responsibility scheme. The efficiency of the collection system in the Helsinki region relies on good coverage of door-to-door collection. All properties with 10 or more dwelling units are required to source-separate biowaste and cardboard, which covers about 75% of the population of the area; the obligation is extended to glass and metal in properties with 20 or more dwelling units. Other success factors include public awareness campaigns and a fee system that encourages recycling. As a result of waste management regulations for the source separation of recyclables and biowaste, a recycling rate of nearly 50 percent for household waste has been reached. For households and small and medium-sized enterprises, a fleet of five sorting stations is available, and more than 50 percent of the waste received at sorting stations is utilized as material. The separate collection of plastic packaging in Finland began in 2016 within the producer responsibility scheme, with HSY supplementing the national bring-point system with door-to-door collection in pilot operations from spring 2016. The results of the plastic packaging pilot have been encouraging: by the end of 2016, over 3500 apartment buildings had joined the pilot, and more than 1800 tons of plastic packaging had been collected separately. In the summer of 2015, a novel partial flow digestion process combining digestion and tunnel composting was adopted for the management of source-separated household and commercial biowaste. The product gas from the digestion process is converted into heat and electricity in a piston engine and an organic Rankine cycle process with very high overall efficiency. This paper describes this efficient collection system and discusses key success factors, main obstacles and lessons learned, as well as the partial flow process for biowaste management.

Keywords: biowaste, HSY, MSW, plastic packages, recycling, separate collection

Procedia PDF Downloads 217
1748 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model

Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis

Abstract:

Magnetic navigation of a drug inside the human vessels is an important concept, since the drug is delivered to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors influencing the efficiency of magnetic nanoparticles in biomedical magnetic driving applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered: the magnetic force from the MRI’s main magnet static field as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces between the aggregated nanoparticles and the wall and the Stokes drag force on each particle are considered, while only spherical particles are used in this study. In addition, the gravitational force and the force due to buoyancy are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of particle motion. To find the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field; at the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles as they are navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles onto the desired trajectory with an efficiency of 80-90%; on the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.
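
The optimization loop described above can be sketched independently of the CFD. Below is a minimal (mu, lambda) evolution strategy as a simplified stand-in for the CMA-ES the authors use; the OpenFOAM evaluation is mocked by a quadratic cost, and the gradient-field values and target are hypothetical.

```python
import numpy as np

def evolve(cost, x0, sigma=0.5, pop=16, keep=4, iters=50, seed=0):
    """Minimal (mu, lambda) evolution strategy, a simplified stand-in
    for CMA-ES: sample candidate gradient-field parameters, keep the
    best, and recenter the search on their mean."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        candidates = mean + sigma * rng.normal(size=(pop, mean.size))
        best = sorted(candidates, key=cost)[:keep]
        mean = np.mean(best, axis=0)
        sigma *= 0.95  # shrink the search radius as the loop converges
    return mean

# Mock cost: in the actual workflow each evaluation would run an
# OpenFOAM simulation and measure the particles' mean distance from
# the desired trajectory; here a quadratic bowl stands in for it.
target = np.array([1.2, -0.4, 0.7])  # hypothetical gradient components
print(evolve(lambda g: float(np.sum((g - target) ** 2)),
             x0=[0.0, 0.0, 0.0]))    # converges toward target
```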

Keywords: artery, drug, nanoparticles, navigation

Procedia PDF Downloads 107