Search results for: traditional terms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10693

1273 Investigations Of The Service Life Of Different Material Configurations At Solid-lubricated Rolling Bearings

Authors: Bernd Sauer, Michel Werner, Stefan Emrich, Michael Kopnarski, Oliver Koch

Abstract:

Friction reduction is an important aspect of sustainability and the energy transition. Rolling bearings are used in many applications in which components move relative to each other. Conventionally lubricated rolling bearings serve a wide range of applications but are not suitable under certain conditions: conventional lubricants such as grease or oil cannot be used at very high or very low temperatures, and they evaporate at very low ambient pressure, e.g. in a high-vacuum environment, making the use of solid-lubricated bearings unavoidable. With solid-lubricated bearings, predicting the service life becomes more complex. While the end of the service life of conventionally lubricated bearings is mainly caused by material fatigue of the bearing components, solid-lubricated bearings fail the moment the lubrication layer is worn away and the rolling elements come into direct contact with the raceway during operation. To extend the service life of these bearings beyond that of the initial coating, transfer lubrication is recommended: the balls run in pockets or sacrificial cages, absorb lubricant from them, and carry it into the tribological contact. This contribution presents the results of wear and service life tests on solid-lubricated rolling bearings with sacrificial cage pockets. The cage of the bearing consists of a polyimide (PI) matrix with 15% molybdenum disulfide (MoS₂) and serves as a lubrication depot alongside the silver-coated balls. The bearings are tested under high vacuum (pE < 10⁻² Pa) at a temperature of 300 °C on a four-bearing test rig. First, investigations of the bearing system within the bearing service life are presented, and the torque curve, the wear mass and surface analyses are discussed.
With regard to wear, the bearing rings tend to gain mass over the service life of the bearing, while the balls and the cage tend to lose mass. With regard to the elemental surface properties, the surfaces of the bearing rings and balls are examined in terms of the elements present on them. Furthermore, service life investigations with different material pairings are presented, with the focus on the service life achieved in addition to the torque curve, wear development and surface analysis. It was shown that MoS₂ in the cage leads to a longer service life, while a silver (Ag) coating on the balls has no positive influence on the service life and even appears to reduce it in combination with MoS₂.

Keywords: ball bearings, molybdenum disulfide, solid lubricated bearings, solid lubrication mechanisms

Procedia PDF Downloads 28
1272 The Challenges of Well Integrity in Plugged and Abandoned Wells for Offshore CO₂ Storage Site Containment

Authors: Siti Noor Syahirah Mohd Sabri

Abstract:

The oil and gas industry is committed to net-zero carbon emissions because the consequences of climate change could be catastrophic unless addressed soon. One way of reducing CO₂ emissions is to inject the gas into a depleted reservoir buried underground. This greenhouse-gas reduction technique significantly reduces the CO₂ released into the atmosphere. In general, depleted oil and gas reservoirs provide readily available sites for the storage of CO₂ in offshore areas, mainly because the hydrocarbons have been optimally produced and voids exist for effective CO₂ storage; this makes them good candidates for CO₂ injector well locations. Geological storage sites are often evaluated in terms of capacity, injectivity and containment. Leakage through the cap rock or existing wells is the main concern in depleted fields. In order to develop these fields as CO₂ storage sites, the long-term integrity of the wells drilled in them must be ascertained to ensure good CO₂ containment. Well integrity is often defined as the ability to contain fluids without significant leakage throughout the project lifecycle. Most plugged and abandoned (P&A) wells in Peninsular Malaysia were drilled 20-30 years ago and were not designed to withstand downhole conditions with >50 vol% CO₂ and CO₂/H₂O mixtures. In addition, corrosion-resistant alloy (CRA) tubulars and CO₂-resistant cement were not used during well construction. The reservoir pressure and temperature conditions may have further degraded the material strength and elevated the corrosion rate. Understanding all the uncertainties that may have affected cement-casing bonds, such as the quality of cement behind the casing, subsidence effects and the corrosion rate, is the first step toward well integrity evaluation. Secondly, all the uncertainties involved need to be properly quantified to ensure that the long-term underground CO₂ storage objectives are achieved.
This paper discusses the challenges associated with estimating the performance of well barrier elements in existing P&A wells. Risk ranking of the existing P&A wells is to be carried out in order to ensure that the integrity of the storage site is maintained for long-term CO₂ storage. High-risk P&A wells are to be re-entered to restore well integrity and to reduce the risk of future leakage. In addition, the requirement to design a fit-for-purpose monitoring and mitigation technology package for potential CO₂ leakage/seepage in the marine environment is discussed. This holistic approach will ensure that integrity is maintained and CO₂ remains contained underground for years to come.

Keywords: CCUS, well integrity, CO₂ storage, offshore

Procedia PDF Downloads 74
1271 The Economic Burden of Mental Disorders: A Systematic Review

Authors: Maria Klitgaard Christensen, Carmen Lim, Sukanta Saha, Danielle Cannon, Finley Prentis, Oleguer Plana-Ripoll, Natalie Momen, Kim Moesgaard Iburg, John J. McGrath

Abstract:

Introduction: About a third of the world’s population will develop a mental disorder over their lifetime. A mental disorder imposes a large burden of health loss and cost on the individual, but also on society through treatment costs, production loss and caregivers’ costs. The objective of this study is to synthesize the international published literature on the economic burden of mental disorders. Methods: Systematic literature searches were conducted in the databases PubMed, Embase, Web of Science, EconLit, NHS York Database and PsycINFO using key terms for cost and mental disorders. Searches were restricted to publications from 1980 to May 2019. The inclusion criteria were: (1) cost-of-illness studies or cost analyses, (2) diagnosis of at least one mental disorder, (3) samples based on the general population, and (4) outcomes in monetary units. 13,640 publications were screened by title/abstract and 439 articles were full-text screened by at least two independent reviewers. 112 articles were included from the systematic searches and 31 articles from snowball searching, giving a total of 143 included articles. Results: Information about diagnosis, diagnostic criteria, sample size, age, sex, data sources, study perspective, study period, costing approach, cost categories, discount rate, production loss method and cost unit was extracted. The vast majority of the included studies were from Western countries, and only a few were from Africa and South America. The disorder group most often investigated was mood disorders, followed by schizophrenia and neurotic disorders; the group least examined was intellectual disabilities, followed by eating disorders. The preliminary results show substantial variety in the perspectives, methodologies, cost components and outcomes used in the included studies.
An online tool is under development that will enable the reader to explore the published cost information by type of mental disorder, subgroup, country, methodology, and study quality. Discussion: This is the first systematic review synthesizing the economic cost of mental disorders worldwide. The paper will provide an important and comprehensive overview of the economic burden of mental disorders, and the output from this review will inform policymaking.
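Cost-of-illness studies of the kind reviewed here typically discount future cost streams to present value using the discount rates extracted above; a minimal sketch of the standard formula, with purely illustrative figures rather than values from any included study:

```python
def present_value(costs, rate):
    """Discount a stream of annual costs (year 0 first) to present value:
    PV = sum over t of cost_t / (1 + rate)^t."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs))

# With a zero discount rate the present value is just the undiscounted sum.
print(present_value([100.0, 100.0, 100.0], 0.0))   # 300.0
# A positive rate shrinks later-year costs more than earlier ones.
print(present_value([100.0, 100.0, 100.0], 0.03))
```

Differences in the chosen rate are one reason cost estimates vary so widely across the included studies.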

Keywords: cost-of-illness, health economics, mental disorders, systematic review

Procedia PDF Downloads 114
1270 Influence of High-Resolution Satellite Attitude Parameters on Image Quality

Authors: Walid Wahballah, Taher Bazan, Fawzy Eltohamy

Abstract:

One of the important functions of a satellite attitude control system is to provide the pointing accuracy and attitude stability that optical remote sensing satellites require to achieve good image quality. Although they offer noise reduction and increased sensitivity, the time delay and integration (TDI) charge-coupled devices (CCDs) used in high-resolution satellites (HRS) are prone to introducing large amounts of pixel smear due to instability of the line of sight. During on-orbit imaging, as a result of the Earth’s rotation and satellite platform instability, the moving direction of the TDI-CCD linear array and the imaging direction of the camera diverge. The speed at which the image moves across the image (focal) plane is the image motion velocity, whereas the angle between the two directions is known as the drift angle (β). The drift angle arises from the rotation of the Earth around its axis during satellite imaging; it affects the geometric accuracy and consequently degrades image quality. Therefore, the image motion velocity vector and the drift angle are two important factors in assessing the image quality of TDI-CCD-based optical remote sensing satellites. A model for estimating the image motion velocity and the drift angle in HRS is derived. The six satellite attitude control parameters in the derived model are the roll angle φ, pitch angle θ, yaw angle ψ, roll angular velocity φ̇, pitch angular velocity θ̇ and yaw angular velocity ψ̇. The influence of these attitude parameters on image quality is analyzed by establishing a relationship between the image motion velocity vector, the drift angle and the six satellite attitude parameters. This influence is assessed by the presented model in terms of the modulation transfer function (MTF) in both the cross-track and along-track directions.
Three different cases representing the effect of pointing accuracy (φ, θ, ψ) bias are considered using four different sets of typical pointing accuracy values, while the attitude stability parameters are kept ideal. In the same manner, the influence of satellite attitude stability (φ̇, θ̇, ψ̇) on image quality is analyzed for ideal pointing accuracy parameters. The results reveal that cross-track image quality is influenced seriously by the yaw angle bias and the roll angular velocity bias, while along-track image quality is influenced only by the pitch angular velocity bias.
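The smear-induced MTF degradation discussed above is commonly modelled, under the assumption of uniform linear image motion over a smear length d during integration, as a sinc function of spatial frequency. A minimal illustrative sketch of that textbook relation (not the authors' derived model, which also involves the drift angle and the six attitude parameters):

```python
import numpy as np

def smear_mtf(f, d):
    """MTF of uniform linear image motion: |sinc(f * d)|, where f is the
    spatial frequency (cycles/pixel) and d is the smear length (pixels).
    np.sinc(x) is the normalized sinc, sin(pi*x) / (pi*x)."""
    return np.abs(np.sinc(f * d))

freqs = np.linspace(0.0, 0.5, 6)        # up to the Nyquist frequency
print(smear_mtf(freqs, d=1.0))          # MTF = 1 at f = 0 (no contrast loss)
```

Larger smear lengths pull the sinc's first null toward lower frequencies, which is why line-of-sight instability so directly limits achievable resolution.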

Keywords: high-resolution satellites, pointing accuracy, attitude stability, TDI-CCD, smear, MTF

Procedia PDF Downloads 385
1269 The Development of Modernist Chinese Architecture from the Perspective of Cultural Regionalism in Taiwan: Spatial Practice by the Fieldoffice Architects

Authors: Yilei Yu

Abstract:

Modernism, which emerged in the Western world in the 20th century, attempted to create a universal international style, pulling the architectural and social systems created by classicism back to an initial pure state. Out of an introspective reaction to Modernism, Regionalism in the 1950s attempted to restore a humanistic environment and create flexible buildings. Regionalism reached Taiwan in the same period, as the first generation of architects returned. However, with growing political influence and a tightening of free creative space, from the second half of the 1950s to the 1980s the "real" Regional Architecture that should have taken root in Taiwan became a "fake" Regional Architecture dressed in oriental charm. Through a comparative method of description, interpretation, juxtaposition and comparison, this study analyses how the style of modernist Chinese architecture differed before and after the 1980s. The paper explores the development of Regionalist architecture in Taiwan in three parts. First, the burgeoning period of "modernist Chinese architecture" in Taiwan coincided with the Chinese Nationalist Party's arrival in Taiwan to consolidate political power. The architecture of the "Ming and Qing Dynasty Palace Revival Style" dominated Taiwanese architectural circles. These superficial "regional buildings" had almost no connection to the local customs of Taiwan, making it difficult for them to evoke social identity. Second, in the late 1970s, the second generation of architects, headed by Baode Han, began focusing on the research and preservation of traditional Taiwanese architecture and on creating buildings that combined the local character of Taiwan through the imitation of styles.
However, some scholars have expressed regret that very few of the regionalist architectural works that appeared in the 1980s respond specifically to regional conditions and forms of construction; most are vocabulary-led representations. Third, during the 1990s, with the end of martial law, community building gradually emerged, extending the concerns of Taiwanese architecture to folk and ethnic communities. In the Yilan area, many architects care about the local environment, among them the Fieldoffice Architects. Compared with the hollow regionality of the passionate national spirit promoted during the martial law period, the local practice of the Yilan architects better links real local environmental life and reflects a true regionality. In conclusion, through the case of this local practice in the Yilan area, this paper focuses on the spatial practice of the Fieldoffice Architects to explore its spatial representation and its practical lessons for the development of modernist Chinese architecture in Taiwan.

Keywords: regionalism, modernism, Chinese architecture, political landscape, spatial representation

Procedia PDF Downloads 112
1268 Nigeria’s Terrorist Rehabilitation and Reintegration Policy: A Victimological Perspective

Authors: Ujene Ikem Godspower

Abstract:

Acts of terror perpetrated either by state or non-state actors are considered a social ill and impinge on the collective well-being of society. As such, there is a need for social reparations, which are meant to ensure the healing of the social wounds resulting from the atrocities committed by errant individuals under different guises. In order to ensure social closure and effectively repair the damage done by anomic behaviors, society must ensure that justice is served and that those whose rights and privileges have been denied and battered are given the succour they deserve. With regard to the ongoing terrorism in the Northeast, moves to rehabilitate and reintegrate Boko Haram members have commenced with the establishment of Operation Safe Corridor and a proposed bill for the establishment of a “National Agency for the Education, Rehabilitation, De-radicalisation and Integration of Repentant Insurgents in Nigeria”, about all of which Nigerians have expressed mixed feelings. Some argue that the endeavor lacks ethical decency and justice and insults human reasoning. Terrorism and counterterrorism in Nigeria have been enmeshed in gross human rights violations by both the military and the terrorists, and this raises concern about Nigeria’s ability to implement the deradicalization and reintegration efforts fairly and justly. On the other hand, there is the challenge of whether community dwellers who are victims of terrorism and counterterrorism can forgive and welcome back their immediate past tormentors given even the slightest sense of injustice in the process of terrorist reintegration and rehabilitation. Although such efforts have been implemented in other climes, Nigeria’s case poses a unique challenge and commands the keen interest of stakeholders and the international community for the reasons above.
It is therefore pertinent to assess the communities’ level of involvement in the cycle of reintegration, hence the objective of this paper. Methodologically, as part of my larger PhD thesis, this study explores three local government areas (Michika in Adamawa, Chibok in Borno, and Yunusari in Yobe), selected based on the intensity of terrorist attacks. Twenty-five in-depth interviews will be conducted in the study locations, featuring religious leaders, community (traditional) leaders, internally displaced persons, CSO management officials, and ex-Boko Haram insurgents who have been reintegrated. The data generated from fieldwork will be analyzed using the NVivo 12 software package, which will help to code and create themes based on the study objectives. Furthermore, the data will be content-analyzed, employing verbatim quotations where necessary. The study will observe the basic ethical principles for research of this nature, strictly adhering to voluntary participation, anonymity, and confidentiality.

Keywords: boko haram, reintegration, rehabilitation, terrorism, victimology

Procedia PDF Downloads 223
1267 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 compared with 1.3 million a decade earlier; overall, road traffic injuries rank as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice that reported for drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system that captures real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk: it typically measures acceleration, turning, braking and speed, and provides locational information. With the Australian government creating the National Telematics Framework, there has been an increased government focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes measured by the Driving Behaviour Questionnaire (DBQ). Methodology: Twenty-eight participants were recruited and given a telematics device to insert into their vehicles for the duration of the study. Each participant's driving behaviour over the first month will be compared with their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated on a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning of the study, after the first month, and after the second month. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with the changes in the DBQ using regression models in SAS.
Results: The DBQ provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average score of 50.5, a standard deviation of 11.36, and a range of 29 to 76; higher scores indicate worse driving behaviour. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that within the next six weeks a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: A significant relationship is expected between improvements in the DBQ and trends of reduced telematics-measured aggressive driving behaviour, supporting the hypothesis.
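The reliability coefficient quoted above (alpha = .823, presumably Cronbach's alpha, the standard internal-consistency measure for Likert-scale instruments like the DBQ) can be computed from a respondents-by-items matrix of item scores; a minimal sketch with toy data, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of row totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 4 respondents x 3 items on the DBQ's 0-5 scale.
toy = [[0, 1, 1], [2, 2, 3], [3, 4, 4], [5, 5, 4]]
print(round(cronbach_alpha(toy), 3))   # 0.963
```

Values above roughly .7 to .8 are conventionally read as acceptable reliability, so the reported .823 supports treating the summed DBQ score as a single scale.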

Keywords: telematics, driving behaviour, young drivers, driving behaviour questionnaire

Procedia PDF Downloads 88
1266 Curcumin and Its Analogues: Potent Natural Antibacterial Compounds against Staphylococcus aureus

Authors: Prince Kumar, Shamseer Kulangara Kandi, Diwan S. Rawat, Kasturi Mukhopadhyay

Abstract:

Staphylococcus aureus is the most pathogenic of all staphylococci, a major cause of nosocomial infections, and known for acquiring resistance to various commonly used antibiotics. Due to the widespread use of synthetic drugs, clinicians now face a serious threat in healthcare, and the increasing resistance in staphylococci has created a need for alternatives to these synthetic drugs. One alternative is natural plant-based medicine, for both disease prevention and the treatment of chronic diseases. Among such natural compounds, curcumin is one of the most studied molecules and has been an integral part of traditional medicine and Ayurveda since ancient times. It is a natural polyphenolic compound with diverse pharmacological effects, including anti-inflammatory, antioxidant, anticancer and antibacterial activities. In spite of its efficacy and potential, curcumin has not yet been approved as a therapeutic agent because of its low solubility, low bioavailability, and rapid metabolism in vivo. The central β-diketone moiety of curcumin is responsible for its rapid metabolism. To overcome this, in the present study, curcuminoids were designed by modifying the central β-diketone moiety of curcumin into a monocarbonyl moiety, and their antibacterial potency against S. aureus ATCC 29213 was determined. Further, the mode of action and hemolytic activity of the most potent curcuminoids were studied. Minimum inhibitory concentration (MIC) determination and in vitro killing kinetics were used to study the antibacterial activity of the designed curcuminoids. For the hemolytic assay, mouse red blood cells were incubated with curcuminoids and hemoglobin release was measured spectrophotometrically.
The mode of action of the curcuminoids was analysed by a membrane depolarization assay using the membrane-potential-sensitive dye 3,3’-dipropylthiadicarbocyanine iodide (DiSC3(5)) through spectrofluorimetry, and by a membrane permeabilization assay using calcein-AM through flow cytometry. Antibacterial screening of the designed library (61 curcuminoids) revealed excellent in vitro potency of six compounds against S. aureus (MIC 8 to 32 µg/ml). Moreover, these six compounds were found to be non-hemolytic up to 225 µg/ml, much higher than their corresponding MIC values. The in vitro killing kinetics showed five of these lead compounds to be bactericidal, causing a >3 log reduction in the viable cell count within 4 h at 5 × MIC, while the sixth compound was found to be bacteriostatic. The depolarization assay revealed that all six curcuminoids caused depolarization in their corresponding MIC range, and the permeabilization assay showed that all six caused permeabilization at 5 × MIC within 2 h. The membrane depolarization and permeabilization caused by the curcuminoids correlated with their corresponding killing efficacy. Both assays suggest that membrane perturbation might be the primary mode of action of these curcuminoids. Overall, the present study yields six water-soluble, non-hemolytic, membrane-active curcuminoids and provides an impetus for further research on their therapeutic use against S. aureus.

Keywords: antibacterial, curcumin, minimum inhibitory concentration, Staphylococcus aureus

Procedia PDF Downloads 155
1265 The Sources of Anti-Immigrant Sentiments in Russia

Authors: Anya Glikman, Anastasia Gorodzeisky

Abstract:

Since the late 1990s, labor immigration and its consequences for society have become one of the most frequently discussed and debated issues in Russia. Social scientists point out that negative attitudes towards immigrants among the Russian majority population are widespread and that their level is at least twice as high as in most other European countries. Moreover, a recent study by Gorodzeisky, Glikman and Maskyleison (2014) demonstrates that the two sets of individual-level predictors of anti-foreigner sentiment repeatedly confirmed in research in Western countries, socio-economic status and conservative views and ideologies, are not effective in predicting anti-foreigner sentiment in post-socialist Russia. Apparently, the social mechanisms underlying anti-foreigner sentiment in Western countries, which are characterized by stable regimes and relatively long immigration histories, do not play a significant role in explaining anti-foreigner sentiment in post-socialist Russia. The present study examines alternative possible sources of anti-foreigner sentiment in Russia while controlling for individuals' socio-economic position and conservative views. More specifically, following the research literature on the topic worldwide, we examine whether and to what extent human values (such as tradition, universalism, safety and power), ethnic residential segregation, fear of crime and exposure to mass media affect anti-foreigner sentiments in Russia. To do so, we estimate a series of multivariate regression equations using data from the 2012 European Social Survey. The nationally representative sample consists of 2,337 Russian-born respondents. Descriptive results reveal that about 60 percent of Russians view the impact of immigrants on the country in negative terms. Further preliminary analyses show that anti-foreigner sentiments are associated with exposure to mass media as well as with fear of crime.
Specifically, respondents who spent more time watching news on TV and respondents who express higher levels of fear of crime tend to report higher levels of anti-immigrant sentiment. The findings are discussed in light of sociological perspectives and the context of Russian society.

Keywords: anti-immigrant sentiments, fear of crime, human values, mass media, Russia

Procedia PDF Downloads 442
1264 Organizational Stress in Women Executives

Authors: Poornima Gupta, Sadaf Siraj

Abstract:

The study examined the organizational causes of stress in women executives and entrepreneurs in India, so that mediation strategies could be developed to combat the organizational stress they experience, in order to retain female employees as well as attract quality talent. The data for this research were collected through a self-administered survey of women executives across various industries working at different levels of management. The research design was descriptive and cross-sectional, carried out through a self-administered questionnaire filled in by women executives and entrepreneurs in the NCR region. Multistage sampling involving stratified random sampling was employed. A total of 1,000 questionnaires were distributed, of which 450 were returned; after cleaning the data, 404 were fit for analysis. The overall findings suggest that various job-related factors induce stress: applying factor analysis, fourteen factors were identified as major causes of stress among working women. The study also assessed the demographic factors that influence stress in women executives across various industries. The findings show that the women were, without doubt, stressed by organizational factors; the mean stress score was 153 (out of a possible 196), indicating high stress. There appeared to be an inverse relationship between marital status, age, education, work experience, and stress. Married women were less stressed than single women employees. Similarly, female employees aged 29 years or younger experienced more stress at work. Women with education up to 12th standard or less were more stressed than graduates and postgraduates. Women who had spent more than two years in the same organization perceived more stress than their counterparts.
Family size and income, interestingly, had no significant impact on stress. The study also established that the level of stress experienced by women differs considerably across industries: the banking sector emerged as the industry where women experienced the most stress, followed by entrepreneurship, medical, BPO, advertising, government, academics, and manufacturing, in that order. The results contribute to a better understanding of the personal and economic factors surrounding job stress and working women. The study concludes that organizations need to be sensitive to women's needs. Organizations are traditionally designed around men, with rules made by men for men. Involving women in top positions and decision making would make them feel more useful and less stressed. The invisible glass ceiling causes more stress among women than is realized. Less distinction between men and women colleagues in assigning responsibilities, involvement in decision making, framing policies, and the like would go a long way toward reducing stress in women.

Keywords: women, stress, gender in management, women in management

Procedia PDF Downloads 238
1263 Using GIS and AHP Model to Explore the Parking Problem in Khomeinishahr

Authors: Davood Vatankhah, Reza Mokhtari Malekabadi, Mohsen Saghaei

Abstract:

Function of urban transportation systems depends on the existence of the required infrastructures, appropriate placement of different components, and the cooperation of these components with each other. Establishing various neighboring parking spaces in city neighborhood in order to prevent long-term and inappropriate parking of cars in the allies is one of the most effective operations in reducing the crowding and density of the neighborhoods. Every place with a certain application attracts a number of daily travels which happen throughout the city. A large percentage of the people visiting these places go to these travels by their own cars; therefore, they need a space to park their cars. The amount of this need depends on the usage function and travel demand of the place. The study aims at investigating the spatial distribution of the public parking spaces, determining the effective factors in locating, and their combination in GIS environment in Khomeinishahr of Isfahan city. Ultimately, the study intends to create an appropriate pattern for locating parking spaces, determining the request for parking spaces of the traffic areas, choosing the proper places for providing the required public parking spaces, and also proposing new spots in order to promote quality and quantity aspects of the city in terms of enjoying public parking spaces. Regarding the method, the study is based on applied purpose and regarding nature, it is analytic-descriptive. The population of the study includes people of the center of Khomeinishahr which is located on Northwest of Isfahan having about 5000 hectares of geographic area and the population of 241318 people are in the center of Komeinishahr. In order to determine the sample size, Cochran formula was used and according to the population of 26483 people of the studied area, 231 questionnaires were used. 
Data analysis was carried out using SPSS. After estimating the space required for parking, the criteria that affect the location of public parking spaces were first weighted by means of the Analytic Hierarchy Process (AHP) in ArcGIS; appropriate places for establishing parking spaces were then determined with the fuzzy Ordered Weighted Average (OWA) method. The results indicate that the siting of parking spaces in Khomeinishahr has not been carried out appropriately and that the per-capita supply of parking is inadequate relative to population and demand: in addition to the present parking lots, 1,434 parking lots are needed in the study area each day. There is therefore no logical proportion between parking demand and the number of parking lots in Khomeinishahr.
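The quantitative steps described above, sizing the sample with Cochran's formula, weighting criteria with AHP, and aggregating criterion scores with OWA, can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' implementation: the pairwise comparison matrix and the criterion scores in the usage example are invented for demonstration.

```python
import math

def cochran_sample_size(population, z=1.96, p=0.5, e=0.05):
    """Cochran's formula with the finite-population correction."""
    n0 = z ** 2 * p * (1 - p) / e ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric mean."""
    gms = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

def owa(values, order_weights):
    """Ordered Weighted Average: weights apply by rank, not by criterion."""
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(order_weights, ranked))
```

For example, `cochran_sample_size(26483)` gives the classical 5%-margin sample for the studied area, and `owa` scores one candidate site given its (hypothetical) normalized criterion values and a chosen order-weight vector.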

Keywords: GIS, locating, parking, Khomeinishahr

Procedia PDF Downloads 286
1262 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose a Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. It is therefore very important to handle seasonality properly in seasonal macroeconomic data. There are several methods to eliminate the impact of seasonality in time series. One of them is filtering the data; however, this method leads to undesirable consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method is to use seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled with seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not suitable to use seasonal dummies to model such seasonally non-stationary series; instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Several methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey-Hasza-Fuller (DHF) and Hylleberg-Engle-Granger-Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. As in ordinary unit root tests, lagged dependent variables are added to the model in seasonal unit root tests to overcome the autocorrelation problem. In this case, it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then to use the proper model to test for seasonal unit roots. However, this two-step procedure may lead to size distortions and a lack of power in seasonal unit root tests.
Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating non-stationary from stationary models for non-seasonal data. The advantage of this estimator is that it eliminates the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed for testing seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with those of the HEGY test. Since the Bridge estimator performs well in model selection, our approach may yield gains in size and power over the HEGY test.
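As a concrete illustration of the filtering behind the quarterly HEGY regression (a textbook construction, not code from the paper), the transformed series can be built from the raw observations as follows. In the actual test, the filtered series enter the regression with a one-period lag, and lagged seasonal differences are appended according to the chosen lag length.

```python
def hegy_regressors(y):
    """Build the HEGY (1990) filtered series for quarterly data.

    Returns one tuple (y1, y2, y3, d4) per usable observation t = 4..len(y)-1:
      y1 captures the zero-frequency (long-run) unit root,
      y2 the semiannual frequency, y3 the annual complex pair,
      d4 is the seasonal difference used as the dependent variable.
    """
    rows = []
    for t in range(4, len(y)):
        y1 = y[t] + y[t - 1] + y[t - 2] + y[t - 3]
        y2 = -(y[t] - y[t - 1] + y[t - 2] - y[t - 3])
        y3 = -(y[t] - y[t - 2])
        d4 = y[t] - y[t - 4]
        rows.append((y1, y2, y3, d4))
    return rows
```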

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 314
1261 Modelling High Strain Rate Tear Open Behavior of a Bilaminate Consisting of Foam and Plastic Skin Considering Tensile Failure and Compression

Authors: Laura Pytel, Georg Baumann, Gregor Gstrein, Corina Klug

Abstract:

Premium cars often coat the instrument panel with a bilaminate consisting of a soft foam and a plastic skin. This coating is torn open during passenger airbag deployment at high strain rates. Characterizing and simulating the top coat layer is crucial for predicting the attenuation that delays airbag deployment, affecting the design of the restraint system, and for reducing the need to adjust simulations through expensive physical component testing. Up to now, bilaminates used in cars have been modelled either with a single two-dimensional shell formulation for the whole coating, which misses the interaction of the two layers, or by combining a three-dimensional foam layer with a two-dimensional skin layer while omitting the foam in significant regions such as the expected tear line and the hinge, where high compression is expected. In both cases, the coating properties that cause the attenuation are not considered. Furthermore, the available material information, such as the failure dependencies of the two layers and data at strain rates up to 200 1/s, is at present insufficient. The velocity of the passenger airbag flap during an airbag shot was measured at about 11.5 m/s at first ripping; digital image correlation showed resulting strain rates above 1500 1/s. This paper provides a high strain rate material characterization of a bilaminate consisting of a thin polypropylene foam and a thermoplastic olefin (TPO) skin, and the creation of validated material models. With the help of a split Hopkinson tension bar, strain rates of 1500 1/s were reached. The experimental data were used to calibrate and validate a more physical modelling approach for the forced ripping of the bilaminate. In the presented model, the three-dimensional foam layer is continuously tied to the two-dimensional skin layer, allowing failure in both layers at any position.
The simulation results show closer agreement in terms of the trajectory of the flaps and their velocity during ripping. The resulting attenuation of the airbag deployment, measured by the contact force between airbag and flaps, increases and provides usable data for dimensioning the modules of an airbag system.

Keywords: bilaminate ripping behavior, high strain rate material characterization and modelling, induced material failure, TPO and foam

Procedia PDF Downloads 57
1260 Pre-Industrial Local Architecture According to Natural Properties

Authors: Selin Küçük

Abstract:

Pre-industrial architecture is the integration of natural and subsequent properties through intelligence and experience. Since different settlements were relatively industrialized or non-industrialized at different times, the term 'pre-industrial' does not refer to a definite period. Natural properties, the existing conditions and materials of the local environment, are climate, geomorphology, and local materials. Subsequent properties, all of them anthropological, are the culture of societies, the requirements of people, and the construction techniques people use. After industrialization, technology took the place of technique, cultural effects were manipulated, requirements changed, and local natural properties almost disappeared from architecture. Technology is universal and global and spreads easily; technique, by contrast, depends on time and experience and carries a considerable cultural background. This research is about construction techniques shaped by the natural properties of a region, and the classification of these techniques. Understanding local architecture is only possible by searching its background, which is hard to reach. Architectural techniques change, for better and worse, through time. The archaeological layers of a region sometimes give more accurate information about the transformation of its architecture; however, the natural properties of a region are the most helpful elements for understanding construction techniques. Many international sources from different cultures treat local architecture by mentioning natural properties separately; unfortunately, no literature deals with this subject systematically. This research aims to develop a clear perspective on local architecture by categorizing archetypes according to natural properties.
The ultimate goal of this research is to generate a clear classification of local architecture, independent of subsequent (anthropological) properties, across the world, much like a handbook. Since local architecture is the most sustainable architecture with regard to its economic, ecological, and sociological properties, there should be extensive information about its construction techniques to learn from. Constructing the same buildings all over the world is one of the main criticisms of the modern architectural system; while this criticism continues, identical buildings without identity keep multiplying. In the post-industrial era, technology has widely taken the place of technique, cultural effects are manipulated, requirements have changed, and natural local properties have almost disappeared from architecture. This study does not urge architects to use local techniques; rather, it traces the progress of a pre-industrial architectural evolution that was healthier, cheaper, and more natural. If immigration from rural areas to developing and developed cities were prevented, culture and construction techniques could be preserved. Since big cities have psychological, emotional, and sociological impacts on people, rural settlers could be persuaded not to emigrate by providing new buildings designed according to natural properties and by maintaining their settlements; improving rural conditions would close the economic and sociological gulf between city and countryside. The expected result is that, where there is no deformation (the adaptation of other traditional building forms through immigration) or assimilation within a climatic region, very similar solutions should appear in the same climatic regions of the world, even where there is no relationship (trade, communication, etc.) among them.

Keywords: climate zones, geomorphology, local architecture, local materials

Procedia PDF Downloads 411
1259 Commodifying Things Past: Comparative Study of Heritage Tourism Practices in Montenegro and Serbia

Authors: Jovana Vukcevic, Sanja Pekovic, Djurdjica Perovic, Tatjana Stanovcic

Abstract:

This paper presents a critical inquiry into the role of uncomfortable heritage in nation branding, with particular focus on the politics of memory, forgetting, and revisionism in post-communist post-Yugoslavia. It addresses legacies of unwanted, ambivalent, or unacknowledged pasts and the different strategies employed by the former Yugoslav states and private actors in 'rebranding' their heritage: ensuring its preservation while re-contextualizing the narrative of the past through contemporary tourism practices. It questions the interplay between nostalgia, heritage, and the market, and the role of heritage in polishing the history of totalitarian and authoritarian regimes in the Balkans. It argues that in post-socialist Yugoslavia, the necessity of limiting associations with the former ideology, and the use of the commercial brush in shaping a marketable version of the past, instigated the emergence of profit-oriented heritage practices. Building on that argument, the paper treats these issues as the 'commodification' and 'disneyfication' of the Balkans' ambivalent heritage, contributing to the analysis of changing forms of memorialisation and heritagization in Europe. It questions the process of 'coming to terms with the past' through marketable forms of heritage tourism, blurring the boundary between market-driven nostalgia and state-imposed heritage policies. In order to analyse the plurality of ways of dealing with the controversial, ambivalent, and unwanted heritage of dictatorships in the Balkans, the paper considers two prominent examples of heritage commodification in Serbia and Montenegro and the re-appropriation of their narratives for nation-branding purposes.
The first is the story of Tito's Blue Train, a landmark of the socialist past and symbol of Yugoslavia which is nowadays used for birthday parties and wedding celebrations; the second is the unusual business arrangement turning the fortress of Mamula, a concentration camp during the Second World War, into a luxurious Mediterranean resort. Questioning how this 'uneasy' past was acknowledged and embedded into official heritage institutions and tourism practices, the study examines the changing relation towards the legacies of dictatorships, inviting us to rethink the economic models of things past. Analysis of these processes should contribute to a better understanding of new mnemonic strategies and (converging?) ways of 'doing' the past in Europe.

Keywords: commodification, heritage tourism, totalitarianism, Serbia, Montenegro

Procedia PDF Downloads 231
1258 Preparation of Activated Carbon From Waste Feedstock: Activation Variables Optimization and Influence

Authors: Oluwagbemi Victor Aladeokin

Abstract:

In the last decade, global peanut cultivation has seen increased demand, attributed to the health benefits of peanuts, rising to ~41.4 MMT in 2019/2020. Peanut and other nutshells are considered waste in various parts of the world and are usually used only for their fuel value. However, this agricultural by-product can be converted into a higher-value product such as activated carbon. For many years, owing to its highly porous structure, activated carbon has been widely and effectively used as an adsorbent in the purification and separation of gases and liquids. Commercial activated carbons are primarily made from a range of precursors such as wood, coconut shell, coal, and bones. However, due to the difficulty of regeneration and the high cost, various agricultural residues such as rice husk, corn stalks, apricot stones, almond shells, and coffee beans have been explored for producing activated carbons. In the present study, the potential of peanut shells as a precursor for activated carbon, and the adsorption capacity of the product, are investigated. Usually, precursors used to produce activated carbon have a carbon content above 45%; a typical raw peanut shell has 42 wt.% carbon. To increase the yield, this study employs chemical activation with zinc chloride, which is well known for its effectiveness in increasing the porosity of carbonaceous materials. In chemical activation, the activation temperature and the impregnation ratio are the parameters most commonly reported as significant; this study, however, also examines the influence of activation time on the development of activated carbon from peanut shells. Activated carbons are applied for different purposes, but as applications become more specific, understanding the influence of the activation variables, so as to better control the quality of the final product, becomes paramount.
The traditional way to investigate the influence of the activation parameters experimentally is to vary one parameter at a time; a more efficient way to reduce the number of experimental runs is to apply design of experiments. One objective of this study is to optimize the activation variables. This work therefore employs the response surface methodology of design of experiments to study the interactions between the activation parameters and consequently to optimize them (temperature, impregnation ratio, and activation time). The optimum activation conditions found were a temperature of 485 °C, an activation time of 15 min, and an impregnation ratio of 1.7. These conditions resulted in an activated carbon with a relatively high surface area of ca. 1700 m²/g, a 47% yield, relatively high density, low ash, and high fixed-carbon content. The impregnation ratio and the temperature were found to influence the final characteristics of the activated carbon most strongly. The results of this study, obtained with the response surface methodology, reveal the most significant parameters of the chemical activation of peanut shells for producing activated carbon that can find use in both liquid- and gas-phase adsorption applications.
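The response-surface optimization step can be sketched as follows for a two-factor quadratic model: fit coefficients from the designed experiments, then solve the first-order conditions for the stationary point. The coefficients in the usage example are invented for illustration and are not the fitted model from this study.

```python
def rsm_response(x1, x2, b):
    """Second-order response surface: b = (b0, b1, b2, b11, b22, b12)."""
    b0, b1, b2, b11, b22, b12 = b
    return b0 + b1 * x1 + b2 * x2 + b11 * x1 ** 2 + b22 * x2 ** 2 + b12 * x1 * x2

def stationary_point(b):
    """Solve grad(response) = 0 by Cramer's rule.

    The stationary point is a maximum when b11 < 0, b22 < 0 and det > 0.
    """
    _, b1, b2, b11, b22, b12 = b
    det = 4 * b11 * b22 - b12 ** 2
    x1 = (-2 * b1 * b22 + b2 * b12) / det
    x2 = (-2 * b11 * b2 + b1 * b12) / det
    return x1, x2
```

For instance, with hypothetical coded coefficients `b = (0, 2, 4, -1, -1, 0)` the predicted optimum lies at coded factor levels (1, 2), which would then be converted back to physical units (temperature, time, ratio).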

Keywords: chemical activation, fixed carbon, impregnation ratio, optimum, surface area

Procedia PDF Downloads 124
1257 Study of the Hydrodynamic of Electrochemical Ion Pumping for Lithium Recovery

Authors: Maria Sofia Palagonia, Doriano Brogioli, Fabio La Mantia

Abstract:

In the last decade, lithium has become an important raw material in various sectors, in particular for rechargeable batteries. Its production is expected to grow further in the future, especially for mobile energy storage and electromobility. Until now it has mostly been produced by the evaporation of water from salt lakes, which entails huge water consumption, a large amount of waste, and a strong environmental impact. A new, clean, and faster electrochemical technique to recover lithium has recently been proposed: electrochemical ion pumping. It consists of capturing lithium ions from a feed solution by intercalation into a lithium-selective material and then releasing them into a recovery solution; both steps are driven by the passage of a current. In this work, a new configuration of the electrochemical cell is presented and used to study and optimize the intercalation of lithium ions through the hydrodynamic conditions. Lithium manganese oxide (LiMn₂O₄) was used as the cathode to intercalate lithium ions selectively during reduction, while nickel hexacyanoferrate (NiHCF), used as the anode, releases positive ions. The effect of hydrodynamics on the process was studied by conducting the experiments at various fluxes of the electrolyte through the electrodes, in terms of the charge circulated through the cell, the lithium captured per unit mass of material, and the overvoltage. The results show that flowing the electrolyte through the cell improves lithium capture, particularly at low lithium concentration. Indeed, in an Atacama feed solution at 40 mM lithium, the amount of lithium captured does not increase considerably with the electrolyte flux. When the lithium concentration is 5 mM, however, the amount captured in a single capture cycle increases with increasing flux, leading to the conclusion that the slowest step in the process is the transport of lithium ions in the liquid phase.
Furthermore, an influence of the concentration of other cations in solution on the process performance was observed. In particular, capture was performed at different concentrations of NaCl together with 5 mM LiCl, and the results show that the presence of NaCl limits the amount of captured lithium. Further studies are needed to understand why the full capacity of the material is not reached at the highest flow rate; this is probably due to the porous structure of the material, since the liquid phase inside the pores is likely unaffected by the convective flow. This work proves that electrochemical ion pumping, with a suitable hydrodynamic design, enables the recovery of lithium from feed solutions at lower concentrations than the currently exploited sources, down to 1 mM.
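The link between circulated charge and captured lithium follows directly from Faraday's law (one electron per intercalated Li⁺). A minimal sketch, using only textbook constants and an illustrative coulombic-efficiency parameter (not data from this work):

```python
F = 96485.33  # Faraday constant, C/mol
M_LI = 6.94   # molar mass of lithium, g/mol

def lithium_mass_from_charge(charge_c, efficiency=1.0):
    """Mass of Li (g) intercalated by a given charge in coulombs, assuming z = 1.

    `efficiency` is the fraction of charge that actually intercalates Li+
    rather than feeding side reactions.
    """
    return efficiency * charge_c * M_LI / F
```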

Keywords: desalination battery, electrochemical ion pumping, hydrodynamic, lithium

Procedia PDF Downloads 194
1256 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant share of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs of unforeseeable design errors that are only realized in downstream phases. Ad hoc or iterative approaches to generating system requirements often fail to consider the full array of feasible system or product designs for a variety of reasons, including but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate, and accommodate stakeholder preferences; inadequate technical designs or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks, and support activities, heightening the risk of suboptimal system performance, premature obsolescence, or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular and auxiliary access) as well as recognition, data fusion, and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and apply a non-deterministic approach to sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based methods in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints of complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for commercial systems that vary in application, complexity, and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
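The filtering at the heart of tradespace exploration, keeping only designs that are not dominated in cost and aggregate utility, can be sketched as follows. The design points in the example are invented for illustration; a real MATE study would score each design's utility from weighted stakeholder attributes first.

```python
def pareto_front(designs):
    """Return (cost, utility) pairs for which no other design is at least as
    cheap AND at least as useful with a strict improvement in one dimension."""
    front = []
    for cost, util in designs:
        dominated = any(
            c <= cost and u >= util and (c < cost or u > util)
            for c, u in designs
        )
        if not dominated:
            front.append((cost, util))
    return front
```

Applied to a hypothetical tradespace `[(1, 1), (2, 3), (3, 2), (4, 4)]`, the design (3, 2) is discarded because (2, 3) is both cheaper and more useful; the remaining points form the cost-utility frontier presented to stakeholders.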

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 164
1255 Assessment of Designed Outdoor Playspaces as Learning Environments and Its Impact on Child’s Wellbeing: A Case of Bhopal, India

Authors: Richa Raje, Anumol Antony

Abstract:

Playing is the foremost stepping stone of childhood development. Play is essential to a child's development and learning because it creates meaningful, enduring environmental connections and increases children's performance. Children's proficiencies vary continuously over the course of their growth. Innovative activities kindle the senses, increase the love of exploration, overcome linguistic barriers, and support physiological development, allowing children to discover their own caliber, spontaneity, curiosity, cognitive skills, and creativity while learning through play. This paper aims to comprehend the learning in play, the essential underpinning of the outdoor play area. It also assesses the trend of playground design that merely crowds a site with equipment, and attempts to derive a relation between the natural environment, children's activities, and the emotions and senses evoked in the process. One major concern with outdoor play is that it is limited to an area with one kind of equipment, making play highly regimented and monotonous; this problem is often driven by the strict timetables of our education system, which hardly accommodate play. For these reasons, play areas remain neglected both in terms of design that allows learning and in terms of wellbeing. Poorly designed spaces fail to inspire the physical, emotional, social, and psychological development of the young. Currently, play space has been condensed to an enclosed playground, driveway, or backyard, which confines children's capability to leap the boundaries set for them. The paper presents a study of children aged 5 to 11 in which their behaviors during interactions in a playground are mapped and analyzed. The theory of affordance is applied to various outdoor play areas in order to study and understand children's environments and how variedly they perceive and use them.
A higher degree of affordance shall form the basis for designing the activities suitable for play spaces. It was observed during play that children choose certain spaces of interest, mostly natural ones, over artificial equipment. Activities like rolling on the ground, jumping from a height, molding earth, and hiding behind trees suggest that, despite the equipment provided, children have an affinity for nature. Designers therefore need to take a cue from children's behavior and practices in order to design meaningful spaces for them, so that children get the freedom to test their limits.

Keywords: children, landscape design, learning environment, nature and play, outdoor play

Procedia PDF Downloads 108
1254 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote effort to this area in order to achieve a significant reduction of energy consumption and greenhouse gas emissions. The paper presents a study aiming to provide a design methodology able to identify the best configuration of the building/plant system from a technical, economic, and environmental point of view. The classical approach involves analyzing a building's energy loads under steady-state conditions and subsequently selecting measures to improve the energy performance, based on the previous experience of the architects and engineers in the design team. The proposed approach instead uses a sequence of two well-known, scientifically validated calculation tools (TRNSYS and RETScreen) that together allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling loads of the building in a dynamic regime by means of TRNSYS, which simulates the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow effective building-HVAC combinations to be identified.
The second step consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which enables different system configurations to be compared from the energy, environmental, and financial points of view, with an analysis of investment, operation, and maintenance costs, thus allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment of innovative energy systems with low environmental impact. Computational analysis can help in the design phase, particularly for complex structures with centralized plant systems, by comparing the data returned by RETScreen for different design options. For example, the analysis performed on the case-study building found that the most suitable plant solution, taking technical, economic, and environmental aspects into account, is one based on a CCHP (Combined Cooling, Heating, and Power) system using an internal combustion engine.
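The financial comparison between plant configurations ultimately rests on discounted cash-flow figures of merit of the kind RETScreen reports. A minimal sketch with invented capital costs and annual savings (not the study's actual figures):

```python
def npv(capex, annual_saving, years, rate):
    """Net present value: upfront cost against discounted annual savings."""
    return -capex + sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

def simple_payback(capex, annual_saving):
    """Years for undiscounted savings to repay the investment."""
    return capex / annual_saving

# Hypothetical comparison of two plant options over 15 years at a 5% discount rate
options = {
    "conventional boiler + chiller": (50_000, 6_000),   # (capex, annual saving)
    "CCHP": (120_000, 18_000),
}
ranked = sorted(options, key=lambda k: -npv(*options[k], years=15, rate=0.05))
```

With these made-up inputs the CCHP option ranks first despite the larger investment, which mirrors the kind of conclusion the abstract describes.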

Keywords: energy, system, building, cooling, electrical

Procedia PDF Downloads 561
1253 Status of Vocational Education and Training in India: Policies and Practices

Authors: Vineeta Sirohi

Abstract:

The development of critical skills and competencies is imperative for young people to cope with the unforeseen challenges of the time and to prepare for work and life. Recognizing that education has a critical role in reaching sustainability goals, as emphasized by the 2030 Agenda for Sustainable Development, educating youth in global competence, meta-cognitive competencies, and skills from the initial stages of formal education is vital. Further, educating for global competence helps develop work readiness and boosts employability. Vocational education and training in India, as envisaged in various policy documents, remains marginalized in practice compared to general education. The country is still far from the national policy goal of tracking 25% of secondary students in grades eleven and twelve into the vocational stream. In recent years, the importance of skill development has been recognized in the present context of globalization and the changing demographic structure of the Indian population. As a result, it has become a national policy priority, taken up with renewed focus by the government, which has set a target of skilling 500 million people by 2022. This paper provides an overview of the policies, practices, and current status of vocational education and training in India, supported by statistics from the National Sample Survey, the official statistics of India. The national policy documents and annual reports of the organizations actively involved in vocational education and training have also been examined to capture relevant data and information, and major government initiatives to promote skill development are highlighted. The data indicate that, in the age group 15-59 years, only 2.2 percent reported having received formal vocational training and 8.6 percent non-formal vocational training, whereas 88.3 percent received no vocational training at all.
At present, the coverage of vocational education is abysmal, with less than 5 percent of students covered by the vocational education programme. Besides launching various schemes to address the mismatch of skills supply and demand, the government, through its National Policy on Skill Development and Entrepreneurship 2015, proposes to bring about inclusivity by bridging the gender, social, and sectoral divides, ensuring that the skilling needs of socially disadvantaged and marginalized groups are appropriately addressed. It is fundamental that the curriculum be aligned with the demands of the labor market, incorporating more entrepreneurial skills. Creating non-farm employment opportunities for educated youth will be a challenge for the country in the near future; hence, there is a need to formulate specific skill development programs for this sector, as well as programs for upgrading skills to enhance employability. Female participation in work and in non-traditional courses also needs to be promoted. Moreover, rigorous research and a robust information base on skills are required to inform policy decisions on vocational education and training.

Keywords: policy, skill, training, vocational education

Procedia PDF Downloads 129
1252 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group

Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb

Abstract:

Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is the area of mobile and fixed broadband services. This study is of dual importance, both academically and practically. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers' service experience in a new area rather than depending on the traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth, and peace of mind. In addition, it gives a scientific explanation for this relationship; this research thus fills a gap, as no previous work has correlated or explained these relations using such an integrated model, and this is the first time such a modified and integrated model has been applied in the telecom field. Practically, this research gives insights to marketers and practitioners on improving customer loyalty by evolving the experience quality of broadband customers, which translates into the suggested outcomes: purchase, commitment, repeat purchase, and word-of-mouth. This approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation in loyalty is explained by the model. The researcher also measured the net promoter score and explained the results.
Furthermore, the researcher assessed customers' priorities for broadband services. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group and applied in the same industry, especially in developing countries with similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, making marketing more reliable, as managers can relate investments in service experience directly to the performance measures closest to income, for instance, repurchasing behavior, positive word of mouth, and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.

Keywords: broadband services, customer experience quality, loyalty, net promoter score

Procedia PDF Downloads 251
1251 Multilocal Youth and the Berlin Digital Industry: Productive Leisure as a Key Factor in European Migration

Authors: Stefano Pelaggi

Abstract:

The research is focused on youth labor and mobility in Berlin. Mobility has become a common denominator in our daily lives, but it is not primarily driven by monetary incentives. Labor, knowledge, and leisure overlap on this point, as cities try to attract people who could participate in the production of innovations while the new migrants experience the lifestyle of the host cities. The research will present an empirical study of Italian workers in the digital industry in Berlin, trying to underline the connection between pleasure and leisure and the choice of a life abroad. Berlin has become the epicenter of the European Internet start-up scene, but the people suited to work in digital industries are not moving to Berlin to make a career; most of them are attracted to the city for different reasons. This is a clear exception to traditional migration flows, which have always originated from a specific search for employment opportunities or from strong ties, usually family, in a place that could guarantee success in finding a job. Even skilled migration has always originated from a specific need: finding the right path to a successful professional life. In a society where a lack of free time in our calendar seems to be something to be ashamed of, the actors of youth mobility incorporate categories of experiential tourism into their own life paths. The professional aspirations and lifestyle choices of the protagonists of youth mobility are geared towards meeting the desires and aspirations that define leisure. While most creative workplaces, in particular digital industries, use the category of fun as a primary element of corporate policy, virtually extending working time across the whole day, more and more people around the world are basing their path in life and their career choices on indicators linked to self-realization, which may include factors like a warm climate or a cultural environment.
All of these are indicators usually excluded from the hegemonic approach to labor. The interpretative framework commonly used seems to be mostly focused on a dualism between Florida's theories and those who highlight the absence of conflict in his studies. While the flexibility of the new creative industries is minimizing leisure, incorporating elements of leisure itself into work activities, more people choose their own path of life by placing great importance on basic needs, through a gaze on pleasure that is only partially driven by consumption. Multilocalism is the co-existence of different identities and cultures that do not conflict because they reject being bound to territory. The local loses its strength of opposition to the global, with an attenuation of the whole concept of citizenship, territory, and even integration. A similar perspective could be useful in seeking a new approach in studies dedicated to the gentrification process, while studying the new migration flows.

Keywords: brain drain, digital industry, leisure and gentrification, multi localism

Procedia PDF Downloads 225
1250 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning

Authors: Rajkumar Ghosh

Abstract:

Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. 
It addresses the need for revising building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This study sheds light on the impact of out-of-sequence thrust dynamics on earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.

Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery

Procedia PDF Downloads 67
1249 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences all begins with the loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires, and even explosions when it reacts with various ignition sources in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard for this case is an early detection system that alerts Operations to take action prior to a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in surface pipelines is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable to experimental data with very high levels of accuracy.
While the supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
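The pipeline described above, simulation data standing in for field measurements and a supervised model trained on it, can be sketched roughly as follows. Everything here is illustrative: the synthetic pressure-drop and flow-imbalance generator and the nearest-centroid classifier are stand-ins, since the abstract specifies neither the CFD model nor the machine learning algorithm used.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for CFD output: steady-state pressure drop and flow imbalance
# between inlet and outlet sensors. A leak increases the pressure drop and
# creates a mismatch between inlet and outlet flow rates. (All values are
# illustrative, not taken from any real pipeline model.)
def simulate(n, leak):
    dp = rng.normal(1.0 + (0.6 if leak else 0.0), 0.1, n)   # pressure drop (bar)
    dq = rng.normal(0.0 + (0.4 if leak else 0.0), 0.05, n)  # flow imbalance (m3/h)
    return np.column_stack([dp, dq])

X = np.vstack([simulate(200, leak=False), simulate(200, leak=True)])
y = np.array([0] * 200 + [1] * 200)

# Minimal supervised model: a nearest-centroid classifier trained on the
# simulated dataset, standing in for the paper's (unspecified) ML model.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(samples):
    # Assign each sample to the class of its nearest centroid.
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc = (predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In the approach the abstract describes, the simulated scenarios would come from a physics-based transient model rather than random draws, which is what lets the classifier be trained without waiting for real leak events.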

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 168
1248 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System

Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii

Abstract:

Today, dairy farm experts and farmers have well recognized the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production and manage feeding systems, serve as an indicator of health abnormalities, and even be utilized to manage healthy calving times and processes. Traditionally, BCS measurement is done by animal experts or trained technicians based on visual observations focusing on the pin bones, the pin, thurl, and hook area, tail head shapes, hook angles, and the short and long ribs. Since the traditional technique is manual and subjective, it can lead to inconsistent scores and is not cost effective. This paper therefore proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region by using image processing techniques. To be specific, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl, and tail head. These points are extracted by using block-region-based vertical and horizontal histogram methods. According to animal experts, the body condition scores depend mainly on the shape structure of these regions. Therefore, the second module investigates some algebraic and geometric properties of the extracted anatomical points. Specifically, second-order polynomial regression is applied to a subset of anatomical points to produce the regression coefficients, which are utilized as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head, and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained by using a Markov classification process to assign a BCS to individual cows.
The assigned BCS values are then revised by using a multiple regression method to produce the final BCS for each dairy cow. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the rotary milking parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vector for each individual cow. Then the multiple regression calculator and the Markov chain classification process are utilized to produce the estimated body condition score for each cow. The experimental results, tested on 100 dairy cows from a self-collected dataset and a public benchmark dataset, are very promising, with an accuracy of 98%.
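As a sketch of the feature-construction step in the second module, second-order polynomial regression over a landmark subset plus an angle at a key point, might look like the following. The landmark coordinates and the choice of points are hypothetical, since the paper's 23 anatomical points are not given here.

```python
import numpy as np

# Hypothetical (x, y) pixel coordinates of anatomical landmarks along the
# hook-to-pin contour of one cow, extracted from a top-view image.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 1.4, 1.0, 0.9, 1.1, 1.5, 2.2])

# Second-order polynomial regression over the landmark subset; the fitted
# coefficients (a, b, c) of y = a*x^2 + b*x + c become part of the feature
# vector, summarizing the curvature of the body contour.
a, b, c = np.polyfit(x, y, deg=2)

def angle_at(p_prev, p, p_next):
    """Interior angle (degrees) at landmark p, e.g. a thurl or hook point."""
    v1 = np.asarray(p_prev) - np.asarray(p)
    v2 = np.asarray(p_next) - np.asarray(p)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Angle at the central landmark, extending the feature vector as in the text.
thurl_angle = angle_at((x[2], y[2]), (x[3], y[3]), (x[4], y[4]))
feature_vector = np.array([a, b, c, thurl_angle])
print(feature_vector)
```

A sharply angular (thin) cow would give a larger quadratic coefficient and a smaller interior angle than a well-conditioned cow, which is the kind of shape information the classifier in the third module would consume.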

Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression

Procedia PDF Downloads 141
1247 Informal Green Infrastructure as Mobility Enabler in Informal Settlements of Quito

Authors: Ignacio W. Loor

Abstract:

In the context of informal settlements in Quito, this paper provides evidence that the slopes and deep ravines typical of Andean cities, around which marginalized urban communities sit, constitute a platform for green infrastructure that supports pedestrian mobility in an incremental fashion. This informally shaped green infrastructure provides connectivity to other mobility infrastructures such as roads and public transport, which permits relegated dwellers to reach their daily destinations and reclaim their rights to the city. This is relevant in that walking has been increasingly neglected as a viable means of transport in Latin American cities in favor of motorized means, so the mobility benefits of green infrastructure have remained invisible to policymakers, contributing to the progressive isolation of informal settlements. This research leverages greatly an ecological rejuvenation programme led by the municipality of Quito and the Andean Corporation for Development (CAN) intended to rehabilitate the ecological functionalities of ravines. Accordingly, four ravines in different stages of rejuvenation were chosen in order to capture, through ethnographic methods, the practices they support for dwellers of informal settlements across different stages, particularly in terms of mobility. Then, by presenting fragments of interviews, descriptions of observed phenomena, photographs, and narratives published in institutional reports and the media, the paper explains the production process of mobility infrastructure over unoccupied slopes and ravines, and the roles that this infrastructure plays in the mobility of dwellers and their quotidian practices. For informal settlements, which normally feature scant urban infrastructure, mobility embodies an unfavourable driver of the dwellers' possibilities to actively participate in the social, economic, and political dimensions of the city, for which their rights to the city are widely neglected.
Nevertheless, informal green infrastructure for mobility provides some alleviation. This infrastructure is incremental, since its features and usability gradually evolve as users put into it knowledge, labour, devices, and connectivity to other infrastructures in different dimensions, which increases its dependability. This is evidenced in the diffusion of knowledge of trails and routes of footpaths among users, the implementation of linking stairs and bridges, the improved access produced by creating public spaces adjacent to the ravines, the illumination of surrounding roads, and ultimately, the restoration of the ecological functions of ravines. However, the perpetuity of this type of infrastructure is also fragile and vulnerable to the course of urbanisation, densification, and the expansion of gated, privatised spaces.

Keywords: green infrastructure, informal settlements, urban mobility, walkability

Procedia PDF Downloads 136
1246 Hansen Solubility Parameter from Surface Measurements

Authors: Neveen AlQasas, Daniel Johnson

Abstract:

Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane's separation properties. Understanding the mechanism of the initiation phase of biofouling is a key point in eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials for each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two, and therefore the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film.
The results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents used are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out via the training of a neural network model. The trained neural network model has three inputs: contact angle value, surface tension, and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent used and the tested polymer (material). The predicted HSP distance is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
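A minimal version of such a three-input regression network can be sketched in plain NumPy. The data here are synthetic (a made-up linear rule plus noise stands in for measured HSP distances), and the 3-8-1 tanh architecture is an assumption; the abstract does not report the actual network topology or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data standing in for real measurements:
# columns = contact angle (deg), solvent surface tension (mN/m),
# solvent viscosity (mPa*s); target = HSP distance (MPa^0.5).
# The linear rule below is purely illustrative, not a physical model.
X = np.column_stack([
    rng.uniform(10, 90, 200),    # contact angle
    rng.uniform(18, 73, 200),    # surface tension
    rng.uniform(0.3, 2.0, 200),  # viscosity
])
y = 0.1 * X[:, 0] + 0.05 * X[:, 1] - 1.0 * X[:, 2] + rng.normal(0, 0.1, 200)

# Standardize inputs, then train a one-hidden-layer network (3-8-1, tanh)
# by full-batch gradient descent on mean squared error.
Xs = (X - X.mean(0)) / X.std(0)
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1))
b2 = np.array([y.mean()])        # start the output bias at the target mean
lr = 0.05
for _ in range(2000):
    h = np.tanh(Xs @ W1 + b1)            # hidden activations
    pred = (h @ W2 + b2).ravel()         # predicted HSP distance
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = Xs.T @ dh / len(y); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

In the study's setting, the predicted HSP distance for each solvent-polymer pair would then feed the back-calculation of the polymer's individual HSP components.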

Keywords: surface characterization, Hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements

Procedia PDF Downloads 76
1245 Selection of Qualitative Research Strategy for Bullying and Harassment in Sport

Authors: J. Vveinhardt, V. B. Fominiene, L. Jeseviciute-Ufartiene

Abstract:

Relevance of the Research: Qualitative research is still regarded as highly subjective and not sufficiently scientific to achieve objective research results. However, it is agreed that a qualitative study allows revealing the hidden motives of research participants, creating new theories, and highlighting the problem field. Enough research has been done to reveal these aspects of qualitative research. However, each research area has its own specificity, and sport is unique due to the image of its participants, who are understood as strong and invincible. Therefore, a sport participant might have personal difficulty recognizing himself as a victim in the context of bullying and harassment. Accordingly, the researcher faces a dilemma in getting a victim in sport to speak at all. Thus, the ethical aspects of qualitative research become relevant. The multitude of fields within sport makes determining the sample size of the research a problem. Thus, the corresponding problem of this research is which qualitative research strategies are the most suitable for revealing the phenomenon of bullying and harassment in sport, and why. Object of the research: qualitative research strategy for bullying and harassment in sport. Purpose of the research: to analyze strategies of qualitative research, selecting a suitable one for bullying and harassment in sport. Methods of the research: analysis of the scientific literature on the application of qualitative research to bullying and harassment. Research Results: Four main strategies are applied in qualitative research: inductive, deductive, retroductive, and abductive. Inductive and deductive strategies are commonly used in researching bullying and harassment in sport. The inductive strategy is applied as quantitative research in order to reveal and describe the prevalence of bullying and harassment in sport.
The deductive strategy is used through qualitative methods in order to explain the causes of bullying and harassment and to predict the actions of the participants in bullying and harassment in sport and the possible consequences of these actions. The most commonly used qualitative method for researching bullying and harassment in sport is the semi-structured interview, in spoken or written form. However, these methods may restrict the openness of the participants in the study, whether through recording on a dictaphone or through collecting incomplete answers when the survey participant responds in writing, because it is not possible to refine the answers. Qualitative research is increasingly conducted through technology-mediated collection of research data. For example, focus group research in a closed forum allows participants to interact freely with each other because of the confidentiality of the selected participants in the study. The moderator can purposefully formulate and submit problem-solving questions to the participants. Hence, the application of intelligent technology through in-depth qualitative research can help discover new and specific information on bullying and harassment in sport. Acknowledgement: This research is funded by the European Social Fund according to the activity ‘Improvement of researchers’ qualification by implementing world-class R&D projects’ of Measure No. 09.3.3-LMT-K-712.

Keywords: bullying, focus group, harassment, narrative, sport, qualitative research

Procedia PDF Downloads 162
1244 Fahr Disease vs Fahr Syndrome in the Field of a Case Report

Authors: Angelis P. Barlampas

Abstract:

Objective: The confusion of terms is a common practice in many situations of everyday life. But in some circumstances, such as in medicine, the precise meaning of a word carries a critical role for the health of the patient. Fahr disease and Fahr syndrome are often falsely used interchangeably, but they are two different conditions with different natural histories, different etiologies, and different medical management. A case of the seldom-seen Fahr disease is presented, and a comparison with the more common Fahr syndrome follows. Materials and method: A 72-year-old patient came to the emergency department complaining of some kind of nonspecific mental disturbances, like anxiety, difficulty concentrating, and tremor. The problems had a long course, but he had the impression that they were getting worse lately, so he decided to have them checked. Past history and laboratory tests were unremarkable. A computed tomography examination was then ordered. Results: The CT exam showed bilateral, hyperattenuating areas of heavy, dense calcium-type deposits in the basal ganglia, striatum, pallidum, thalami, the dentate nucleus, and the cerebral white matter of the frontal, parietal, and iniac lobes, as well as small areas of the pons. Taking into account the absence of any known preexisting illness and the fact that the emergency laboratory tests were without findings, a hypothesis of the rare Fahr disease was proposed. The suspicion was confirmed with further, more specific tests, which showed the absence of any other condition that could share the same radiological image. Differentiating between Fahr disease and Fahr syndrome: Fahr disease: primarily autosomal dominant; symmetrical and bilateral intracranial calcifications; the patient is healthy until middle age; absence of biochemical abnormalities; family history consistent with autosomal dominant inheritance. Fahr syndrome: earlier onset, between 30 and 40 years old;
symmetrical and bilateral intracranial calcifications; endocrinopathies: idiopathic hypoparathyroidism, secondary hypoparathyroidism, hyperparathyroidism, pseudohypoparathyroidism, pseudopseudohypoparathyroidism, etc.; the disease can appear at any age; there are abnormal laboratory or imaging findings. Conclusion: Fahr disease and Fahr syndrome are not the same illness, although this is not well known to inexperienced doctors. As clinical radiologists, we have to inform our colleagues when a radiological image, along with the patient's history, probably implies a rare condition rather than something more usual, and prompt the investigation onto the right route. In our case, a genetic test could have been done earlier to reveal the problem, thus avoiding unnecessary specific tests which cost time and are uncomfortable for the patient.

Keywords: Fahr disease, Fahr syndrome, CT, brain calcifications

Procedia PDF Downloads 46