Search results for: issue of cost
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9084

1344 An Investigation of the Operation and Performance of London Cycle Hire Scheme

Authors: Amer Ali, Jessica Cecchinelli, Antonis Charalambous

Abstract:

Cycling is one of the most environmentally friendly, economical and healthy modes of transport, but it needs more efficient cycle infrastructure and more effective safety measures. This paper presents an investigation into the performance and operation of the London Cycle Hire Scheme, which started to operate in July 2010 with 5,000 cycles and 315 docking stations and currently has more than 10,000 cycles and over 700 docking stations across London, available 24/7, 365 days a year. The study, which was conducted during the second half of 2014, consists of two parts: a longitudinal review of the hire scheme between its introduction in 2010 and November 2014, and a field survey in November 2014 in the form of face-to-face interviews with users of the cycle scheme to ascertain the existing limitations and difficulties experienced by those users and how the scheme could be improved in terms of capability and safety. The study also includes a correlation between the usage of the cycle scheme and the corresponding weather conditions. The main findings are that the annual number of hires (hiring frequency) increased from just over two million in 2010 to just under ten million in 2014. The field survey showed that 80% of the users are satisfied with the performance of the scheme, whilst 50% of the users raised concerns about the safety of using the available cycle routes and infrastructure. The study also revealed that a high percentage of the cycle trips were relatively short (less than 30 minutes). Although weather conditions had some effect on cycling, the cost of using the cycle scheme and the main events in London had a greater effect on the number of cycle hires. The key conclusions are that, despite the safety concerns and the lack of infrastructure for continuous routes, an encouraging number of people opted for cycling as a clean, affordable and healthy mode of transport. There is a need to expand the scheme by providing more cycles and docking stations and to support this with more well-designed and maintained cycle routes. More details about the development of the London Cycle Hire Scheme during the last five years, its performance and the key issues raised by the surveyed users will be reported in the full version of the paper.

Keywords: cycling mode of transport, london cycle hire scheme, safety, environmental and health benefits, user satisfaction

Procedia PDF Downloads 380
1343 Provide Adequate Protection to Avoid Secondary Victimization: Ensuring the Rights of the Child Victims in the Criminal Justice System

Authors: Muthukuda Arachchige Dona Shiroma Jeeva Shirajanie Niriella

Abstract:

The necessity of protecting the rights of victims of crime is a matter of concern today. In the criminal justice system, child victims who are subjected to sexual abuse/violence are more vulnerable than other crime victims. From the moment they go to the police to lodge a complaint until the end of the court proceedings, these victims are re-victimized in the criminal justice system. The rights of suspects, the accused and convicts are recognized and guaranteed by the constitution under the fair trial norm, by contemporary penal laws in which crime is viewed as an offence against the State, and by the existing criminal justice systems in many jurisdictions, including Sri Lanka. Against this backdrop, a reasonable question arises as to whether the existing criminal justice system, especially one which follows the adversarial mode of judicial trial, protects the fair trial norm in the criminal justice process. Therefore, this paper intends to discuss the rights of sexually abused child victims in the criminal justice system in order to redress the imbalance between the rights of the wrongdoer and the victim, and to suggest legal reforms to strengthen their rights in the criminal justice system, which is essential to end secondary victimization. The paper takes Sri Lanka as a sample to discuss this issue. The paper looks at how child victims are marginalized in the traditional adversarial model of the justice process, whether the contemporary penal laws adequately protect the rights of these victims, and whether the current laws set out provisions to provide sufficient assistance and protection to them. The study further deals with the important principles adopted in international human rights law relating to the protection of the rights of child victims in sexual offence cases. In this research paper, the rights of child victims in the investigation, trial and post-trial stages of the criminal justice process will be assessed. This research contains an extensive scrutiny of relevant international standards and local statutory provisions. Case law, books, journal articles and government publications such as commission reports on this topic are rigorously reviewed as secondary resources. Further, 25 randomly selected child victims of sexual offences from cases decided in the last two years, police officers from the five police divisions where the highest numbers of sexual offences were reported in the last two years, and judicial officers, both Magistrates and High Court Judges, from the same judicial zones are interviewed. These data will be analyzed in order to find out the reasons for this specific sexual victimization, the needs of these victims at various stages of the criminal justice system, the relationship between victimization and offending, and the difficulties and problems that these victims come across in the criminal justice system. The author argues that child victims are considerably neglected and their rights are not adequately protected in the adversarial model of the criminal justice process.

Keywords: child victims of sexual violence, criminal justice system, international standards, rights of child victims, Sri Lanka

Procedia PDF Downloads 360
1342 Potential Opportunity and Challenge of Developing Organic Rankine Cycle Geothermal Power Plant in China Based on an Energy-Economic Model

Authors: Jiachen Wang, Dongxu Ji

Abstract:

Geothermal power generation is a mature technology with zero carbon emissions and stable power output, which could play a vital role as an optimal base-load substitute in China’s future decarbonized society. However, the development of geothermal power plants in China has stagnated for a decade due to the underestimation of geothermal energy and insufficient supportive policy. A lack of understanding of the potential value of base-load technology and of the environmental benefits is the critical reason for the disappointing policy support. This paper proposes an energy-economic model to uncover the potential benefits of developing a geothermal power plant in Puer, including the value of base-load power generation and the environmental and economic benefits. Optimization of the Organic Rankine Cycle (ORC) for maximum power output and minimum levelized cost of electricity was first conducted. This process aimed at finding the optimal working fluid, turbine inlet pressure, pinch point temperature difference and superheat degree. The optimal ORC model was then fed into the energy-economic model to simulate the potential economic and environmental benefits. The impact of the geothermal power plant was tested under three scenarios: a carbon trading market, a direct subsidy per unit of electricity generated, and no policy support. In addition, the geothermal reservoir requirements, including geothermal temperature and mass flow rate, for the technology to be competitive with other renewables were listed. The results indicated that the ORC power plant has significant economic and environmental benefits over other renewable power generation technologies when a carbon trading market or subsidy support is implemented. At the same time, developers must locate geothermal reservoirs with a minimum temperature of 130 °C and a minimum mass flow rate of 50 kg/s to guarantee a profitable project under the no-support scenario.
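
As a rough, hedged illustration of the kind of energy-economic screening described above, the sketch below computes a simple levelized cost of electricity (LCOE) for a candidate ORC plant and compares the no-policy case against carbon-trading and direct-subsidy scenarios. All numerical inputs (capital cost, discount rate, carbon price, subsidy level, avoided emissions) are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of the LCOE-based scenario comparison described above.
# All numeric inputs are hypothetical placeholders, not values from the study.

def crf(rate: float, years: int) -> float:
    """Capital recovery factor used to annualise the investment cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, opex_per_year, energy_mwh_per_year, rate=0.08, years=25):
    """Levelized cost of electricity in $/MWh."""
    annualised_capex = capex * crf(rate, years)
    return (annualised_capex + opex_per_year) / energy_mwh_per_year

# Hypothetical ORC geothermal plant
base = lcoe(capex=40e6, opex_per_year=1.2e6, energy_mwh_per_year=70_000)

carbon_credit = 25 * 0.8   # $/tCO2 carbon price * assumed tCO2 avoided per MWh
subsidy = 15               # assumed direct generation subsidy, $/MWh

print(f"No-policy LCOE:       {base:6.1f} $/MWh")
print(f"With carbon trading:  {base - carbon_credit:6.1f} $/MWh")
print(f"With direct subsidy:  {base - subsidy:6.1f} $/MWh")
```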

Keywords: geothermal power generation, optimization, energy model, thermodynamics

Procedia PDF Downloads 59
1341 The Rise of Blue Water Navy and its Implication for the Region

Authors: Riddhi Chopra

Abstract:

Alfred Thayer Mahan described the sea as a ‘great common,’ which would serve as a medium for communication, trade, and transport. The seas of Asia are witnessing an intriguing historical anomaly – the rise of an indigenous maritime power against the backdrop of US domination over the region. As China transforms from an inward-leaning economy to an outward-leaning one, it has become increasingly dependent on the global sea; as a result, we witness an evolution in its maritime strategy from near-seas defense to far-seas deployment. It is not only patrolling international waters but has also built a network of civilian and military infrastructure across the disputed oceanic expanse. The paper analyses the reorientation of China from a naval power to a blue water navy in an era of extensive globalisation. The actions of the Chinese have created a zone of high alert amongst its neighbors such as Japan, the Philippines, Vietnam and North Korea. These nations are trying to align themselves so as to counter China’s growing brinkmanship, but China has been pursuing its claims through a carefully calibrated strategy in the region, shunning any coercive measures taken by other forces. If China continues to expand its maritime boundaries, its neighbors – all smaller and weaker Asian nations – would be limited to a narrow band of the sea along their coastlines. Hence it is essential for the US to intervene and support its allies to offset Chinese supremacy. The paper intends to provide a profound analysis of the disputes in the South China Sea and the East China Sea, focusing on the Philippines and Japan respectively. Moreover, the paper attempts to give an account of US involvement in the region and its alignment with its South Asian allies. The geographic dynamics are said to breed a coalition constraining the strategic ambitions of China as well as those of the weak littoral states. China has conducted behind-the-scenes diplomacy, trying to persuade its neighbors to support its position on the territorial disputes. These efforts have been successful in creating fault lines in ASEAN, thereby undermining the regional integrity needed to reach a consensus on the issue. Chinese diplomatic efforts have also forced the US to revisit its foreign policy and engage with players like Cambodia and Laos. The current scenario in the SCS points to a strong Chinese hold trying to outpace all others with no regard to international law. Chinese activities are in contrast with US principles like freedom of navigation, thereby signaling the US to take bold actions to prevent Chinese hegemony in the region. The paper ultimately seeks to explore the changing power dynamics among the various claimants, where a rival superpower like the US, by pursuing the traditional policy of alliance formation, can play a decisive role in changing the status quo in the arena and consequently determine the future trajectory.

Keywords: China, East China Sea, South China Sea, USA

Procedia PDF Downloads 232
1340 Microalgae for Plant Biostimulants on Whey and Dairy Wastewaters

Authors: Sergejs Kolesovs, Pavels Semjonovs

Abstract:

Whey and dairy wastewaters, if disposed of in the environment without proper treatment, pose serious environmental risks, contributing to overall environmental pollution and climate change. Biological treatment of wastewater is considered the most eco-friendly approach, as compared to chemical treatment methods. Research shows that dairy wastewater can potentially be remediated by the use of microalgae, thus significantly reducing the content of carbohydrates, P, N, K and other pollutants. Moreover, it has been shown that the use of dairy wastewaters results in higher microalgal biomass production. In recent decades microalgal biomass has attracted great interest for its potential applications in pharmaceuticals, biomedicine, health supplementation, cosmetics, animal feed, plant protection, bioremediation and biofuels. It was shown that lipid productivity on whey and dairy wastewater is higher than on standard cultivation media and is achieved without the need to induce specific stress conditions such as N starvation. Moreover, microalgal biomass production, usually associated with high production costs, may benefit in two ways – enhanced microalgal biomass or target-substance productivity on a cheap growth substrate, and effective management of whey and dairy wastewaters – both of which are significant for decreasing total production costs. This becomes especially important when large-volume, low-cost industrial microalgal biomass production is anticipated for further use in crop agriculture as plant growth stimulants, biopesticides, soil fertilisers or remediating solutions. The environmental load of dairy wastewaters can be significantly decreased when microalgae are grown in coculture with other microorganisms. This enhances the utilisation of lactose, which is the main C source in whey and dairy wastewaters and is not easily metabolised by most of the microalgal species chosen. Our study shows that certain microalgae strains can be used to treat industrial wastewaters containing residual sugars and to decrease their concentration, confirming that further extensive research on dairy wastewater pre-treatment options for effective cultivation of microalgae, carbon uptake and metabolism, strain selection and the choice of coculture candidates is needed for further optimisation of the process.

Keywords: microalgae, whey, dairy wastewaters, sustainability, plant biostimulants

Procedia PDF Downloads 85
1339 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of chronic kidney disease (CKD). This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic values for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory, laboratory, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory-based data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
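
The modelling step described above can be summarised with a small, hedged sketch: two tree-based classifiers are trained once on non-laboratory predictors only and once with laboratory variables added, and their discrimination is compared. The dataframe `df`, the column names and the target variable below are hypothetical stand-ins, not the study's actual dataset, variables or preprocessing.

```python
# Illustrative sketch of the with/without-laboratory-data comparison described above.
# `df`, the feature names and the target column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

non_lab = ["age", "sex", "bmi", "waist_circumference"]
with_lab = non_lab + ["creatinine", "egfr", "fasting_glucose"]

def evaluate(df: pd.DataFrame, features: list[str]) -> dict:
    """Train both models on the given feature subset and return test AUCs."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["ckd_within_followup"], test_size=0.3, random_state=0)
    scores = {}
    for name, model in [
        ("RandomForest", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("XGBoost", XGBClassifier(n_estimators=300, eval_metric="logloss")),
    ]:
        model.fit(X_train, y_train)
        scores[name] = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return scores

# Compare discrimination with and without laboratory data:
# evaluate(df, non_lab) vs evaluate(df, with_lab)
```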

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 95
1338 Testing Depression in Awareness Space: A Proposal to Evaluate Whether a Psychotherapeutic Method Based on Spatial Cognition and Imagination Therapy Cures Moderate Depression

Authors: Lucas Derks, Christine Beenhakker, Michiel Brandt, Gert Arts, Ruud van Langeveld

Abstract:

Background: The method Depression in Awareness Space (DAS) is a psychotherapeutic intervention technique based on the principles of spatial cognition and imagination therapy with spatial components. The basic assumptions are that mental space is the primary organizing principle in the mind and that all psychological issues can be treated by first locating and then relocating the conceptualizations involved. Most clinical experience was gathered over the last 20 years in the area of social issues (with the social panorama model). The latter work led to the conclusion that a mental object (image) gains emotional impact when it is placed more centrally, closer and higher in the visual field – and vice versa. Changing the locations of mental objects in space thus alters the (socio-)emotional meaning of the relationships. The experience of depression seems to be always associated with darkness. Psychologists tend to see the link between depression and darkness as a metaphor. However, clinical practice hints at the existence of more literal forms of darkness. Aims: The aim of the method Depression in Awareness Space is to reduce the distress of clients with depression in clinical counseling practice, as a reliable alternative method of psychological therapy for the treatment of depression. The method aims at making dark areas smaller, lighter and more transparent in order to identify the problem or the cause of the depression which lies behind the darkness. It was hypothesized that the darkness is a subjective side-effect of the neurological process of repression. After reducing the dark clouds, the real problem behind the depression becomes more visible, allowing the client to work on it and in that way reduce their feelings of depression. This makes repression of the issue obsolete. Results: Clients could easily get into their 'sadness' when asked to do so, and finding the location of the dark zones proved fairly easy as well. In a recent pilot study with five participants with mild depressive symptoms (measured on two different scales and tested against an untreated control group with similar symptoms), the first results were also very promising. If the mental spatial approach to depression can be proven to be really effective, this would be very good news. The Society of Mental Space Psychology is now seeking sponsorship for an up-scaled experiment. Conclusions: For spatial cognition and the research into spatial psychological phenomena, the discovery of dark areas can be a step forward. Besides its pure scientific interest, this discovery has a clinical implication if darkness can be connected to depression. Also, darkness seems to be more than a metaphorical expression. Progress can be monitored with measurement tools that quantify the level of depressive symptoms and by reviewing the areas of darkness.

Keywords: depression, spatial cognition, spatial imagery, social panorama

Procedia PDF Downloads 161
1337 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group

Authors: Antonio Martín-Pardo

Abstract:

With regard to the context in which this paper is set, it is noteworthy that the current criminal policy model in which we find ourselves immersed, termed by part of the doctrine the citizen security model, is characterized by a marked tendency towards the discrediting of expert knowledge. This type of technical knowledge has been displaced by common sense and by the daily experience of the people at the time of legislative drafting, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal-political scene, we still find valuable efforts on the side of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain carried out by the Cannabis Policy Studies Group (hereinafter referred to as ‘GEPCA’). GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative solution must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines through which this technical proposal is developed, with the purpose of its dissemination and discussion at the Congress. The basic methodology of the proposal is inductive-expository. Firstly, we will offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization will touch on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Secondly, we will get to the proposal proper by detailing the minutiae of the three main cannabis access channels that are proposed, namely: the regulated market, associations of cannabis users, and personal self-cultivation. For each of these options, especially the first two, special attention will be paid both to the production and processing of the substance and to the necessary administrative control of the activity. Finally, in a third block, some notes will be given on a series of subjects that surround the different access options just mentioned and that give fullness and coherence to the proposal. Among these related issues are the consumption and possession of the substance; advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; the need to articulate evaluation instruments for the entire process; etc. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, clearly unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it.

Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation

Procedia PDF Downloads 112
1336 Implementation of Quality Function Development to Incorporate Customer’s Value in the Conceptual Design Stage of Construction Projects

Authors: Ayedh Alqahtani

Abstract:

Many construction firms in Saudi Arabia dedicated to building projects agree that the most important factor in the real estate market is the value that they can give to their customer. These firms understand the value of their client in different ways. Value can be defined as the size of the building project in relation to the cost, the design quality of the materials utilized in finish work, or any other features of the building's rooms, such as the bathroom. Value can also be understood as something suitable for the money the client is investing in the new property. A quality tool is required to support companies in achieving a solution for the building project and in understanding and managing the customer’s needs. The Quality Function Development (QFD) method can play this role since, unlike other conventional quality management tools, QFD is a valuable and very flexible design tool that takes into account the voice of the customer (VOC). Currently, organizations and agencies are seeking suitable models that can deal better with uncertainty and that are flexible and easy to use. The primary aim of this research project is to incorporate the customer’s requirements in the conceptual design of construction projects. Towards this goal, QFD is selected due to its capability to integrate the design requirements to meet the customer’s needs. To develop QFD, this research focuses on the contribution of the different (significantly weighted) input factors that represent the main variables influencing QFD and the subsequent analysis of the techniques used to measure them. First of all, this research will review the literature to determine the current practice of QFD in construction projects. Then, the researcher will review the literature to define the current customers of residential projects and gather information on customers’ requirements for the design of residential buildings. After that, qualitative survey research will be conducted to rank customers’ needs and obtain the views of stakeholder practitioners about how these needs can affect their satisfaction. Moreover, a qualitative focus group with the members of the design team will be conducted to determine the improvement levels and technical details for the design of residential buildings. Finally, the QFD will be developed to establish the degree of significance of the design solutions.
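
To make the core QFD (house of quality) calculation concrete, the toy sketch below maps weighted customer requirements through a relationship matrix to rank design requirements. The requirements, weights and relationship scores are illustrative assumptions, not data gathered in this study.

```python
# Toy illustration of the central QFD (house of quality) step:
# weighted customer requirements are mapped through a relationship matrix
# (9 = strong, 3 = moderate, 1 = weak, 0 = none) to prioritise design requirements.
# The needs, weights and scores below are hypothetical, not the study's data.
import numpy as np

customer_needs = ["spacious rooms", "quality finishes", "low running cost"]
weights = np.array([5, 4, 3])            # importance from a customer survey (1-5)

design_requirements = ["floor area", "material grade", "insulation level"]
relationship = np.array([
    [9, 1, 0],   # spacious rooms
    [1, 9, 3],   # quality finishes
    [0, 3, 9],   # low running cost
])

technical_importance = weights @ relationship   # weighted column sums
for req, score in sorted(zip(design_requirements, technical_importance),
                         key=lambda pair: -pair[1]):
    print(f"{req:16s} priority = {score}")
```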

Keywords: quality function development, construction projects, Saudi Arabia, quality tools

Procedia PDF Downloads 111
1335 Pyramid of Deradicalization: Causes and Possible Solutions

Authors: Ashir Ahmed

Abstract:

Generally, radicalization happens when a person's thinking and behaviour become significantly different from how most members of their society and community view social issues and participate politically. Radicalization often leads to violent extremism, which refers to the beliefs and actions of people who support or use violence to achieve ideological, religious or political goals. Studies on radicalization negate the common myths that someone must be in a group to be radicalised or that anyone who experiences radical thoughts is a violent extremist. Moreover, it is erroneous to suggest that radicalisation is always linked to religion. Generally, the common motives of radicalization include ideological, issue-based, ethno-nationalist or separatist underpinnings. Moreover, there are a number of factors that further augment the chances of someone being radicalised and choosing the path of violent extremism and possibly terrorism. Since there are a number of (and sometimes quite different) factors contributing to radicalization and violent extremism, it is highly unlikely that a single solution could produce effective outcomes in dealing with radicalization, violent extremism and terrorism. The pathway to deradicalization, like the pathway to radicalisation, is different for everyone. Considering the need for customized deradicalization resolutions, this study proposes a multi-tier framework, called the ‘pyramid of deradicalization’, that first helps identify the stage at which an individual could be on the radicalization pathway and then proposes a customized strategy to deal with the respective stage. The first tier (tier 1) addresses the broader community and proposes a ‘universal approach’ aiming to offer community-based design and delivery of educational programs to raise awareness and provide general information on possible factors leading to radicalization and their remedies. The second tier focuses on the members of the community who are more vulnerable and are disengaged from the rest of the community. This tier proposes a ‘targeted approach’, targeting the vulnerable members of the community through early intervention, such as providing anonymous help lines where people feel confident and comfortable in seeking help without fearing the disclosure of their identity. The third tier focuses on people showing clear evidence of moving toward extremism or becoming radicalized. People falling into this tier are to be supported through an ‘interventionist approach’, which advocates community engagement and community policing, introducing deradicalization programmes to the targeted individuals and looking after their physical and mental health issues. The fourth and last tier suggests strategies to deal with people who are actively breaking the law. The ‘enforcement approach’ suggests measures such as strong law enforcement, fairness and accuracy in reporting radicalization events, unbiased treatment under the law regardless of gender, race, nationality or religion, and strengthening family connections. It is anticipated that the operationalization of the proposed framework (the ‘pyramid of deradicalization’) would help in categorising people according to their tendency to become radicalized and then offer an appropriate strategy to make them valuable and peaceful members of the community.

Keywords: deradicalization, framework, terrorism, violent extremism

Procedia PDF Downloads 256
1334 Candida antartica Lipase Assisted Enrichment of n-3 PUFA in Indian Sardine Oil

Authors: Prasanna Belur, P. R. Ashwini, Sampath Charanyaa, I. Regupathi

Abstract:

Indian oil sardine (Sardinella longiceps) is one of the richest and cheapest sources of n-3 polyunsaturated fatty acids (n-3 PUFA) such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). The health benefits conferred by n-3 PUFA upon consumption, in the prevention and treatment of coronary, neuromuscular and immunological disorders and allergic conditions, are well documented. Natural refined Indian sardine oil generally contains about 25% (w/w) n-3 PUFA along with various unsaturated and saturated fatty acids in the form of mono-, di- and triglycerides. Having a high concentration of n-3 PUFA in the glyceride form is most desirable for human consumption to obtain maximum health benefits. Thus, enhancing the n-3 PUFA content while retaining it in the glyceride form using green technology is the need of the hour. In this study, refined Indian sardine oil was subjected to selective hydrolysis by Candida antartica lipase to enhance the n-3 PUFA content. The degree of hydrolysis and the enhancement of n-3 PUFA content were estimated by determining the acid value, iodine value, and EPA and DHA content (by gas chromatographic methods after derivatization) before and after hydrolysis. Various reaction parameters such as pH, temperature, enzyme load, lipid-to-aqueous-phase volume ratio and incubation time were optimized by conducting trials with a one-parameter-at-a-time approach. Incubating the enzyme solution with refined sardine oil at a volume ratio of 1:1, at pH 7.0, for 60 minutes at 50 °C, with an enzyme load of 60 mg/ml was found to be optimal. After enzymatic treatment, the oil was refined to remove free fatty acids and moisture using previously optimized refining technology. Enzymatic treatment under the optimal conditions resulted in a 12.11% enhancement in the degree of hydrolysis. The iodine number increased by 9.7%, and the n-3 PUFA content was enhanced by 112% (w/w). Selective enhancement of n-3 PUFA glycerides, eliminating saturated and unsaturated fatty acids from the oil using the enzyme, is an interesting proposition, as this technique is environmentally friendly, cost-effective and provides a natural source of n-3 PUFA-rich oil.

Keywords: Candida antartica, lipase, n-3 polyunsaturated fatty acids, sardine oil

Procedia PDF Downloads 216
1333 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample. A biosensor carries out biological detection via a linked transducer and converts the biological response into an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD. The laser scribing technique was applied to reduce the GO layers and generate rGO. The micro-scale morphological structures of rGO and GO were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a developed graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters to be assessed include the layer thickness and the continuous environment. The results presented show high accuracy and repeatability, achieving low-cost productivity.

Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy

Procedia PDF Downloads 107
1332 Ownership and Shareholder Schemes Effects on Airport Corporate Strategy in Europe

Authors: Dimitrios Dimitriou, Maria Sartzetaki

Abstract:

In the early days of civil aviation, airports were totally state-owned companies under the control of national authorities or regional governmental bodies. Since then, the picture has totally changed: airport privatisation and airport business commercialisation have become key success factors to stimulate air transport demand, generate revenues and attract investors, linked to the reliability and resilience of the air transport system. Nowadays, airport corporate strategy deals with policies and actions that essentially affect the business plans, the financial targets and the economic footprint in the regional economy the airports serve. Therefore, exploring airport corporate strategy is essential to support decisions in business planning, management efficiency, sustainable development and investment attractiveness on the one hand, and to define policies towards traffic development, revenue generation, capacity expansion, cost efficiency and corporate social responsibility on the other. This paper explores key outputs in airport corporate strategy for different ownership schemes. The airport corporations are grouped into three major schemes: (a) public, in which the public airport operator acts as part of the government administration or as a corporatised public operator; (b) mixed, in which the majority of the shares and the corporate strategy is driven by the private or the public sector; and (c) private, in which the airport strategy is driven by the key aspects of globalisation and liberalisation of the aviation sector. Through a systemic approach, the key drivers in corporate strategy for modern airport business structures are defined. The key objectives are to define the key strategic opportunities and challenges and to assess the corporate goals and risks towards sustainable business development for each scheme. The analysis is based on an extensive cross-sectional dataset for a sample of busy European airports, providing results on key corporate strategy priorities, risks and business models. The conventional wisdom is to highlight key messages to authorities, institutes and professionals on airport corporate strategy trends and directions.

Keywords: airport corporate strategy, airport ownership, airports business models, corporate risks

Procedia PDF Downloads 297
1331 The Effect of the Performance Evolution System on the Productivity of Administrating and a Case Study

Authors: Ertuğrul Ferhat Yilmaz, Ali Riza Perçin

Abstract:

In business enterprises that have implemented modern management principles, the most important issues are increasing the performance of workers and maximising income. Through the twentieth century, the rapid development of the data processing and communication sectors, together with free trade policies and the rise of multinational enterprises, erased economic borders and turned local rivalry into global rivalry. Under these competitive conditions, business enterprises have to operate actively and productively in order to continue to exist. The employees working in business enterprises are the most important factor of production. Therefore, business enterprises, recognising the importance of the human factor for increasing profit, have used the performance evaluation system to increase the success and development of their employees. Performance evaluation aims to increase manpower productivity by using employees in an active way. Furthermore, this system assists the wage policies implemented in the enterprise, the determination of strategic plans over the short and long terms, promotion decisions, the determination of the educational needs of employees, and decisions such as dismissal and job rotation. It requires a great deal of effort to catch the pace of change in the working realm and to keep ourselves up-to-date. Getting quality in people and having an effect in the workplace depend largely on the knowledge and competence of managers and prospective managers. Therefore, managers need to use performance evaluation systems in order to base their managerial decisions on sound data. This study aims at finding out whether organizations effectively use performance evaluation systems, how much importance is placed on this issue and how much the results of the evaluations have an effect on employees. Whether organizations have a competitive advantage and can continue their activities depends to a large extent on how effectively and efficiently they use their employees. Therefore, it is of vital importance to evaluate employees' performance and to improve them according to the results of that evaluation. The performance evaluation system, which evaluates employees according to criteria related to the organization, has become one of the most important topics for management. By means of the important ends mentioned above, the performance evaluation system appears to be a tool that can be used to improve the efficiency and effectiveness of an organization. Because of its contribution to organizational success, considering performance evaluation on the axis of efficiency shows the importance of this study from a different angle. In this study, we explain the performance evaluation system, efficiency and the relation between these two concepts. We also analyze the results of questionnaires conducted on textile workers in the city of Edirne. We obtained positive answers to the questions about the effects of performance evaluation on efficiency. After factor analysis, efficiency and motivation, which are determined as factors of the performance evaluation system, have the biggest variance (19.703%) in our sample. Thus, this study shows that objective performance evaluation increases the efficiency and motivation of employees.
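
As a hedged sketch of the factor analysis step mentioned above, the snippet below extracts factors from Likert-scale questionnaire responses and reports the proportion of variance explained per factor. The dataframe `responses`, the number of factors and the choice of the factor_analyzer package are assumptions for illustration; the study's actual statistical tooling is not specified here.

```python
# Hedged sketch of the exploratory factor analysis step described above.
# `responses` (a dataframe of Likert-scale survey items) and the use of the
# factor_analyzer package are assumptions, not the study's actual tooling.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def extract_factors(responses: pd.DataFrame, n_factors: int = 2):
    """Fit a varimax-rotated factor model and return loadings and variance shares."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(responses)
    loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
    _, proportional_variance, _ = fa.get_factor_variance()
    return loadings, proportional_variance

# loadings, prop_var = extract_factors(responses)
# A factor grouping the efficiency/motivation items with the largest prop_var
# would correspond to the ~19.7% variance share reported in the study.
```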

Keywords: performance, performance evolution system, productivity, Edirne region

Procedia PDF Downloads 297
1330 Study of Porous Metallic Support for Intermediate-Temperature Solid Oxide Fuel Cells

Authors: S. Belakry, D. Fasquelle, A. Rolle, E. Capoen, R. N. Vannier, J. C. Carru

Abstract:

Solid oxide fuel cells (SOFCs) are promising devices for energy conversion due to their high electrical efficiency and eco-friendly behavior. Their performance is not only influenced by the microstructural and electrical properties of the electrodes and electrolyte but also depends on the interactions at the interfaces. Nowadays, commercial SOFCs are electrically efficient at high operating temperatures, typically between 800 and 1000 °C, which restricts their real-life applications. The present work pursues the objectives of reducing the operating temperature and developing cost-effective intermediate-temperature solid oxide fuel cells (IT-SOFCs). This work focuses on the development of metal-supported solid oxide fuel cells (MS-IT-SOFCs) that would provide cheaper SOFC cells with increased lifetime and reduced operating temperature. Within this framework, the local company TIBTECH contributes its skills in the manufacturing of porous metal supports. This part of the work focuses on the physical, chemical, and electrical characterization of porous metallic supports (stainless steel 316L and FeCrAl alloy) under different exposure conditions of temperature and atmosphere, by studying the oxidation, mechanical resistance, and electrical conductivity of the materials. Within the target operating temperature range (i.e., 500 to 700 °C), stainless steel 316L and the FeCrAl alloy oxidize slightly in air and H2 but do not deform, whereas under an Ar atmosphere they oxidize more than in the previously mentioned atmospheres. Above 700 °C under air and Ar, the two metallic supports undergo severe oxidation. From 500 to 700 °C, the resistivity of FeCrAl increases by 55%. Nevertheless, the FeCrAl resistivity increases more slowly than that of stainless steel 316L. This study allows us to verify the compatibility of the electrode and electrolyte materials with the metallic supports under the operating requirements of the IT-SOFC cell. The characterizations made in this context will also allow us to choose the most suitable fabrication process for all functional layers in order to limit the oxidation of the metallic supports.

Keywords: stainless steel 316L, FeCrAl alloy, solid oxide fuel cells, porous metallic support

Procedia PDF Downloads 85
1329 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of the product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system capabilities. Reliability is an important system design criterion for manufacturers to achieve high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate the company needs to realize its production plan. When considering only the corrective maintenance downtime of the system, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners to improve system performance by adopting suitable maintenance strategies. For conventional availability analysis, the failure and repair time probability distributions of each component in the system should be known. However, companies generally do not have statistics or quality control departments to store such a large amount of data. Real events or situations are defined deterministically instead of using stochastic data for the complete description of real systems. Fuzzy set theory is an alternative approach used to analyze the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR, namely 15%, 20% and 25%, were chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. This method is easy to apply in any repairable production system by practitioners working in industry. It enables reliability engineers, managers and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey. The study focuses on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners to analyze system qualifications and find better results for their working conditions. Thus, much more detailed information about the system characteristics is obtained.
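
A minimal sketch of the fuzzy availability idea described above, assuming symmetric triangular fuzzy numbers and simple interval arithmetic on A = MTBF / (MTBF + MTTR); the MTBF/MTTR values and the ±20% spread below are illustrative, not data from the battery plant.

```python
# Minimal sketch of fuzzy availability with triangular fuzzy numbers (TFNs).
# The MTBF/MTTR values and the ±20% spread are illustrative placeholders.

def tfn(center: float, spread: float):
    """Symmetric triangular fuzzy number represented as (low, mode, high)."""
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    """A = MTBF / (MTBF + MTTR), evaluated by interval arithmetic.
    Availability is lowest for the smallest MTBF combined with the largest MTTR,
    and highest for the opposite corner, so the TFN bounds follow directly."""
    low = mtbf[0] / (mtbf[0] + mttr[2])
    mode = mtbf[1] / (mtbf[1] + mttr[1])
    high = mtbf[2] / (mtbf[2] + mttr[0])
    return (low, mode, high)

mtbf = tfn(120.0, 0.20)   # hours, ±20% spread (assumed)
mttr = tfn(4.0, 0.20)     # hours, ±20% spread (assumed)
print(fuzzy_availability(mtbf, mttr))   # approx. (0.952, 0.968, 0.978)
```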

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 216
1328 Effect of Compaction Method on the Mechanical and Anisotropic Properties of Asphalt Mixtures

Authors: Mai Sirhan, Arieh Sidess

Abstract:

Asphaltic mixture is a heterogeneous material composed of three main components: aggregates, bitumen and air voids. Professional experience and the scientific literature categorize asphaltic mixture as a viscoelastic material whose behavior is determined by temperature and loading rate. Characterization of the properties of the asphaltic mixture used under service conditions is done by compacting and testing cylindrical asphalt samples in the laboratory. These samples must closely resemble the internal structure of the mixture achieved in service and the mechanical characteristics of the compacted asphalt layer in the pavement. The laboratory samples are usually compacted at temperatures between 140 and 160 degrees Celsius. In this temperature range, the asphalt has a low degree of strength. The laboratory samples are compacted using dynamic or vibrational compaction methods. In the compaction process, the aggregates tend to align themselves in certain directions, which leads to anisotropic behavior of the asphaltic mixture. This issue was studied in the Strategic Highway Research Program (SHRP) research, which recommended using the gyratory compactor based on the assumption that this method best mimics compaction in service. In Israel, the Netivei Israel company is considering adopting the gyratory method as a replacement for the Marshall method used today. Therefore, the suitability of the gyratory method for use with Israeli asphaltic mixtures should be investigated. In this research, we aimed to examine the impact of the compaction method on the mechanical characteristics of the asphaltic mixtures and to evaluate the degree of anisotropy in relation to the compaction method. To carry out this research, samples were compacted in the vibratory and gyratory compactors. These samples were cylindrically cored both vertically (in the compaction direction) and horizontally (perpendicular to the compaction direction). The specimens were tested in dynamic modulus and permanent deformation tests. Comparison of the test results showed that: (1) specimens compacted by the vibratory compactor had higher dynamic modulus values than specimens compacted by the gyratory compactor; (2) both vibratory and gyratory compacted specimens had anisotropic behavior, especially at high temperatures, and the degree of anisotropy is higher in specimens compacted by the gyratory method; (3) specimens compacted by the vibratory method that were cored vertically had the highest resistance to rutting, whereas specimens compacted by the vibratory method that were cored horizontally had the lowest resistance to rutting; (4) these differences between the specimen types arise mainly from the different internal arrangements of the aggregates resulting from the compaction method; and (5) based on an initial prediction of the performance of a flexible pavement containing an asphalt layer with the characteristics obtained in this research, it can be concluded that the compaction method and the degree of anisotropy have a significant impact on the strains that develop in the pavement and on the pavement's resistance to fatigue and rutting defects.

Keywords: anisotropy, asphalt compaction, dynamic modulus, gyratory compactor, mechanical properties, permanent deformation, vibratory compactor

Procedia PDF Downloads 112
1327 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids

Authors: Ayalew Yimam Ali

Abstract:

The Y-shaped microchannel system is used to mix fluids of low or high viscosity, and the laminar flow of high-viscosity water–glycerol fluids makes mixing at the entrance Y-junction region a challenging issue. Acoustic streaming (AS) is a time-averaged, steady second-order flow phenomenon that can produce a rolling motion in the microchannel when a low-frequency acoustic transducer induces an acoustic wave in the flow field; it is a promising strategy to enhance diffusive mass transfer and mixing performance in laminar flow. In this study, molds of the 3D trapezoidal structure, with sharp-edge tip angles of 30° and a 0.3 mm spine sharp-edge tip depth, were manufactured from PMMA (polymethylmethacrylate) glass using advanced CNC cutting tools, and the microchannel was fabricated in PDMS (polydimethylsiloxane), with the structure extending longitudinally along the top surface of the Y-junction mixing region, in order to visualize the 3D rolling steady acoustic streaming and to evaluate mixing performance with high-viscosity miscible fluids. The 3D acoustic streaming flow patterns and mixing enhancement were investigated using the micro-particle image velocimetry (μPIV) technique with different spine depth lengths, channel widths, high volume flow rates, oscillation frequencies, and amplitudes. The velocity and vorticity flow fields show that a pair of 3D counter-rotating streaming vortices were created around the trapezoidal spine structure, with vorticity maps up to 8 times higher than in the case without acoustic streaming in the Y-junction with the high-viscosity water–glycerol mixture fluids. The mixing experiments used a fluorescent green dye solution in de-ionized water on one inlet side and de-ionized water–glycerol at different mass percentage ratios on the other inlet side of the Y-channel, and mixing performance was evaluated via the degree of mixing at different amplitudes, flow rates, frequencies, and spine sharp-tip edge angles using the grayscale value of pixel intensity with MATLAB software. The degree of mixing (M) was found to improve significantly, from 67.42% without acoustic streaming to 96.8% with acoustic streaming, for a 0.0986 μl/min flow rate, 12 kHz frequency and 40 V oscillation amplitude at y = 2.26 mm. The results suggest the creation of a new 3D steady streaming rolling motion at a high volume flow rate around the entrance junction mixing region, which promotes the mixing of two similar high-viscosity fluids inside the microchannel that cannot be mixed by laminar flow alone under such conditions.
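
The sketch below shows one commonly used intensity-based definition of the degree of mixing, computed from grayscale pixel values across a channel cross-section; it illustrates the kind of post-processing described above and is not necessarily the exact formula used in the study (the example arrays are synthetic).

```python
# One common intensity-based degree-of-mixing definition, computed from grayscale
# pixel values across the channel cross-section; a sketch of the post-processing
# described above, not the paper's exact formula. Example data are synthetic.
import numpy as np

def degree_of_mixing(intensity: np.ndarray) -> float:
    """M = 1 - std(I)/mean(I); M approaches 1 for a homogeneous (well-mixed)
    cross-section and decreases as the intensity field becomes more segregated."""
    intensity = intensity.astype(float)
    return 1.0 - intensity.std() / intensity.mean()

# Compare a segregated and a well-mixed synthetic cross-section (8-bit grayscale)
segregated = np.concatenate([np.full(500, 30.0), np.full(500, 220.0)])
mixed = np.random.default_rng(0).normal(125.0, 5.0, size=1000)
print(degree_of_mixing(segregated), degree_of_mixing(mixed))
```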

Keywords: nano fabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 10
1326 Service Blueprint for Improving Clinical Guideline Adherence via Mobile Health Technology

Authors: Y. O’Connor, C. Heavin, S. O’ Connor, J. Gallagher, J. Wu, J. O’Donoghue

Abstract:

Background: To improve the delivery of paediatric healthcare in resource-poor settings, Community Health Workers (CHWs) have been provided with a paper-based set of protocols known as Community Case Management (CCM). Yet research has shown that CHW adherence to CCM guidelines is poor, ultimately impacting health service delivery. Digitising the CCM guidelines via mobile technology is argued in the extant literature to improve CHW adherence. However, little research exists which outlines how (a) this process can be digitised and (b) adherence could be improved as a result. Aim: To explore how an electronic mobile version of CCM (eCCM) can overcome issues associated with the paper-based CCM protocol (poor adherence to guidelines) vis-à-vis service blueprinting. This service blueprint will outline how (a) the CCM process can be digitised using mobile Clinical Decision Support Systems software to support clinical decision-making and (b) adherence can be improved as a result. Method: Development of a single service blueprint for a standalone application which visually depicts the service processes (eCCM) when supporting the CHWs, using an application known as Supporting LIFE (Low cost Intervention For disEase control) as an exemplar. Results: A service blueprint is developed which illustrates how the eCCM solution can be utilised by CHWs to assist with the delivery of healthcare services to children. Leveraging smartphone technologies can (a) provide CHWs with just-in-time data to assist with their decision-making at the point of care and (b) improve CHW adherence to CCM guidelines. Conclusions: The development of the eCCM opens up opportunities for CHWs to leverage the inherent benefits of mobile devices to assist them with health service delivery in rural settings. To ensure that these benefits are achieved, it is imperative to comprehend the functionality and form of the eCCM service process. By creating such a service blueprint for an eCCM approach, CHWs are provided with a clear picture regarding the role of the eCCM solution, often resulting in buy-in from the end-users.

Keywords: adherence, community health workers, developing countries, mobile clinical decision support systems, CDSS, service blueprint

Procedia PDF Downloads 407
1325 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining the essential components of the system and representing an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The Continuous-Time Markov Chain (CTMC) is one of the probabilistic models that describe the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
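
As a hedged illustration of the likelihood-free idea discussed above, the sketch below runs rejection ABC around an exact stochastic (Gillespie) simulator. The model is a toy birth-death CTMC rather than the Repressilator analysed in the paper, and the priors, summary statistic and tolerance are illustrative choices.

```python
# Sketch of rejection ABC around a stochastic CTMC simulator. The simulator is a
# toy birth-death process, not the Repressilator; priors and tolerance are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(birth, death, x0=10, t_end=10.0):
    """Exact SSA trajectory of a toy birth-death CTMC; returns the final count."""
    t, x = 0.0, x0
    while True:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total == 0.0:
            break
        dt = rng.exponential(1.0 / total)
        if t + dt > t_end:
            break
        t += dt
        x += 1 if rng.random() < rates[0] / total else -1
    return x

def abc_rejection(observed, n_samples=200, tol=3.0):
    """Keep parameter draws whose simulated summary statistic is close to the data."""
    accepted = []
    while len(accepted) < n_samples:
        birth, death = rng.uniform(0.0, 5.0), rng.uniform(0.05, 1.0)  # priors
        if abs(gillespie_birth_death(birth, death) - observed) <= tol:
            accepted.append((birth, death))
    return np.array(accepted)

posterior = abc_rejection(observed=25)
print(posterior.mean(axis=0))   # approximate posterior mean of (birth, death)
```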

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 196
1324 Effectiveness with Respect to Time-To-Market and the Impacts of Late-Stage Design Changes in Rapid Development Life Cycles

Authors: Parth Shah

Abstract:

The author examines the recent trend in which business organizations are significantly reducing their development cycle times to stay competitive in today’s global marketplace. The author proposes a rapid systems engineering framework to address late design changes and allow for flexibility (i.e., to react to unexpected or late changes and their impacts) during the product development cycle using a Systems Engineering approach. A Systems Engineering approach is crucial in today’s product development to deliver complex products into the marketplace. Design changes can occur due to shortened timelines and also based on initial consumer feedback once a product or service is in the marketplace. The ability to react to change and address customer expectations in a responsive and cost-efficient manner is crucial for any organization to succeed. Past literature, research, and methods such as concurrent development, simultaneous engineering, knowledge management, component sharing, rapid product integration, tailored systems engineering processes, and studies on reducing product development cycles all suggest that a research gap exists in specifically addressing late design changes arising from the shortening of life cycles in increasingly competitive markets. The author’s research suggests that 1) product development cycle times are now measured in months instead of years, 2) more and more products have interdependent systems and environments that are fast-paced and resource critical, 3) product obsolescence occurs faster and more organizations are releasing products and services frequently, and 4) increasingly competitive markets are leading to customization based on consumer feedback. The author will quantify effectiveness with respect to success factors such as time-to-market, return on investment, life cycle time, and flexibility in late design changes, by complexity of product or service, number of late changes, and the ability to react to and reduce late design changes.

Keywords: product development, rapid systems engineering, scalability, systems engineering, systems integration, systems life cycle

Procedia PDF Downloads 197
1323 Comparison of the Thermal Behavior of Different Crystal Forms of Manganese(II) Oxalate

Authors: B. Donkova, M. Nedyalkova, D. Mehandjiev

Abstract:

Sparingly soluble manganese oxalate is an appropriate precursor for the preparation of nanosized manganese oxides, which have a wide range of technological applications. During the precipitation of manganese oxalate, three crystal forms can be obtained: α-MnC₂O₄·2H₂O (SG C2/c), γ-MnC₂O₄·2H₂O (SG P2₁2₁2₁) and orthorhombic MnC₂O₄·3H₂O (SG Pcca). The thermolysis of α-MnC₂O₄·2H₂O has been extensively studied over the years, while literature data for the other two forms are quite scarce. The aim of the present communication is to highlight the influence of the initial crystal structure on the decomposition mechanism of these three forms, their magnetic properties, the structure of the anhydrous oxalates, as well as the nature of the obtained oxides. For the characterization of the samples, XRD, SEM, DTA, TG, DSC, nitrogen adsorption, and in situ magnetic measurements were used. The dehydration proceeds in one step for α-MnC₂O₄·2H₂O and γ-MnC₂O₄·2H₂O, and in three steps for MnC₂O₄·3H₂O. The dehydration enthalpies are 97, 149 and 132 kJ/mol, respectively; to the best of our knowledge, the last two are reported for the first time. The magnetic measurements show that at room temperature all samples are antiferromagnetic; however, during the dehydration of α-MnC₂O₄·2H₂O the exchange interaction is preserved, for MnC₂O₄·3H₂O it changes to ferromagnetic above 35°C, and for γ-MnC₂O₄·2H₂O it changes twice, from antiferromagnetic to ferromagnetic, above 70°C. The experimental results for the magnetic properties are in accordance with the computational results obtained with the Wien2k code. The difference in the initial crystal structure of the forms used determines different changes in the specific surface area during dehydration and different extents of Mn(II) oxidation during decomposition in air, both being highest for α-MnC₂O₄·2H₂O. The isothermal decomposition of the different oxalate forms shows that the type and physicochemical properties of the oxides obtained at the same annealing temperature depend on the precursor used. Based on the results from the non-isothermal and isothermal experiments, and from the different methods used for characterization of the samples, a comparison of the nature, mechanism and peculiarities of the thermolysis of the different crystal forms of manganese oxalate was made, which clearly reveals the influence of the initial crystal structure. Acknowledgment: 'Science and Education for Smart Growth', project BG05M2OP001-2.009-0028, COST Action MP1306 'Modern Tools for Spectroscopy on Advanced Materials', and project DCOST-01/18 (Bulgarian Science Fund).

Keywords: crystal structure, magnetic properties, manganese oxalate, thermal behavior

Procedia PDF Downloads 164
1322 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces

Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur

Abstract:

In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchange between soil, plant cover and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. On the other hand, aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface ability to absorb the momentum of the airflow. In numerous applications of surface hydrology and meteorology, aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. In this context, it is important to consider the impact of atmospheric factors in general, and natural erosion in particular, on soil evolution, its characterization, and the prediction of its physical parameters. The study of wind-induced movements over a vegetated soil surface, whether spaced plants or a continuous plant cover, is motivated by significant research efforts in agronomy and biology. The major known problem in this area is crop damage by wind, which is a growing field of research. Obviously, most models of the soil surface require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of the humidity of the soil surface to take into account a volume component in the problem of backscattering of the radar signal. As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation due to the vegetation backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the layer of the soil surface. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of both layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology relies on a microwave/optical model which has been used to calculate the scattering behavior of the aerodynamic vegetation-covered area by defining the scattering of the vegetation and the soil below.
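A minimal sketch of the multi-scale idea, assuming a synthetic rough surface and the PyWavelets implementation of the Mallat algorithm: a 2D wavelet decomposition separates the surface into detail bands, and the energy per band serves as a crude per-scale roughness descriptor. The surface, wavelet choice and number of levels are illustrative, not the 2D MLS model of the paper.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
# Synthetic correlated rough surface (double cumulative sum of white noise).
surface = np.cumsum(np.cumsum(rng.standard_normal((128, 128)), axis=0), axis=1)

# Multi-level 2D wavelet decomposition (Mallat algorithm): each level isolates one spatial scale.
coeffs = pywt.wavedec2(surface, wavelet="db2", level=4)

# Energy per scale as a crude multi-scale roughness descriptor.
for level, detail in enumerate(coeffs[1:], start=1):
    energy = sum(np.sum(band ** 2) for band in detail)
    print(f"scale {level}: detail energy = {energy:.2f}")
```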

Keywords: aerodynamic, bi-dimensional, vegetation, synergistic

Procedia PDF Downloads 262
1321 Electrochemical and Microstructure Properties of Chromium-Graphene and SnZn-Graphene Oxide Composite Coatings

Authors: Rekha M. Y., Punith Kumar, Anshul Kamboj, Chandan Srivastava

Abstract:

Coatings play an important role in providing protection for a substrate and in improving surface quality. The use of graphene/graphene oxide (GO) in coating systems provides an environmentally friendly solution for protection against corrosion. Issues such as lack of scale, high cost and low quality limit the practical application of graphene/GO as a corrosion-resistant coating material. Another way to employ these materials for corrosion protection is to incorporate them into coatings that are conventionally used for corrosion protection. Due to the extraordinary properties of graphene/GO, it has been demonstrated that coatings containing graphene/GO are more corrosion resistant than pure metal/alloy coatings. In the present work, Cr-graphene and SnZn-GO composite coatings were investigated for their ability to enhance corrosion resistance compared to a pure Cr coating and a pure SnZn coating, respectively. All the coatings were electrodeposited over a mild-steel substrate. Graphene and GO were synthesized by the electrochemical exfoliation method and the modified Hummers’ method, respectively. For the Cr coatings, the microstructural study revealed that the addition of formic acid reduced the number of cracks in the coatings. Further addition of graphene to the Cr coating enhanced its morphology. Chemically synthesized ZnO nanoparticles were also embedded in the as-deposited Cr and Cr-graphene coatings to enhance the adhesion of the coating, improve the surface finish and increase the corrosion resistance of the coatings. Diffraction analysis revealed that the addition of graphene also altered the texture of the Cr coatings. For the SnZn alloy coatings, the morphological and topographical characterization revealed that the relative smoothness and compactness of the coatings increased with increasing GO content. The microstructural investigation revealed large-scale segregation of Zn-rich and Sn-rich phases in the pure SnZn coating. However, in the SnZn-GO composite coating a uniform distribution of the Zn phase in the Sn-rich matrix was observed. This distribution caused the early and uniform formation of ZnO, which is the corrosion product, yielding better corrosion resistance for the SnZn-GO composite coatings compared to the pure SnZn coating. A significant improvement in corrosion resistance, in terms of a reduction in corrosion current and corrosion rate and an increase in polarization resistance, was observed in the Cr coating containing graphene and in the SnZn coatings containing GO.

Keywords: coatings, corrosion, electrodeposition, graphene, graphene-oxide

Procedia PDF Downloads 166
1320 Analysis of Waiting Time and Drivers Fatigue at Manual Toll Plaza and Suggestion of an Automated Toll Tax Collection System

Authors: Muhammad Dawood Idrees, Maria Hafeez, Arsalan Ansari

Abstract:

Toll tax collection is one of the earliest methods of tax collection and revenue generation. This revenue is utilized for the development of road networks, their maintenance, and the connecting roads and highways across the country. Pakistan is a large country covering a wide area of land, and its road networks and motorways are an important means of connecting cities. Every day millions of people use motorways and have to stop at toll plazas to pay toll tax, as the majority of toll plazas collect toll tax manually. The purpose of this study is to calculate the waiting time of vehicles on the Karachi-Hyderabad (M-9) motorway. Karachi is the biggest city of Pakistan, and hundreds of thousands of people use this route to reach other cities. Currently, toll tax collection is a manual system, which is a major cause of long waiting times at the toll plaza. This study calculates the waiting time of vehicles, the fuel consumed while waiting, and the manpower employed at the toll plaza, since the whole process is manual, and it also considers the mental and physical fatigue of drivers. All wastage of resources is also calculated, and a feasible automatic toll tax collection system is proposed, which is beneficial not only in reducing waiting time but also in reducing fuel consumption, manpower requirements, and the physical and mental fatigue of drivers. A cost comparison in terms of wastage is also shown between the manual and the automatic toll tax collection system (E-Z Pass). Results of this study reveal that if an automatic toll collection system is implemented on the Karachi-Hyderabad motorway (M-9), there will be a significant reduction in the waiting time of vehicles, which leads to a reduction in fuel consumption, environmental pollution, and the mental and physical fatigue of drivers. All these reductions are also calculated in terms of money (Pakistani rupees), and it is found that millions of rupees can be saved by using an automatic toll collection system, which will help improve the economy of the country.
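A back-of-the-envelope sketch of how manual and electronic toll lanes can be compared, assuming simple M/M/1 queueing and made-up arrival rates, service rates and idling fuel costs; none of these figures come from the study.

```python
def mm1_wait_minutes(arrival_per_hour, service_per_hour):
    """Average time spent waiting in queue (Wq) for an M/M/1 queue, in minutes."""
    lam, mu = arrival_per_hour, service_per_hour
    if lam >= mu:
        return float("inf")  # unstable queue: waiting grows without bound
    return 60.0 * lam / (mu * (mu - lam))

arrivals = 300           # vehicles per hour per lane (assumed)
manual_rate = 320        # vehicles per hour a manual booth can serve (assumed)
etc_rate = 900           # vehicles per hour an electronic (E-Z Pass style) lane can serve (assumed)
fuel_cost_per_min = 4.0  # PKR of fuel burned while idling per minute (assumed)

for label, mu in [("manual booth", manual_rate), ("electronic lane", etc_rate)]:
    wq = mm1_wait_minutes(arrivals, mu)
    print(f"{label}: avg queue wait = {wq:.1f} min, "
          f"idling fuel cost = {wq * fuel_cost_per_min:.1f} PKR/vehicle")
```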

Keywords: toll tax collection, waiting time, wastages, driver fatigue

Procedia PDF Downloads 138
1319 Rethinking The Residential Paradigm: Regenerative Design and the Contemporary Housing Industry

Authors: Gabriela Lucas Sanchez

Abstract:

The contemporary housing industry is dominated by tract houses, which prioritize uniformity and cost-efficiency over environmental and ecological considerations. However, as the world faces the growing challenges of climate change and resource depletion, there is an urgent need to rethink the residential paradigm. This essay explores how regenerative practices can be integrated into standard residential designs to create a shift that reduces the environmental impact of housing and actively contributes to ecological health. Passive sustainable practices, such as passive solar design, natural ventilation, and the use of energy-efficient materials, aim to maximize resource use efficiency, minimize waste, and create healthy living environments. Regenerative practices, on the other hand, go beyond sustainability to work in harmony with natural systems, actively restoring and enriching the environment. Integrating these two approaches can redefine the residential paradigm, creating homes that reduce harm and positively impact the local ecosystem. The essay begins by exploring the principles and benefits of passive sustainable practices, discussing how they can reduce energy consumption and improve indoor environmental quality in standardized housing. Passive sustainability minimizes energy consumption through strategic design choices, such as optimizing building orientation, utilizing natural ventilation, and incorporating high-performance insulation and glazing. However, while sustainability efforts have been important steps in the right direction, a more holistic, regenerative approach is needed to address the root causes of environmental degradation. Regenerative development and design seek to go beyond simply reducing negative impacts, instead aiming to create built environments that actively contribute to restoring and enhancing natural systems. This shift in perspective is critical, as it recognizes the interdependence between human settlements and the natural world and the potential for buildings to serve as catalysts for positive change.

Keywords: passive sustainability, regenerative architecture, residential architecture, community

Procedia PDF Downloads 16
1318 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular material constrained to move between two ends in a cavity of a primary vibrating system. The damping effect results from the exchange of momentum during impacts of the granular material against the wall of the cavity. This damping has the advantage of being independent of the environment. Therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many papers have shown experimentally that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behavior of the granular particles in each divided area of the damper container is the same, the contact force of the primary system with all particles can be taken to be equal to the product of the number of divided areas and the contact force of the primary system with the granular material in one area. This approximation makes it possible to considerably reduce the calculation time. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level. It is also shown that the particle size and the particle material influence the damper performance.
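The following sketch illustrates the proposed speed-up under the stated assumption: only the particles of one representative divided area are evaluated with a linear spring-dashpot contact model, and the total wall force is approximated as that per-area force multiplied by the number of divided areas. The contact law, stiffness, damping and particle data are hypothetical values chosen for illustration.

```python
def contact_force(overlap, rel_velocity, k=1.0e5, c=50.0):
    """Linear spring-dashpot contact model: force is zero unless the particle overlaps the wall."""
    if overlap <= 0.0:
        return 0.0
    return k * overlap + c * rel_velocity

def wall_force_reduced(representative_particles, wall_pos, wall_vel, n_areas):
    """Approximate the total wall force as (number of divided areas) x (force from one
    representative area), assuming particles in every divided area behave identically."""
    per_area = sum(
        contact_force(x + r - wall_pos, v - wall_vel)
        for x, v, r in representative_particles
    )
    return n_areas * per_area

# One representative area with three particles (position, velocity, radius); 8 identical areas assumed.
particles = [(0.012, 0.3, 0.002), (0.009, -0.1, 0.002), (0.011, 0.2, 0.002)]
print(wall_force_reduced(particles, wall_pos=0.013, wall_vel=0.0, n_areas=8))
```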

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 450
1317 An Association Model to Correlate the Experimentally Determined Mixture Solubilities of Methyl 10-Undecenoate with Methyl Ricinoleate in Supercritical Carbon Dioxide

Authors: V. Mani Rathnam, Giridhar Madras

Abstract:

Fossil fuels are depleting rapidly as the demand for energy and its allied chemicals continuously increases in the modern world. Therefore, sustainable renewable energy sources based on non-edible oils are being explored as a viable option, as they do not compete with food commodities. Oils such as castor oil are rich in fatty acids and can thus be used for the synthesis of biodiesel, bio-lubricants, and many other fine industrial chemicals. There are several processes available for the synthesis of different chemicals obtained from castor oil. One such process is the transesterification of castor oil, which results in a mixture of fatty acid methyl esters. The main products of this reaction are methyl ricinoleate and methyl 10-undecenoate. To separate these compounds, supercritical carbon dioxide (SCCO₂) was used as a green solvent. SCCO₂ was chosen as a solvent due to its easy availability, non-toxicity, non-flammability, and low cost. In order to design any separation process, the preliminary requirement is solubility or phase equilibrium data. Therefore, the solubility of a mixture of methyl ricinoleate with methyl 10-undecenoate in SCCO₂ was determined in the present study. The temperature and pressure ranges selected for the investigation were T = 313 K to 333 K and P = 10 MPa to 18 MPa. It was observed that the solubility (mol·mol⁻¹) of methyl 10-undecenoate varied from 2.44 x 10⁻³ to 8.42 x 10⁻³, whereas it varied from 0.203 x 10⁻³ to 6.28 x 10⁻³ for methyl ricinoleate within the chosen operating conditions. These solubilities followed retrograde behavior (characterized by a decrease in solubility with increasing temperature) throughout the range of investigated operating conditions. An association theory model, coupled with regular solution theory for the activity coefficients, was developed in the present study. The deviation from the experimental data using this model can be quantified using the average absolute relative deviation (AARD). The AARD% values are 4.69 for methyl 10-undecenoate and 8.08 for methyl ricinoleate in the mixture of methyl ricinoleate and methyl 10-undecenoate. A maximum solubility enhancement of 32% was observed for methyl ricinoleate in this mixture. The highest selectivity of SCCO₂, about 12, was observed for methyl 10-undecenoate in this mixture.
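For reference, the AARD metric quoted above is the mean of the absolute relative deviations between experimental and model solubilities, expressed as a percentage. The sketch below uses hypothetical solubility values; only the formula reflects the abstract.

```python
import numpy as np

def aard_percent(y_exp, y_model):
    """Average absolute relative deviation (%): 100/N * sum(|y_exp - y_model| / y_exp)."""
    y_exp = np.asarray(y_exp, dtype=float)
    y_model = np.asarray(y_model, dtype=float)
    return 100.0 * np.mean(np.abs(y_exp - y_model) / y_exp)

# Hypothetical mole-fraction solubilities (mol/mol): experimental vs. association-model predictions.
y_exp = [2.44e-3, 4.10e-3, 8.42e-3]
y_model = [2.30e-3, 4.35e-3, 8.05e-3]
print(f"AARD = {aard_percent(y_exp, y_model):.2f}%")
```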

Keywords: association theory, liquid mixtures, solubilities, supercritical carbon dioxide

Procedia PDF Downloads 125
1316 The Selectivities of Pharmaceutical Spending Containment: Social Profit, Incentivization Games and State Power

Authors: Ben Main Piotr Ozieranski

Abstract:

State government spending on pharmaceuticals stands at 1 trillion USD globally, prompting criticism of the pharmaceutical industry's monetization of drug efficacy, product cost overvaluation, and health injustice. This paper elucidates the mechanisms behind a state-institutional response to this problem through the sociological lens of the strategic relational approach to state power. To do so, 30 expert interviews and legal and policy documents are drawn on to explain how state elites in New Zealand have successfully contested a 30-year "pharmaceutical spending containment policy". Proceeding from Jessop's notion of strategic "selectivity", encompassing analyses of the enabling features of state actors' ability to harness state structures, a theoretical explanation is advanced. First, a strategic context is described that consists of dynamics around pharmaceutical dealmaking between the state bureaucracy, pharmaceutical pricing strategies (and their effects), and the industry. Centrally, the pricing strategy of "bundling" (deals for packages of drugs that combine older and newer patented products) reflects how state managers have instigated an "incentivization game" that is played by state and industry actors, including HTA professionals, over pharmaceutical products (both current and in development). Second, a protective context is described that comprises successive legislative-judicial responses to the strategic context and is characterized by the regulation and the societalisation of commercial law. Third, within the policy, the achievement of increased pharmaceutical coverage (pharmaceutical "mix") alongside contained spending is conceptualized as a state defence of a "social profit". As such, in contrast to scholarly expectations that political and economic cultures of neo-liberalism drive pharmaceutical policy-making processes, New Zealand's state elites' approach is shown to be antipathetic to neo-liberals within an overall capitalist economy. The paper contributes an analysis of state pricing strategies and how they are embedded in state regulatory structures. Additionally, through an analysis of the interconnections of state power and pharmaceutical value, Abrahams's neo-liberal corporate bias model for pharmaceutical policy analysis is problematised.

Keywords: pharmaceutical governance, pharmaceutical bureaucracy, pricing strategies, state power, value theory

Procedia PDF Downloads 62
1315 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, such as location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with a huge amount of real-time spatial data generated, the system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model used for database partitioning, for keeping the data amount of each partition within a balanced limit, and for providing parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This leads to an improvement of QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
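To illustrate the flavour of attribute-sequence construction for vertical partitioning, the sketch below orders attributes greedily by Hamming distance between their query-usage columns. This is a simplified stand-in, not the actual Matching algorithm or the VPA-RTSBD cost model; the usage matrix is invented for the example.

```python
import numpy as np

# Rows = queries, columns = attributes: 1 means the query uses the attribute (assumed usage matrix).
usage = np.array([
    [1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 1, 0],
])

def hamming(a, b):
    """Hamming distance between two attribute usage columns."""
    return int(np.sum(a != b))

def greedy_attribute_sequence(usage):
    """Greedy stand-in for the Matching algorithm: start from attribute 0 and repeatedly
    append the unplaced attribute closest (in Hamming distance) to the last placed one."""
    n = usage.shape[1]
    order, remaining = [0], set(range(1, n))
    while remaining:
        last = usage[:, order[-1]]
        nxt = min(remaining, key=lambda j: hamming(last, usage[:, j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

print("attribute sequence:", greedy_attribute_sequence(usage))
```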

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 151