Search results for: transaction costs economics
865 Price Compensation Mechanism with Unmet Demand for Public-Private Partnership Projects
Abstract:
Public-private partnership (PPP), as an innovative way for the private sector to provide infrastructure, is being widely used throughout the world. Compared with the traditional procurement mode, PPP has emerged largely for its merits of relieving public budget constraints and improving infrastructure supply efficiency by involving private funds. However, PPP projects are characterized by large scale, high investment, long payback periods, and long concession periods. These characteristics make PPP projects full of risks. One of the most important risks faced by the private sector is demand risk, because many factors affect real demand. If real demand is far lower than forecast demand, the private sector will run into serious difficulty, because operating revenue is the main means for the private sector to recoup the investment and obtain profit. Therefore, it is important to study how the government should compensate the private sector when demand risk materializes in order to achieve Pareto improvement. This research focuses on the price compensation mechanism, an ex-post compensation mechanism, and analyzes, by mathematical modeling, the impact of the price compensation mechanism on the payoff of the private sector and on consumer surplus for PPP toll road projects. The research first investigates whether or not price compensation mechanisms can achieve Pareto improvement and, if so, then explores the boundary conditions for this mechanism. The results show that the price compensation mechanism can realize Pareto improvement under certain conditions. In particular, for the price compensation mechanism to achieve Pareto improvement, the renegotiation costs of the government and the private sector should be lower than a certain threshold, which is determined by the marginal operating cost and the distortionary cost of the tax. In addition, the compensation percentage should match the price cut of the private investor when demand drops. This research aims to provide theoretical support for the government when determining the compensation scope under the price compensation mechanism. Moreover, some policy implications can also be drawn from the analysis for better risk-sharing and sustainability of PPP projects.
Keywords: infrastructure, price compensation mechanism, public-private partnership, renegotiation
Procedia PDF Downloads 179
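To make the kind of boundary condition described above concrete, here is a stylized welfare-accounting sketch of a Pareto-improving ex-post compensation. The symbols and the simple decomposition are illustrative assumptions, not the paper's actual model or notation.

```latex
% Stylized Pareto-improvement condition for an ex-post price compensation (illustrative only).
% \Delta\Pi: change in the private investor's payoff after compensation,
% \Delta CS: change in consumer surplus from the lower toll,
% G: compensation transfer funded by taxation, \lambda: distortionary cost of the tax,
% R: renegotiation costs borne by the two parties.
\[
\Delta\Pi \ge 0
\qquad\text{and}\qquad
\Delta CS - (1+\lambda)\,G - R \ge 0
\;\;\Longrightarrow\;\;
R \;<\; \Delta CS - (1+\lambda)\,G ,
\]
% i.e. renegotiation is worthwhile only while its cost stays below a threshold that
% shrinks as the distortionary cost of taxation (and the required transfer) grows.
```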
864 Autophagy Acceleration and Self-Healing by the Revolution against Frequent Eating, High Glycemic and Unabsorbable Substances as One Meal a Day Plan
Authors: Reihane Mehrparvar
Abstract:
Human lifespan could be extended further by altering gene expression through food intake; however, as a consequence of the eating patterns of the last century, human lifespan is getting shorter owing to emerging dysregulation of the autophagy mechanism, insulin, leptin, and the gut microbiota, which are important etiological factors in type-2 diabetes, obesity, infertility, cancer, and metabolic and autoimmune diseases. Restricted calorie intake and vigorous exercise might be beneficial for losing weight and for metabolic regulation over a short period, but they are not implementable in the long term as a way of life. Therefore, a dietary program that is compatible with the genes of the body is essential. Sweet and high-glycemic-index (HGI) foods have been associated with type-2 diabetes and cancer morbidity. The neuropsychological perspective characterizes the inclination toward sweet and HGI-food consumption as addictive behavior; this process engages gut microbiota preferences, neural nodes, and dopaminergic functions. Moreover, meal composition is not the only factor that affects body homeostasis. This narrative review attempts to investigate how the body responds to different food intakes and to present an accurate model based on current evidence. Eating frequently and ingesting unassimilable protein and carbohydrates may not be compatible with human genes and could cause impairments in the self-renewal mechanism. This trajectory indicates that our body is better adapted to starvation and to eating animal meat and marrow. A model is recommended here that takes into account three important factors: eating frequency, meal composition, and circadian rhythm. It may offer a promising intervention for obesity, inflammation, cardiovascular and autoimmune disorders, type-2 diabetes, insulin resistance, infertility, and cancer by intensifying the autophagy mechanism, while also reducing medical costs.
Keywords: metabolic disease, anti-aging, type-2 diabetes, autophagy
Procedia PDF Downloads 81
863 Preliminary Study of the Cost-Effectiveness of Green Walls: Analyzing Cases from the Perspective of Life Cycle
Authors: Jyun-Huei Huang, Ting-I Lee
Abstract:
The urban heat island effect derives from the reduction of vegetative cover by urban development. Because plants can improve air quality and the microclimate, green walls have been applied as a sustainable design approach to cool building temperatures. By using plants to green vertical surfaces, green walls decrease room temperature and, as a result, the energy used for air conditioning. Based on their structures, green walls can be divided into two categories: green façades and living walls. A green façade uses the climbing ability of the plant itself, while a living wall assembles planter modules. The latter is widely adopted in public spaces, as it is time-effective and less constrained. Although a living wall saves energy spent on cooling, it is not necessarily cost-effective from the perspective of a lifecycle analysis. An Italian study shows that the overall benefit of a living wall only exceeds its costs 47 years after its establishment. In Taiwan, urban greening policies encourage the establishment of green walls by citing their energy-saving benefits while neglecting their poor cost-effectiveness. Thus, this research aims at understanding the perceptions of suppliers and consumers regarding the cost-effectiveness of their living wall products from the lifecycle viewpoint. It adopts semi-structured interviews and field observations of the maintenance of the products. By comparing the two sets of results, it generates insights for sustainable urban greening policies. The preliminary finding shows that stakeholders do not have a holistic sense of lifecycle or cost-effectiveness. Most importantly, a well-maintained living wall often involves high inputs, made possible by the availability of a maintenance budget, and is thus less sustainable. In conclusion, without a comprehensive sense of cost-effectiveness throughout a product's lifecycle, it is very difficult for suppliers and consumers to maintain a living wall system while achieving sustainability.
Keywords: case study, maintenance, post-occupancy evaluation, vertical greening
Procedia PDF Downloads 265
862 Factors Influencing the Use of Mobile Phone by Smallholder Farmers in Vegetable Marketing in Fogera District
Authors: Molla Tadesse Lakew
Abstract:
This study was intended to identify the factors influencing the use of mobile phones in vegetable marketing in Fogera district. The use of mobile phones in vegetable marketing and the factors influencing mobile phone use were the specific objectives of the study. Three kebeles in the Fogera district were selected purposively based on their vegetable production potential. A simple random sampling technique (lottery method) was used to select 153 vegetable producer farmers. An interview schedule and key informant interviews were used to collect primary data. For analyzing the data, descriptive statistics such as frequencies and percentages, independent-samples t-tests, and chi-square tests were used. Furthermore, econometric analysis (a binary logistic model) was used to assess the factors influencing mobile phone use for vegetable market information. The contingency coefficient and the variance inflation factor were used to check for multicollinearity problems among the independent variables. Of the 153 respondents, 82 (61.72%) were mobile phone users, while 71 (38.28%) were mobile phone nonusers. Moreover, the main uses of mobile phones in vegetable marketing include communicating at a distance to save time and minimize transport costs, obtaining vegetable price information, identifying markets and buyers for the vegetables, deciding when to sell, negotiating with buyers for better prices, and quickly finding a market to avoid losing produce to perishing. The model results indicated that level of education, size of land, income, access to credit, and age were significant variables affecting the use of mobile phones in vegetable marketing. It is recommended to encourage adult education or to provide training for farmers on how to operate mobile phones, and to create awareness among elderly rural farmers so that they are able to use mobile phones for their vegetable marketing. Moreover, farmers should be aware that mobile phones are very important for those who own very small plots of land to obtain maximum returns from their production. Lastly, providing access to credit and improving and diversifying farmers' income sources so that they can afford mobile phones are recommended to improve farmers' livelihoods.
Keywords: mobile phone, farmers, vegetable marketing, Fogera District
Procedia PDF Downloads 73
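As an illustration of the econometric step described above, the following sketch fits a binary logistic model and screens for multicollinearity with variance inflation factors. The variable names and the synthetic data are assumptions, not the study's dataset.

```python
# Illustrative sketch: binary logit of mobile phone use with a VIF screen.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 153
df = pd.DataFrame({
    "education_years": rng.integers(0, 12, n),
    "land_size_ha": rng.uniform(0.1, 3.0, n),
    "income_k": rng.uniform(5, 60, n),
    "credit_access": rng.integers(0, 2, n),
    "age": rng.integers(19, 70, n),
})
# Synthetic outcome: 1 = uses a mobile phone for vegetable marketing
true_logit = -2 + 0.3 * df["education_years"] + 0.8 * df["credit_access"] - 0.03 * df["age"]
df["mobile_use"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = sm.add_constant(df.drop(columns="mobile_use"))
# Variance inflation factors for the non-constant predictors
vif = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns) if col != "const"}
print("VIF:", {k: round(v, 2) for k, v in vif.items()})

model = sm.Logit(df["mobile_use"], X).fit(disp=False)
print(model.summary())
```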
861 Effect of Roasting Treatment on Milling Quality, Physicochemical, and Bioactive Compounds of Dough Stage Rice Grains
Authors: Chularat Leewuttanakul, Khanitta Ruttarattanamongkol, Sasivimon Chittrakorn
Abstract:
Rice at the grain development stage is a rich source of many bioactive compounds. Dough stage rice contains high amounts of phytochemicals and can be used by rice milling industries. However, rice grain at the dough stage has low milling quality due to its high moisture content. Thermal processing can be applied to rice grain to improve milled rice yield. This experiment was conducted to study the chemical and physical properties of dough stage rice grain after a roasting treatment. Rice was roasted by two different methods: traditional pan roasting at 140 °C for 60 minutes, and an electrical roasting machine at 140 °C for 30, 40, and 50 minutes. The chemical and physical properties and the bioactive compounds of brown rice and milled rice were evaluated. The results showed that the moisture content of brown and milled rice was less than 10% and the amylose contents were in the range of 26-28%. Rice grains roasted for 30 minutes in the electrical roasting machine had a high head rice yield, and the length and breadth of the grains after milling were close to those obtained with traditional pan roasting (p > 0.05). The lightness (L*) of the rice was not affected by the roasting treatment (p > 0.05), and the a* value indicated that the yellowness of milled rice was lower than that of brown rice. The bioactive compounds of brown and milled rice decreased significantly with increasing drying time. Brown rice roasted for 30 minutes had the highest total phenolic content, antioxidant activity, α-tocopherol, and γ-oryzanol content. Volume expansion and elongation of cooked rice decreased as roasting time increased, and the quality of cooked rice roasted for 30 minutes was comparable to that of traditional pan roasting. The hardness of cooked rice, as measured by a texture analyzer, increased with increasing roasting time. The results indicate that rice grains at the dough stage, containing a high amount of bioactive compounds, have great potential for rice milling industries, and the electrical roasting machine can be used as an alternative to pan roasting, decreasing processing time and labor costs.
Keywords: bioactive compounds, cooked rice, dough stage rice grain, grain development, roasting
Procedia PDF Downloads 163
860 Producing Carbon Nanoparticles from Agricultural and Municipal Wastes
Authors: Kanik Sharma
Abstract:
In 2011, the global production of carbon nano-materials (CNMs) was around 3,500 tons, and it is projected to expand at a compound annual growth rate of 30.6%. Expanding markets for applications of CNMs, such as carbon nano-tubes (CNTs) and carbon nano-fibers (CNFs), place ever-increasing demands on lowering their production costs. Current technologies for CNM generation require intensive consumption of premium feedstocks and employ costly catalysts; they also require input of external energy. Industrial-scale CNM production is conventionally achieved through chemical vapor deposition (CVD) methods, which consume a variety of expensive premium chemical feedstocks such as ethylene, carbon monoxide (CO), and hydrogen (H2), or by flame synthesis techniques, which also consume premium feedstock fuels. Additionally, CVD methods are energy-intensive. Renewable and replenishable feedstocks, such as those found in municipal, industrial, and agricultural recycling streams, are a more judicious choice in light of current emerging needs for sustainability. Agricultural sugarcane bagasse and corn residues, scrap tire chips, and post-consumer polyethylene (PE) and polyethylene terephthalate (PET) bottle shreddings, when thermally treated either by pyrolysis alone or by sequential pyrolysis and partial oxidation, form gaseous carbon-bearing effluents which, when channeled into a heated reactor, produce CNMs, including carbon nano-tubes, catalytically synthesized therein on stainless steel meshes. The structure of the nano-material synthesized depends on the type of feedstock available for pyrolysis and can be determined by analysing the feedstock. These feedstocks could supersede the use of costly and often toxic or highly flammable chemicals such as hydrocarbon gases, carbon monoxide, and hydrogen, which are commonly used as feedstocks in current nano-manufacturing processes for CNMs.
Keywords: nanomaterials, waste plastics, sugarcane bagasse, pyrolysis
Procedia PDF Downloads 228
859 Barriers to Access among Indigenous Women Seeking Prenatal Care: A Literature Review
Authors: Zarish Jawad, Nikita Chugh, Karina Dadar
Abstract:
Introduction: This paper aims to identify barriers Indigenous women face in accessing prenatal care in Canada. It explores the differences in prenatal care received between Indigenous and non-Indigenous women. The objective is to look at changes or programs in Canada's healthcare system that could reduce barriers to accessing safe prenatal care for Indigenous women. Methods: A literature search of 12 papers was conducted using the following databases: PubMed, Medline, OVID, Google Scholar, and ScienceDirect. The included studies were written in English only and involved Indigenous females between the ages of 19 and 35; review articles were excluded. Participants in the examined studies did not have any severe underlying medical conditions for the duration of the study, and the study designs included in the review are prospective cohort, cross-sectional, case report, and case-control studies. Results: Among all the barriers Indigenous women face in accessing prenatal care, the most significant include a lack of culturally safe prenatal care, a lack of services in Indigenous communities, the distance of prenatal facilities from Indigenous communities, and the costs of transportation. Discussion: The study found three significant barriers Indigenous women face in accessing prenatal care in Canada: the geographical distribution of healthcare facilities, distrust between patients and healthcare professionals, and cultural sensitivity. Some of the suggested solutions include building more birthing and prenatal care facilities in rural areas for Indigenous women, educating healthcare professionals on culturally sensitive healthcare, and involving Indigenous people in the decision-making process to reduce distrust and power imbalances. Conclusion: The involvement of Indigenous women and community leaders is important in making decisions regarding the implementation of effective healthcare and prenatal programs for Indigenous women. However, further research is required to understand the effectiveness of the solutions and the barriers that make prenatal care less accessible for Indigenous women in Canada.
Keywords: indigenous, maternal health, prenatal care, barriers
Procedia PDF Downloads 152
858 Earthquake Retrofitting of Concrete Structures Using Steel Bracing with the Results of Linear and Nonlinear Static Analysis
Authors: Ehsan Sadie
Abstract:
The use of steel braces in concrete structures has been considered by researchers in recent decades owing to its easy implementation, its economy, and the ability to create openings for light in braced bays compared with shear wall openings, as well as its usefulness in strengthening concrete structures that are weak against earthquakes. The purpose of this article is to improve and strengthen concrete structures with steel bracing. In addition, cases such as different numbers of steel braces in different openings of concrete structures and the interaction between concrete frames and steel braces have been studied. In this paper, by performing nonlinear static analysis and examining ductility, the relative displacement of floors, the performance of the samples, and the behavior coefficient of composite frames (concrete frames with steel bracing), the behavior of braced reinforced concrete frames is compared with that of frames without bracing. The results of the analyses and studies show that adding steel bracing increases the strength and stiffness of the frame and reduces the ductility and lateral displacement of the structure. In general, the behavior of the structure under earthquakes is improved.
Keywords: behavior coefficient, bracing, concrete structure, convergent bracing, earthquake, linear static analysis, nonlinear analysis, pushover curve
Procedia PDF Downloads 177
857 Evaluation of Stress Relief using Ultrasonic Peening in GTAW Welding and Stress Corrosion Cracking (SCC) in Stainless Steel, and Comparison with the Thermal Method
Authors: Hamidreza Mansouri
Abstract:
In the construction industry, the lifespan of a metal structure is directly related to the quality of its welding. In most metal structures, the welded area is considered critical and is one of the most important factors in design. To date, many fracture incidents caused by cracks in the weld zone have occurred. Various methods exist to increase the lifespan of welds and prevent failure in the welded area. Among these methods, the application of ultrasonic peening, in addition to providing stress relief, can manually and more precisely adjust the geometry of the weld toe and prevent stress concentration in this region. This research examined Gas Tungsten Arc Welding (GTAW) on common structural steels and 316 stainless steel, which require precise welding, to predict the optimal condition. The GTAW process was used to create residual stress; two samples underwent ultrasonic stress relief, two samples underwent thermal stress relief for comparison, and two samples received no treatment. The residual stress of all six pieces was measured by the X-ray diffraction (XRD) method. Then, the two ultrasonically stress-relieved samples and the two untreated samples were exposed to a corrosive environment to initiate cracking and determine the effectiveness of the ultrasonic stress relief method. The residual stress caused by GTAW in the samples decreased by 3.42% with thermal treatment and by 7.69% with ultrasonic peening. Furthermore, the results show that the untreated sample developed cracks after 740 hours, while the ultrasonically stress-relieved piece showed no cracks. Given the high costs of welding and post-weld modification processes, finding an economical, effective, and comprehensive method with the fewest limitations and a broad spectrum of use is of great importance. Therefore, studying the impact of the various ultrasonic peening stress relief parameters and selecting the best parameters to achieve the longest lifespan for the weld area is highly significant.
Keywords: GTAW welding, stress corrosion cracking (SCC), thermal method, ultrasonic peening
Procedia PDF Downloads 50
856 Discipline-Specific Culture: A Purpose-Based Investigation
Authors: Sihem Benaouda
Abstract:
English is gaining an international identity as it affects every academic and professional field in the world. Without increasing their cultural understanding, it would be difficult to fully prepare learners for communication in a globalised environment. The concept of culture is intricate and needs to be elucidated, especially in an English language teaching (ELT) context. The study focuses on the investigation of the cultural studies integrated into different types of English for specific purposes (ESP) materials, as opposed to English for general purposes (EGP) textbooks. A qualitative methodology based on a triangulation of techniques was adopted, involving a materials analysis of five textbooks in advanced EGP and three types of ESP, in addition to a semi-structured interview conducted with Algerian ESP practitioners. The data analysis revealed that culture in ESP textbooks is not overtly isolated into chapters and that cultural studies are predominantly present in business and economics materials, namely English for hotel and catering staff, tourism, and flight attendants. However, implicit cultural instruction is signalled in the social sciences and is negligible in science and technology sources. In terms of content, cultural studies in EGP are more related to generic topics, whereas, in some ESP materials, the topics are oriented towards the specific field they belong to. Furthermore, the respondents' answers showed a lack of awareness of the importance of culture in ESP teaching, as well as some disregard for culture teaching per se in ESP contexts.
Keywords: ESP, EGP, cultural studies, textbooks, teaching, materials
Procedia PDF Downloads 108
855 Experimental Verification of Similarity Criteria for Sound Absorption of Perforated Panels
Authors: Aleksandra Majchrzak, Katarzyna Baruch, Monika Sobolewska, Bartlomiej Chojnacki, Adam Pilch
Abstract:
Scaled modeling is very common in areas of science such as aerodynamics or fluid mechanics, since defining characteristic numbers makes it possible to determine relations between objects under test and their models. In acoustics, scaled modeling is aimed mainly at the investigation of room acoustics, sound insulation, and sound absorption phenomena. Despite this range of applications, no method has been developed that enables acoustical perforated panels to be scaled freely while maintaining their sound absorption coefficient in a desired frequency range. However, theoretical and numerical analyses have shown that it is not physically possible to obtain a given sound absorption coefficient in a desired frequency range by directly scaling all the physical dimensions of a perforated panel according to a defined characteristic number. This paper is a continuation of the research mentioned above and presents a practical evaluation of the theoretical and numerical analyses. Measurements of the sound absorption coefficient of perforated panels were performed in order to verify the previous analyses and, as a result, to find the relations between full-scale perforated panels and their models that will enable them to be scaled properly. The measurements were conducted in a one-to-eight scale model of the reverberation chamber of the Technical Acoustics Laboratory, AGH. The obtained results verify the theses proposed after the theoretical and numerical analyses. Finding the relations between full-scale and modeled perforated panels will make it possible to produce measurement samples equivalent to the original ones. As a consequence, it will make the process of designing acoustical perforated panels easier and will also lower the costs of prototype production. With this knowledge, it will be possible to emulate panels used, or to be used, in a full-scale room more precisely in a constructed model and, as a result, to imitate or predict the acoustics of a modeled space more accurately.
Keywords: characteristic numbers, dimensional analysis, model study, scaled modeling, sound absorption coefficient
Procedia PDF Downloads 196
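For context on how such reverberation-chamber measurements are typically reduced to an absorption coefficient, here is a minimal sketch of the ISO 354-style calculation from reverberation times measured with and without the sample. The chamber volume, sample area, and reverberation times are invented illustrative values, not the laboratory's data.

```python
# Minimal sketch: random-incidence absorption coefficient from reverberation times.
import numpy as np

def absorption_coefficient(T_empty, T_sample, volume, sample_area, c=343.0):
    """Sabine absorption area added by the sample, normalised by the sample area."""
    A_added = 55.3 * volume / c * (1.0 / np.asarray(T_sample) - 1.0 / np.asarray(T_empty))
    return A_added / sample_area

# A 1:8 scale chamber: model-scale frequencies are eight times the full-scale bands
freqs_model = np.array([1000, 2000, 4000, 8000])   # Hz, model scale
T_empty  = np.array([1.10, 0.95, 0.80, 0.60])      # s, empty chamber
T_sample = np.array([0.80, 0.62, 0.48, 0.36])      # s, with the perforated panel
alpha = absorption_coefficient(T_empty, T_sample, volume=0.39, sample_area=0.17)

for f, a in zip(freqs_model / 8, alpha):           # map back to full-scale bands
    print(f"{f:6.0f} Hz (full scale): alpha_s = {a:.2f}")
```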
854 Nonstationary Increments and Causality in the Aluminum Market
Authors: Andrew Clark
Abstract:
McCauley, Bassler, and Gunaratne show that integrated I(d) processes as used in economics and finance do not necessarily produce stationary increments, which are required to determine causality in both the short term and the long term. This paper follows their lead and shows that I(d) aluminum cash and futures log prices at daily and weekly intervals do not have stationary increments, which means prior causality studies using I(d) processes need to be re-examined. Wavelets based on undifferenced cash and futures log prices do have stationary increments and are used along with transfer entropy (rather than cointegration) to measure causality. The wavelets exhibit causality at most daily time scales out to 1 year, and at weekly time scales out to 1 year and beyond. To determine stationarity, localized stationary wavelets (LSWs) are used. LSWs have the benefit, compared with other means of testing for stationarity, of using multiple hypothesis tests to determine stationarity. As informational flows exist between cash and futures at daily and weekly intervals, the aluminum market is efficient. Therefore, producers and consumers of aluminum who hedge need not be greatly concerned about underestimation of hedge ratios. Questions about arbitrage given this efficiency are addressed in the paper.
Keywords: transfer entropy, nonstationary increments, wavelets, localized stationary wavelets
Procedia PDF Downloads 202
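To illustrate the causality measure named above, the following sketch estimates a simple binned transfer entropy in both directions between two synthetic return series. The binning, lag, and data are assumptions for illustration, not the paper's wavelet-based procedure.

```python
# Rough sketch of a histogram-based transfer entropy estimate TE(x -> y).
import numpy as np

def transfer_entropy(x, y, bins=6, lag=1):
    """Information x_t adds about y_{t+lag} beyond what y_t already provides (bits)."""
    x, y = np.asarray(x), np.asarray(y)
    y_future, y_past, x_past = y[lag:], y[:-lag], x[:-lag]
    def disc(v):  # discretise into quantile bins 0..bins-1
        return np.digitize(v, np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1]))
    yf, yp, xp = disc(y_future), disc(y_past), disc(x_past)
    joint, _ = np.histogramdd(np.stack([yf, yp, xp], axis=1), bins=bins)
    p_xyz = joint / joint.sum()
    p_yz  = p_xyz.sum(axis=2, keepdims=True)        # p(yf, yp)
    p_zx  = p_xyz.sum(axis=0, keepdims=True)        # p(yp, xp)
    p_z   = p_xyz.sum(axis=(0, 2), keepdims=True)   # p(yp)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = (p_xyz * p_z) / (p_yz * p_zx)
        return float(np.nansum(p_xyz * np.where(p_xyz > 0, np.log2(ratio), 0.0)))

rng = np.random.default_rng(1)
cash = rng.normal(size=2000)
futures = 0.6 * np.roll(cash, 1) + 0.8 * rng.normal(size=2000)  # futures lag cash
print("TE cash -> futures:", round(transfer_entropy(cash, futures), 4))
print("TE futures -> cash:", round(transfer_entropy(futures, cash), 4))
```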
853 Examining the Discursive Hegemony of British Energy Transition Narratives
Authors: Antonia Syn
Abstract:
Politicians' outlooks on the nature of energy futures and an 'Energy Transition' have evolved considerably alongside a steady movement towards renewable energies, buttressed by lower technology costs, rising environmental concerns, and favourable national policy decisions. This paper seeks to examine the degree to which an energy transition has become an incontrovertible 'status quo' in parliament, and whether politicians share similar understandings of energy futures or narrate different stories under the same label. Parliamentarians construct different understandings of the same reality, in the form of co-existing and competing discourses, shaping and restricting how policy problems and solutions are understood and tackled. Approaching energy policymaking from a parliamentary discourse perspective draws directly on actors' concrete statements, offering an alternative to policy literature debates revolving around inductive policy theories. This paper uses computer-assisted discourse analysis to describe fundamental discursive changes in British parliamentary debates around energy futures. By applying correspondence cluster analyses to Hansard transcripts from 1986 to 2010, we empirically measure the policy positions of Labour and Conservative politicians' parliamentary speeches during legislatively salient moments preceding significant energy transition-related policy decisions. Results show that the concept of a technology-based, market-driven transition towards fossil-free and nuclear-free renewables integration converged across Labour and the Conservatives within three decades. Specific storylines underwent significant change, particularly in relation to international outlooks, environmental framings, treatments of risk, and increases in rhetoric. This study contributes to a better understanding of the role politics plays in the energy transition, highlighting how politicians' values and beliefs inevitably determine and delimit creative policymaking.
Keywords: quantitative discourse analysis, energy transition, renewable energy, British parliament, public policy
Procedia PDF Downloads 153
852 Study of the Energy Efficiency of Buildings under Tropical Climate with a View to Sustainable Development: Choice of Material Adapted to the Protection of the Environment
Authors: Guarry Montrose, Ted Soubdhan
Abstract:
In the context of sustainable development and climate change, the adaptation of buildings to the climatic context in hot climates is a necessity if we want to improve living conditions in housing and reduce the risks to the health and productivity of occupants due to thermal discomfort in buildings. A wide variety of efficient solutions can be found, but at high cost. In developing countries, especially tropical countries, what is needed is a technology with a very limited cost that is affordable for everyone, energy efficient, and protective of the environment. Biosourced insulation is a product based on plant fibers, animal products, or products from recyclable paper or clothing. Its development meets the objectives of maintaining biodiversity, reducing waste, and protecting the environment. In tropical or hot countries, the aim is to protect the building from solar thermal radiation, a source of discomfort. This work is in line with the logic of energy control and environmental protection; the approach is to make the occupants of buildings comfortable, reduce carbon dioxide (CO2) emissions, and decrease energy consumption (energy efficiency). We chose to study the thermo-physical properties of banana leaves and sawdust, especially their thermal conductivities; direct measurements were made using the flash method and the hot plate method. We also measured the heat flow on both sides of each sample by the hot box method. The results from these different experiments show that these materials are very efficient when used as insulation. We also conducted a building thermal simulation using DesignBuilder software, with banana leaves as one of the materials. Air-conditioning load and CO2 release were used as performance indicators. When the air-conditioned building cell is protected on the roof by banana leaves, with the material integrated into the walls and solar protection of the glazing, it saves up to 64.3% of energy and avoids 57% of CO2 emissions.
Keywords: plant fibers, tropical climates, sustainable development, waste reduction
Procedia PDF Downloads 182
851 Jointly Optimal Statistical Process Control and Maintenance Policy for Deteriorating Processes
Authors: Lucas Paganin, Viliam Makis
Abstract:
With the advent of globalization, market competition has become a major issue for most companies. One of the main strategies to overcome this situation is to improve product quality at a lower cost in order to meet customers' expectations. In order to achieve the desired quality of products, it is important to control the process to meet the specifications and to implement the optimal maintenance policy for the machines and the production lines. Thus, the overall objective is to reduce process variation and the production and maintenance costs. In this paper, an integrated model involving Statistical Process Control (SPC) and maintenance is developed to achieve this goal. The main focus of this paper is therefore to develop the jointly optimal maintenance and statistical process control policy minimizing the total long-run expected average cost per unit time. In our model, the production process can go out of control due to either the deterioration of equipment or other assignable causes. The equipment is also subject to failures in any of the operating states due to deterioration and aging. Hence, the process mean is controlled by an Xbar control chart using equidistant sampling epochs. We assume that the machine inspection epochs are the times when the control chart signals an out-of-control condition, considering both true and false alarms. At these times, the production process is stopped, and an investigation is conducted not only to determine whether it is a true or false alarm, but also to identify the causes of a true alarm: whether it was caused by a change in the machine setting, by other assignable causes, or by both. If the system is out of control, the proper actions are taken to bring it back to the in-control state. At these epochs, a maintenance action can be taken, which can be no action or preventive replacement of the unit. When the equipment is in the failure state, a corrective maintenance action is performed, which can be minimal repair or replacement of the machine, and the process is brought back to the in-control state. A semi-Markov decision process (SMDP) framework is used to formulate and solve the joint control problem. A numerical example is developed to demonstrate the effectiveness of the control policy.
Keywords: maintenance, semi-Markov decision process, statistical process control, Xbar control chart
Procedia PDF Downloads 91
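As a small illustration of the monitoring layer described above, the sketch below simulates an Xbar chart with k-sigma limits and flags the sampling epochs at which an inspection (and possibly a maintenance action) would be triggered. All parameter values are assumed for the example, not taken from the paper.

```python
# Xbar chart sketch: samples of size n at equidistant epochs; a signal triggers inspection.
import numpy as np

mu0, sigma, n, k = 10.0, 0.5, 5, 3.0      # in-control mean, std dev, sample size, k-sigma limits
ucl = mu0 + k * sigma / np.sqrt(n)        # upper control limit for the sample mean
lcl = mu0 - k * sigma / np.sqrt(n)        # lower control limit

rng = np.random.default_rng(42)
true_mean = mu0
for epoch in range(1, 41):
    if epoch == 25:                       # assignable cause: the process mean shifts
        true_mean = mu0 + 1.5 * sigma
    xbar = rng.normal(true_mean, sigma, size=n).mean()
    if xbar > ucl or xbar < lcl:
        # In the joint policy this is an inspection epoch: classify true/false alarm,
        # then choose no action, preventive replacement, or corrective maintenance.
        print(f"epoch {epoch}: xbar = {xbar:.3f} outside [{lcl:.3f}, {ucl:.3f}] -> inspect")
```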
850 The Value of Dynamic Priorities in Motor Learning between Some Basic Skills in Beginner's Basketball, U14 Years
Authors: Guebli Abdelkader, Regiueg Madani, Sbaa Bouabdellah
Abstract:
The goal of this study is to determine the value of dynamic priorities in motor learning between some basic skills in beginners' basketball (U14), based on the skills of shooting and defending against the shooter. We present the statistical results of comparisons and correlations between the study samples in skill tests for shooting and defending against the shooter. In order to achieve this objective, we selected 40 middle-school boys divided into four groups: two control groups (CS1, CS2) and two experimental groups (ES1: training on the shooting skill first, then the skill of defending against the shooter; ES2: training on the skill of defending against the shooter first, then the shooting skill). For the statistical analysis, we used F and t tests for statistical differences and the correlation coefficient (r) for the correlation analysis. Based on the statistical analyses, we confirm the importance of classifying the priorities of basic basketball skills during the motor learning process, and we conclude that the benefit of the experimental groups' training is an economy in the time needed to acquire new motor skills in basketball, with the priority order of ES2 proving a successful dynamic motor learning method for enhancing basic skills among beginner basketball players.
Keywords: basic skills, basketball, motor learning, children
Procedia PDF Downloads 170
849 A Translation Criticism of the Persian Translation of “A**Hole No More” Written by Xavier Crement
Authors: Mehrnoosh Pirhayati
Abstract:
Translation can be affected by different meta-textual factors of the target context, such as ideology, politics, and culture, so the rule of fidelity, or being faithful to the source text, can be ignored by the translator. On the other hand, critical discourse analysis, derived from applied linguistics, has entered the field of translation studies and is used by scholars to reveal hidden deviations and the possible roots of manipulation. This study focused on the famous Persian translation of the bestseller book “A**hole No More,” written by Xavier Crement (1990) and translated by Mahmud Farjami, comparatively and critically analyzing it against its corresponding English original. The researcher applied Pirhayati's model and framework of translation criticism at the textual and semiotic levels for this qualitative study. It should be noted that Kress and Van Leeuwen's semiotic model, along with Machin's model of typographical analysis, was also used at the semiotic level. The results of the comparisons and analyses indicate that this Persian translation of the book is affected by the factors of ideology and economics and reveal that the Islamic attitude causes the translator to employ strategies such as substitution and deletion. Those who may benefit from this research are translation trainers, students of translation studies, critics, and scholars.
Keywords: Farjami (2013), ideology, manipulation, Pirhayati's (2013) model of translation criticism, Xavier Crement (1990)
Procedia PDF Downloads 213
848 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach
Authors: Nwachukwu Ifeanyi
Abstract:
Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavours to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.
Keywords: computation, robotics, mathematics, simulation
Procedia PDF Downloads 58
847 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary-wing aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
Procedia PDF Downloads 99
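To make the curve-fitting step concrete, here is a minimal sketch of a least-squares catenary fit to one clustered conductor, after its points have been projected onto the vertical plane through the pole pair. The span, noise level, and starting values are illustrative assumptions, not the paper's data or algorithm.

```python
# Least-squares catenary fit to synthetic "LiDAR" returns from a single conductor.
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, z0):
    """Height of a hanging conductor: z = z0 + a * (cosh((x - x0) / a) - 1)."""
    return z0 + a * (np.cosh((x - x0) / a) - 1.0)

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 120, 200))                         # 120 m span
z = catenary(x, a=400.0, x0=60.0, z0=14.0) + rng.normal(0, 0.05, x.size)

# Initial guesses: large catenary parameter, lowest point near mid-span
p0 = [300.0, x.mean(), z.min()]
(a_fit, x0_fit, z0_fit), _ = curve_fit(catenary, x, z, p0=p0)
print(f"a = {a_fit:.1f} m, lowest point at x = {x0_fit:.1f} m, z = {z0_fit:.2f} m")

# Residuals can help reject points belonging to neighbouring conductors or noise
residuals = z - catenary(x, a_fit, x0_fit, z0_fit)
print("RMS residual:", round(float(np.sqrt(np.mean(residuals**2))), 3), "m")
```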
846 Measuring Stakeholder Engagement and Drivers of Success in Ethiopian Tourism Sector
Authors: Gezahegn Gizaw
Abstract:
The FDRE Tourism Training Institute organizes forums for debate, best-practice exchange, and focus group discussions to forge a sustainable and growing tourism sector while minimizing negative impacts on the environment, communities, and cultures. This study aimed to apply empirical research methods to identify and quantify the relative importance of the success factors and individual engagement indicators that were identified in these forums. Responses to the 12-question survey were collected from a total of 437 respondents in academic training institutes (212), the business executive and employee category (204), and non-academic government offices (21). Overall, capacity building was perceived as the most important driver of success for stakeholder engagement. The business executive and employee category rated capacity building as the most important driver of success (53%), followed by the decision-making process (27%) and community participation (20%). Among educators and students, both capacity building and the decision-making process were perceived as the most important factors (40% of respondents), whereas community participation was perceived as the most important success factor by only 20% of respondents. Individual engagement scores in capacity building, the decision-making process, and community participation showed the highest variability by educational level of participants (variance of 3.4%-5.2%, p < 0.001). The individual engagement score in capacity building was highly correlated with the perceived benefits of training in terms of improved efficiency, job security, higher customer satisfaction, and self-esteem. On the other hand, the individual engagement score in the decision-making process was highly correlated with its perceived benefits in terms of lower business costs, improved ability to meet the needs of a target market, job security, self-esteem, and more teamwork. The study provides a set of recommendations that help educators, business executives, and policy makers to maximize the individual and synergistic effects of training and the decision-making process on the sustainability and growth of the tourism sector in Ethiopia.
Keywords: engagement score, driver of success, capacity building, tourism
Procedia PDF Downloads 77
845 Development of Transmission and Packaging for Parallel Hybrid Light Commercial Vehicle
Authors: Vivek Thorat, Suhasini Desai
Abstract:
The hybrid electric vehicle is widely accepted as a promising short- to mid-term technical solution due to noticeably improved efficiency and low emissions at competitive costs. Retrofitting hybrid components into a conventional vehicle to achieve better performance is the best solution so far, but retrofitting involves major modifications to the vehicle at a high cost. This paper focuses on the development of a P3x parallel hybrid electric prototype of a rear-wheel-drive light commercial vehicle (LCV) with minimal and low-cost modifications. This diesel hybrid LCV differs from other hybrids with regard to the powertrain. The additional powertrain consists of a continuous-contact helical gear pair followed by a chain and sprocket acting as a coupler for the traction motor. The vehicle powertrain is designed for the intended high-speed application. This work focuses on the design, development, and packaging of this unique parallel diesel-electric vehicle, which builds on the advantages of multimode hybrids. To demonstrate the practical applicability of this transmission with the P3x hybrid configuration, one concept prototype vehicle has been built integrating the transmission. The hybrid system makes it easy to retrofit existing vehicles because the changes required to the vehicle chassis are minimal. The additional system is designed mainly for five modes of operation: engine-only mode, electric-only mode, hybrid power mode, engine-charging-battery mode, and regenerative braking mode. Its driving performance, fuel economy, and emissions are measured, and the results are analyzed over a given drive cycle. Finally, the first vehicle prototype was tested experimentally on a chassis dynamometer using the MIDC driving cycle. The results showed that the prototype hybrid vehicle is about 27% faster than the equivalent conventional vehicle, and the fuel economy is improved by approximately 20-25% compared to the conventional powertrain.
Keywords: P3x configuration, LCV, hybrid electric vehicle, ROMAX, transmission
Procedia PDF Downloads 254
844 Connecting Critical Macro-Finance to Theories of Capitalism
Authors: Vithul Kalki
Abstract:
Mainstream political economy has failed to explain the nature and causes of systemic failures and thus to compare and comprehend how contemporary capitalist systems work. The alternative research framework of Critical Macro-Finance (CMF) is an attempt to bring political theory together with post-Keynesian economics, with the objective of finding answers to unresolved questions that have emerged since the international financial crisis and the repeated failures of capitalist systems. This unorthodox approach brings out four main propositions, namely: (a) that the adoption of American financial practices has anchored financial globalization in market-based finance; (b) that global finance is a set of interconnected, hierarchical balance sheets, increasingly subject to time-critical liquidity; (c) that credit creation in market-based finance involves new forms of money; and (d) that market-based finance structurally requires a de-risking state capable both of protecting systemic liabilities and of creating new investment opportunities. The ongoing discussion in the CMF literature has yet to be tested or even fully framed. This qualitative paper critically examines the CMF framework and engages in discussions aiming to connect CMF with theories of capitalism in a wider context, in order to bring a holistic approach to analyzing contemporary financial capitalism.
Keywords: critical macro-finance, capitalism, financial system, comparative political economy
Procedia PDF Downloads 186
843 The Impact of Technology on Architecture and Graphic Designs
Authors: Feby Zaki Raouf Fawzy
Abstract:
Nowadays, design and architecture are being affected and undergoing change with the rapid advancements in technology, economics, politics, society, and culture. Architecture has been transforming with the latest developments since the inclusion of computers in design. The integration of design into the computational environment has revolutionized architecture, and unique perspectives in architecture have been gained. The history of architecture shows the various technological developments and changes through which architecture has transformed over time. Therefore, analyzing the integration between technology and the history of the architectural process makes it possible to build a consensus on how architecture is to proceed. In this study, each period arising from the integration of technology into architecture is addressed within the historical process. At the same time, changes in architecture brought about by technology are identified as important milestones, and predictions with regard to the future of architecture are made. Developments and changes in technology, and the use of technology in architecture over the years, are analyzed comparatively in charts and graphs. The historical process of architecture and its transformation via technology is supported by a detailed literature review and consolidated through an examination of the focal points of 20th-century architecture under the headings of parametric design, genetic architecture, simulation, and biomimicry. It is concluded from this historical research between past and present that developments in architecture cannot keep up with the advancements in technology, that recent developments in technology overshadow architecture, and that technology even decides the direction of architecture. As a result, a scenario is presented with regard to the reach of technology in the future of architecture and the role of the architect.
Keywords: design and development the information technology architecture, enterprise architecture, enterprise architecture design result, TOGAF architecture development method (ADM)
Procedia PDF Downloads 69
842 Methodology for the Integration of Object Identification Processes in Handling and Logistic Systems
Authors: L. Kiefer, C. Richter, G. Reinhart
Abstract:
The rising complexity of production systems, due to an increasing number of variants and even customer-innovated products, leads to requirements that hierarchical control systems are not able to fulfil. Therefore, factory planners can install autonomous manufacturing systems. The fundamental requirement for autonomous control is the identification of objects within production systems. In this approach, attribute-based identification is the focus, in order to avoid quantity-dependent identification costs. Instead of using an identification mark (ID) such as a radio frequency identification (RFID) tag, an object type is identified directly by its attributes. To facilitate this, it is recommended to include the identification and the corresponding sensors within handling processes, which connect all manufacturing processes and therefore ensure a high identification rate and reduce blind spots. The presented methodology reduces the individual effort needed to integrate identification processes in handling systems. First, suitable object attributes and sensor systems for object identification in a production environment are defined. By categorising these sensor systems as well as the handling systems, it is possible to match them universally within a compatibility matrix. Based on that compatibility, further requirements such as identification time are analysed, which decide whether the combination of handling and sensor system is well suited for parallel handling and identification within an autonomous control. By analysing a list of more than a thousand possible attributes, first investigations have shown that five main characteristics (weight, form, colour, amount, and position of sub-attributes such as drillings) are sufficient for an integrable identification. This knowledge limits the variety of identification systems and leads to a manageable complexity within the selection process. Besides the procedure, several tools are presented, for example a sensor pool. These tools embody the generated specific expert knowledge and simplify the selection. The primary tool is a pool of preconfigured identification processes depending on the chosen combination of sensor and handling device. By following the defined procedure and using the created tools, even laypeople from other scientific fields can choose an appropriate combination of handling devices and sensors that enables parallel handling and identification.
Keywords: agent systems, autonomous control, handling systems, identification
Procedia PDF Downloads 177
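The sketch below illustrates the compatibility-matrix idea in miniature: categorised sensor and handling systems are matched, and a combination is accepted only if identification can run in parallel with handling (identification time within the available handling time). All entries and timings are invented placeholders, not results from the methodology.

```python
# Toy compatibility matrix: sensor system x handling system, filtered by identification time.
from itertools import product

# 1 = combination allows identification during handling, 0 = incompatible
compatibility = {
    ("scale",         "conveyor"): 1,
    ("scale",         "robot_gripper"): 0,
    ("camera_2d",     "conveyor"): 1,
    ("camera_2d",     "robot_gripper"): 1,
    ("laser_profile", "conveyor"): 1,
    ("laser_profile", "robot_gripper"): 0,
}
identification_time_s = {"scale": 0.8, "camera_2d": 0.2, "laser_profile": 0.5}
handling_time_s       = {"conveyor": 1.0, "robot_gripper": 0.4}

def suitable(sensor, handler):
    """Compatible and fast enough to identify in parallel with the handling step."""
    return compatibility.get((sensor, handler), 0) == 1 and \
           identification_time_s[sensor] <= handling_time_s[handler]

for sensor, handler in product(identification_time_s, handling_time_s):
    verdict = "suitable" if suitable(sensor, handler) else "rejected"
    print(f"{sensor:14s} + {handler:14s}: {verdict}")
```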
841 NanoCelle®: A Nano Delivery Platform to Enhance Medicine
Authors: Sean Hall
Abstract:
Nanosystems for drug delivery are not new; as medicines evolve, so too does the desire to deliver more targeted, patient-compliant medicines. Historically, though, the widespread use of nanosystems for drug delivery has been hampered by non-replicability, scalability and toxicity issues, and economics. Examples include the number of manufacturing steps and thus the cost of manufacture, the toxicity of nanoparticle scaffolding, autoimmune responses, and the considerable technical expertise required for small, non-commercial yields. This, unfortunately, demonstrates the not-so-obvious chasm between science and drug formulation for regulatory approval. Regardless, there is a general and global desire to improve the delivery of medicines, reduce potential side effect profiles, promote increased patient compliance, and increase and/or speed up public access to medicines. In this paper, the author discusses NanoCelle®, a nano-delivery platform that expands on fundamental micellar preparations and specifically addresses degradation and solubility issues. NanoCelle® has been deployed in several Australian listed medicines and is in use in several small-molecule drug candidates, with research endeavors now extending into large molecules. The author discusses several research initiatives relating to NanoCelle® to demonstrate the similarities seen across various drug substances; these examples include both in vitro and in vivo work.
Keywords: NanoCelle®, micellar, degradation, solubility, toxicity
Procedia PDF Downloads 180
840 PET Image Resolution Enhancement
Authors: Krzysztof Malczewski
Abstract:
PET is a widely applied scanning procedure in medical-imaging-based research. It delivers measurements of functioning in distinct areas of the human brain while the patient is comfortable, conscious, and alert. This article presents a new compressed-sensing-based super-resolution algorithm for improving the image resolution in clinical Positron Emission Tomography (PET) scanners. The issue of motion artifacts is well known in PET studies as a side effect of the procedure. PET images are acquired over a period of time, and as patients cannot hold their breath during the data gathering, spatial blurring and motion artefacts are the usual result. These may lead to an incorrect diagnosis. It is shown that the presented approach improves PET spatial resolution in cases when Compressed Sensing (CS) sequences are used. Compressed Sensing aims at reconstructing signals and images from significantly fewer measurements than were traditionally thought necessary. The application of CS to PET has the potential for significant scan time reductions, with visible benefits for patients and health care economics. In this study, the goal is to combine a super-resolution image enhancement algorithm with the CS framework to achieve high-resolution PET output. Both methods emphasize maximizing image sparsity in a known sparse transform domain while minimizing the data-fidelity error.
Keywords: PET, super-resolution, image reconstruction, pattern recognition
Procedia PDF Downloads 371
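For readers unfamiliar with the compressed sensing framework mentioned above, here is a toy sketch of sparse recovery from undersampled measurements via l1-regularised least squares solved with iterative soft thresholding (ISTA). It is a generic CS illustration with invented dimensions, not the PET algorithm of the paper.

```python
# Toy compressed-sensing recovery: fewer measurements than unknowns, sparse solution via ISTA.
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 96, 8                       # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))  # random sensing matrix
y = A @ x_true                             # undersampled measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(1000):                      # ISTA: gradient step + soft threshold
    x = x - step * A.T @ (A @ x - y)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("relative error:", round(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)), 4))
```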
839 Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe
Authors: Umberto Di Staso, Marco Soave, Alessio Giori, Federico Prandi, Raffaele De Amicis
Abstract:
The increasing availability of information about the earth's surface elevation (Digital Elevation Models, DEMs), generated from different sources (remote sensing, aerial images, LiDAR), poses the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to the high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Hence the need to merge large-scale height maps, which are typically made available for free at the worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the data representation quality, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environmental monitoring, etc. Open-source 3D virtual globes, which are a trending topic in geovisual analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, however, 3D virtual globes do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called "Terrain Builder". This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
Keywords: Terrain Builder, WebGL, Virtual Globe, CesiumJS, Tiled Map Service, TMS, Height-Map, Regular Grid, Geovisual Analytics, DTM
Procedia PDF Downloads 426
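The following sketch condenses the core merging idea into a few lines: a coarse worldwide height map is resampled to the target resolution, the window covered by a higher-resolution survey is overwritten with the better data, and fixed-size tiles are cut for one level of a TMS-style pyramid. Grid sizes, placement, and the nearest-neighbour resampling are illustrative assumptions, not the Terrain Builder's actual implementation.

```python
# Toy heterogeneous-resolution DEM merge and tiling for one zoom level.
import numpy as np

def resample_nearest(grid, out_shape):
    """Nearest-neighbour resampling of a regular height grid."""
    rows = np.linspace(0, grid.shape[0] - 1, out_shape[0]).round().astype(int)
    cols = np.linspace(0, grid.shape[1] - 1, out_shape[1]).round().astype(int)
    return grid[np.ix_(rows, cols)]

# Coarse global-style DEM (low resolution) and a detailed local survey
coarse = np.random.default_rng(0).uniform(0, 500, (64, 64))      # whole region
detail = np.random.default_rng(1).uniform(200, 300, (256, 256))  # small survey area

# Bring the coarse DEM to the target resolution, then overwrite the window covered
# by the high-resolution dataset (priority to the best available source).
target = resample_nearest(coarse, (1024, 1024))
r0, c0 = 300, 450                       # placement of the detailed survey (assumed)
target[r0:r0 + 256, c0:c0 + 256] = detail

# Cut fixed-size tiles (e.g. 256x256 height maps) for one zoom level of the pyramid
tiles = {(i, j): target[i * 256:(i + 1) * 256, j * 256:(j + 1) * 256]
         for i in range(4) for j in range(4)}
print(len(tiles), "tiles of shape", tiles[(0, 0)].shape)
```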
838 The Effect of Artificial Intelligence on Urbanism, Architecture and Environmental Conditions
Authors: Abanoub Rady Shaker Saleb
Abstract:
Nowadays, design and architecture are being affected and undergoing change with the rapid advancements in technology, economics, politics, society, and culture. Architecture has been transforming with the latest developments since the inclusion of computers in design. The integration of design into the computational environment has revolutionized architecture, and new perspectives in architecture have been gained. The history of architecture shows the various technological developments and changes through which architecture has transformed over time. Therefore, analyzing the integration between technology and the history of the architectural process makes it possible to build a consensus on how architecture is to proceed. In this study, each period arising from the integration of technology into architecture is addressed within the historical process. At the same time, changes in architecture brought about by technology are identified as important milestones, and predictions with regard to the future of architecture are made. Developments and changes in technology, and the use of technology in architecture over the years, are analyzed comparatively in charts and graphs. The historical process of architecture and its transformation via technology are supported by a detailed literature review and consolidated through an examination of the focal points of 20th-century architecture under the headings of parametric design, genetic architecture, simulation, and biomimicry. It is concluded from this historical research between past and present that developments in architecture cannot keep up with the advancements in technology, that recent developments in technology overshadow architecture, and that technology even decides the direction of architecture. As a result, a scenario is presented with regard to the reach of technology in the future of architecture and the role of the architect.
Keywords: design and development the information technology architecture, enterprise architecture, enterprise architecture design result, TOGAF architecture development method (ADM)
Procedia PDF Downloads 69
837 Policy Recommendations for Reducing CO2 Emissions in Kenya's Electricity Generation, 2015-2030
Authors: Paul Kipchumba
Abstract:
Kenya is an East African country lying on the Equator. It had a population of 46 million in 2015 with an annual growth rate of 2.7%, implying a population of at least 65 million in 2030. Kenya's GDP in 2015 was about 63 billion USD, with a per capita GDP of about 1,400 USD. The rural population is 74%, whereas the urban population is 26%. Kenya grapples not only with access to energy but also with energy security. There is a direct correlation between economic growth, population growth, and energy consumption. Kenya's electricity mix is at least 74.5% renewable, with hydro power and geothermal forming the bulk of it; its overall energy supply is 68% wood fuel, 22% petroleum, 9% electricity, and 1% coal and other sources. Wood fuel is used by the majority of the rural and poor urban population, while electricity is mostly used for lighting. As of March 2015, Kenya had an installed electricity capacity of 2,295 MW, corresponding to a per capita installed capacity of 0.0499 kW. The overall retail cost of electricity in 2015 was 0.009915 USD/kWh (KES 19.85/kWh) for installed capacity over 10 MW. The actual demand for electricity in 2015 was 3,400 MW, and the projected demand in 2030 is 18,000 MW. Kenya is working on Vision 2030, which aims at making it a prosperous middle-income economy and targets 23 GW of generated electricity. However, cost and non-cost factors affect the generation and consumption of electricity in Kenya. Kenya does not care more about CO2 emissions than about economic growth. Carbon emissions are most likely to be paid for through the future costs of carbon emissions and the penalties imposed on local generating companies for sheer disregard of international law on CO2 emissions and climate change. The study methodology was a simulated application of a carbon tax on all carbon-emitting sources of electricity generation. A tax of only USD 30/tCO2 on all emitting sources of electricity generation would be enough for solar to become the only source of electricity generation in Kenya. The country has excellent, evenly distributed global horizontal irradiation. The solar potential, after accounting for technology efficiencies of 14-16% for solar PV and 15-22% for solar thermal, is 143.94 GW. Therefore, the paper recommends the adoption of solar power for generating all electricity in Kenya in order to attain zero-carbon electricity generation in the country.
Keywords: CO2 emissions, cost factors, electricity generation, non-cost factors
Procedia PDF Downloads 365
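As a quick sanity check on two of the figures quoted above, the short sketch below recomputes the per capita installed capacity and compares the stated solar potential with the projected 2030 demand; the input numbers are taken from the abstract, while the comparison itself is only an illustration.

```python
# Back-of-the-envelope check of figures quoted in the abstract.
population_2015 = 46_000_000
installed_mw_2015 = 2_295
per_capita_kw = installed_mw_2015 * 1_000 / population_2015
print(f"Per capita installed capacity: {per_capita_kw:.4f} kW")    # ~0.0499 kW, as stated

projected_demand_mw_2030 = 18_000
solar_potential_gw = 143.94
print("Solar potential covers projected 2030 demand:",
      solar_potential_gw * 1_000 >= projected_demand_mw_2030)      # True, roughly 8x headroom
```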
836 Effects of Artificial Nectar Feeders on Bird Distribution and Erica Visitation Rate in the Cape Fynbos
Authors: Monique Du Plessis, Anina Coetzee, Colleen L. Seymour, Claire N. Spottiswoode
Abstract:
Artificial nectar feeders are used to attract nectarivorous birds to gardens and are increasing in popularity. The costs and benefits of these feeders remain controversial, however. Nectar feeders may have positive effects by attracting nectarivorous birds towards suburbia, facilitating their urban adaptation, and supplementing bird diets when floral resources are scarce. However, this may come at the cost of luring them away from the plants they pollinate in neighboring indigenous vegetation. This study investigated the effect of nectar feeders on an African pollinator-plant mutualism. Given that birds are important pollinators to many fynbos plant species, this study was conducted in gardens and natural vegetation along the urban edge of the Cape Peninsula. Feeding experiments were carried out to compare relative bird abundance and local distribution patterns for nectarivorous birds (i.e., sunbirds and sugarbirds) between feeder and control treatments. Resultant changes in their visitation rates to Erica flowers in the natural vegetation were tested by inspection of their anther ring status. Nectar feeders attracted higher densities of nectarivores to gardens relative to natural vegetation and decreased their densities in the neighboring fynbos, even when floral abundance in the neighboring vegetation was high. The consequent changes to their distribution patterns and foraging behavior decreased their visitation to at least Erica plukenetii flowers (but not to Erica abietina). This study provides evidence that nectar feeders may have positive effects for birds themselves by reducing their urban sensitivity but also highlights the unintended negative effects feeders may have on the surrounding fynbos ecosystem. Given that nectar feeders appear to compete with the flowers of Erica plukenetii, and perhaps those of other Erica species, artificial feeding may inadvertently threaten bird-plant pollination networks.
Keywords: avian nectarivores, bird feeders, bird pollination, indirect effects in human-wildlife interactions, sugar water feeders, supplementary feeding
Procedia PDF Downloads 155