Search results for: hard turning
73 Data Quality on Regular Childhood Immunization Programme at Degehabur District: Somali Region, Ethiopia
Authors: Eyob Seife
Abstract:
Immunization is a life-saving intervention that prevents needless suffering through sickness, disability, and death. Emphasis on data quality and use will become even stronger with the development of the Immunization Agenda 2030 (IA2030). Quality of data is a key factor in generating reliable health information that enables monitoring progress, financial planning, vaccine forecasting, and making decisions for continuous improvement of the national immunization program. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remains critical and challenging, especially in hard-to-reach and pastoralist areas. Degehabur district was selected to test the hypothesis that there is no difference between reported and recounted immunization data. Data quality depends on several factors, with organizational, behavioral, technical, and contextual factors among them. A cross-sectional quantitative study was conducted in September 2022 in the Degehabur district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers, and reporting documents were reviewed at 5 health facilities (2 health centers and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) was assessed, and accuracy ratios were computed for the first and third doses of the pentavalent vaccine, fully immunized (FI) children, and the first dose of measles-containing vaccine (MCV). In this study, facility-level results showed both over-reporting and under-reporting at health posts when computing the accuracy ratio of tally sheets to the health post reports held at health centers for almost all antigens verified: the pentavalent 1 ratio was 88.3%, 60.4%, and 125.6% for health posts A, B, and C, respectively. For the first dose of measles-containing vaccine (MCV), the accuracy ratio was similarly 126.6%, 42.6%, and 140.9% for health posts A, B, and C, respectively. The accuracy ratio for fully immunized children was 0% for health posts A and B and 100% for health post C. A relatively better accuracy ratio was seen at health centers, where the first pentavalent dose was 97.4% and 103.3% for health centers A and B, while the first dose of measles-containing vaccine (MCV) was 89.2% and 100.9% for health centers A and B, respectively. The quality index (QI) of all facilities ranged between a maximum of 33.33% and a minimum of 0%. Most of the verified immunization data accuracy ratios were relatively better at the health center level. However, the quality of the monitoring system is poor at all levels, in addition to poor data accuracy at all health posts. Attention should therefore be given to improving the capacity of staff and the quality of monitoring system components, namely recording, reporting, archiving, data analysis, and using information for decision-making at all levels, especially in pastoralist areas, alongside improving data quality at the root and health post levels.
Keywords: accuracy ratio, Degehabur District, regular childhood immunization program, quality of monitoring system, Somali Region-Ethiopia
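A minimal sketch of the DQS-style verification at the heart of this study, assuming the accuracy ratio is the recounted (tally sheet) figure over the reported figure, expressed as a percentage; the example counts below are hypothetical, not the study's data.

```python
# Hypothetical sketch: accuracy ratio as recounted (tally) doses over reported doses.
# Under this convention, a ratio above 100% means the report understates the tally
# and a ratio below 100% overstates it; the direction depends on which count is
# taken as the numerator, which is an assumption here.

def accuracy_ratio(recounted, reported):
    """Tally-sheet count over reported count, as a percentage."""
    if reported == 0:
        return None  # undefined when nothing was reported
    return 100.0 * recounted / reported

# hypothetical (recounted, reported) pentavalent-1 doses per health post
posts = {"A": (251, 284), "B": (145, 240), "C": (312, 248)}
for name, (tally, report) in posts.items():
    print(f"Health post {name}: {accuracy_ratio(tally, report):.1f}%")
```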
Procedia PDF Downloads 107
72 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm
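The first-echelon calculation lends itself to a classic bin-packing heuristic. The Python sketch below estimates the number of workstations with first-fit decreasing under a per-station picker capacity; the workloads and capacity are illustrative assumptions, not data from the study, and the paper's own non-standard formulation adds further constraints.

```python
# Hypothetical sketch of the first echelon: the workstation count as a
# capacity-constrained bin-packing problem, solved with first-fit decreasing.

def min_workstations(workloads, capacity):
    """First-fit-decreasing heuristic: returns bins (workstations) as lists of workloads."""
    bins = []  # each bin holds assigned workloads summing to <= capacity
    for w in sorted(workloads, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)  # fits in an already-open workstation
                break
        else:
            bins.append([w])  # open a new workstation
    return bins

stations = min_workstations([8, 7, 6, 5, 4, 4, 3, 2], capacity=12)
print(len(stations), stations)  # number of stations and the load packed into each
```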
Procedia PDF Downloads 228
71 Averting a Financial Crisis through Regulation, Including Legislation
Authors: Maria Krambia-Kapardis, Andreas Kapardis
Abstract:
The paper discusses regulatory and legislative measures implemented by various nations in an effort to avert another financial crisis. More specifically, to address the financial crisis, the European Commission followed the practice of other developed countries and implemented a European Economic Recovery Plan in an attempt to overhaul the regulatory and supervisory framework of the financial sector. In 2010 the Commission introduced the European Systemic Risk Board and in 2011 the European System of Financial Supervision. Some experts have argued that the type and extent of financial regulation introduced in Europe in the wake of the 2008 crisis has been excessive and counterproductive. In considering how different countries responded to the financial crisis, global regulators have shown a more focused commitment to combating industry misconduct and pre-empting abusive behavior. Regulators have also increased the funding and resources at their disposal; have increased regulatory fines, with an increasing trend towards action against individuals; and, finally, have focused on market abuse and market conduct issues. Financial regulation can be effected, first of all, through legislation. However, neither ex ante nor ex post regulation is by itself effective in reducing systemic risk. Consequently, to avert a financial crisis, in their endeavor to achieve both economic efficiency and financial stability, governments need to balance the two approaches to financial regulation. Fiduciary duty is another means by which the behavior of actors in the financial world is constrained and, thus, regulated. Furthermore, fiduciary duties extend over and above other existing requirements set out by statute and/or common law and cover allegations of breach of fiduciary duty, negligence, or fraud. Careful analysis of the etiology of the 2008 financial crisis demonstrates the great importance of corporate governance as a way of regulating boardroom behavior. In addition, the regulation of professions, including accountants and auditors, plays a crucial role as far as the financial management of companies is concerned. In the US, the Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board in order to protect investors from financial accounting fraud. In most countries around the world, however, accounting regulation consists of a legal framework, international standards, education, and licensure. Accounting regulation is necessary because of the information asymmetry and the conflict of interest that exist between managers and users of financial information. If a holistic approach is to be taken, then one cannot ignore the regulation of legislators themselves, which can take the form of hard or soft legislation. The science of averting a financial crisis is yet to be perfected and, as the preceding discussion shows, is unlikely to be perfected in the foreseeable future, as 'disaster myopia' may be reduced but will not be eliminated. It is easier, of course, to be wise in hindsight, and regulating unreasonably risky decisions and unethical or outright criminal behavior in the financial world remains a major challenge for governments, corporations, and professions alike.
Keywords: financial crisis, legislation, regulation, financial regulation
Procedia PDF Downloads 398
70 A Supply Chain Risk Management Model Based on Both Qualitative and Quantitative Approaches
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
In today’s business, it is well recognized that risk is an important factor that needs to be taken into consideration before a decision is made. Studies indicate that both the number of risks faced by organizations and their potential consequences are growing. Supply chain risk management has become one of the major concerns for practitioners and researchers. Supply chain leaders and scholars are now focusing on the importance of managing supply chain risk. In order to meet the challenge of managing and mitigating supply chain risk (SCR), we must first identify the different dimensions of SCR and assess its relevant probability and severity. SCR has been classified in many different ways; there are no consistently accepted dimensions of SCR, and several different classifications are reported in the literature. Basically, supply chain risks can be classified into two dimensions, namely disruption risk and operational risk. Disruption risks are those caused by events such as bankruptcy, natural disasters, and terrorist attacks. Operational risks are related to supply and demand coordination and uncertainty, such as uncertain demand and uncertain supply. Disruption risks are rare but severe and hard to manage, while operational risk can be reduced through effective SCM activities. Other SCRs include supply risk, process risk, demand risk, and technology risk. In fact, the disorganized classification of SCR has created confusion for SCR scholars. Moreover, practitioners need to identify and assess SCR. As such, it is important to have an overarching framework tying all these SCR dimensions together, for two reasons. First, it helps researchers use these terms to communicate ideas based on the same concept. Second, a shared understanding of the SCR dimensions will support researchers in focusing on the more important research objective: operationalization of SCR, which is very important for assessing SCR. In general, the fresh food supply chain is subject to a certain level of risk, such as supply risk (low quality, delivery failure, hot weather, etc.) and demand risk (seasonal food imbalance, new competitors). Effective strategies to mitigate fresh food supply chain risk are required to enhance operations. Before implementing effective mitigation strategies, we need to identify the risk sources and evaluate the risk level. However, assessing supply chain risk is not an easy matter, and existing research mainly uses qualitative methods, such as the risk assessment matrix. To address the relevant issues, this paper aims to analyze the risk factors of the fresh food supply chain using an approach comprising both fuzzy logic and hierarchical holographic modeling techniques. This novel approach is able to take advantage of the benefits of both of these well-known techniques and at the same time offset their drawbacks in certain aspects. In order to develop this integrated approach, substantial research work is needed to combine these two techniques effectively and seamlessly. To validate the proposed integrated approach, a case study in a fresh food supply chain company was conducted to verify the feasibility of its functionality in a real environment.
Keywords: fresh food supply chain, fuzzy logic, hierarchical holographic modelling, operationalization, supply chain risk
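To make the fuzzy-logic side of such an assessment concrete, the Python sketch below rates a single risk factor from fuzzified likelihood and severity using triangular membership functions and a pessimistic Mamdani-style rule base; the breakpoints, rule base, and crisp output levels are illustrative assumptions, not the paper's calibrated model.

```python
# Illustrative fuzzy risk rating for one risk factor on a 0-10 scale.
# Membership breakpoints and output levels are assumed values.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x):
    return {"low": tri(x, -1, 0, 5), "med": tri(x, 2, 5, 8), "high": tri(x, 5, 10, 11)}

def risk_score(likelihood, severity):
    L, S = fuzzify(likelihood), fuzzify(severity)
    levels = {"low": 2.0, "med": 5.0, "high": 8.5}  # representative crisp outputs
    num = den = 0.0
    for l_name, l_mu in L.items():
        for s_name, s_mu in S.items():
            fire = min(l_mu, s_mu)                     # rule firing strength
            out = max(levels[l_name], levels[s_name])  # the worse of the two drives risk
            num += fire * out
            den += fire
    return num / den if den else 0.0  # weighted-average defuzzification

# e.g. a supply risk with moderate likelihood but high severity (hot-weather spoilage)
print(round(risk_score(likelihood=4, severity=8), 2))
```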
Procedia PDF Downloads 243
69 Improving Binding Selectivity in Molecularly Imprinted Polymers from Templates of Higher Biomolecular Weight: An Application in Cancer Targeting and Drug Delivery
Authors: Ben Otange, Wolfgang Parak, Florian Schulz, Michael Alexander Rubhausen
Abstract:
The feasibility of extending the molecular imprinting technique to complex biomolecules is demonstrated in this research. This technique is promising in diverse areas such as drug delivery, disease diagnosis, catalysis, impurity detection, and the treatment of various complications. While molecularly imprinted polymers (MIPs) remain robust for synthesizing molecules with remarkable binding sites that have high affinities for specific molecules of interest, extending their use to complex biomolecules has remained futile. This work reports on the successful synthesis of MIPs from complex proteins: BSA, transferrin, and MUC1. We show that, despite the heterogeneous binding sites and higher conformational flexibility of the chosen proteins, relying on their respective epitopes and motifs rather than the whole template produces highly sensitive and selective MIPs for specific molecular binding. Introduction: Proteins are vital in most biological processes, ranging from cell structure and structural integrity to complex functions such as transport and immunity in biological systems. Unlike other imprinting templates, proteins have heterogeneous binding sites in their complex long-chain structure, which makes their imprinting challenging. In addressing this challenge, our attention is directed toward targeted delivery, using molecular imprinting on the particle surface so that these particles may recognize overexpressed proteins on the target cells. Our goal is thus to make nanoparticle surfaces that specifically bind to the target cells. Results and Discussions: Using epitopes of the BSA and MUC1 proteins and motifs with conserved receptors of transferrin as the respective templates for MIPs, significant improvement in MIP sensitivity to the binding of complex protein templates was noted. Through fluorescence correlation spectroscopy (FCS) measurements of the size of the protein corona after incubation of the synthesized nanoparticles with proteins, we noted a high affinity of the MIPs for the binding of their respective complex proteins. In addition, quantitative analysis of the hard corona using SDS-PAGE showed that only the specific protein was strongly bound on the respective MIPs when incubated with similar concentrations of the protein mixture. Conclusion: Our findings have shown that the merits of MIPs can be extended to complex molecules of higher biomolecular mass. As such, the unique merits of the technique, including high sensitivity and selectivity, relative ease of synthesis, production of materials with higher physical robustness, and higher stability, can be extended to more templates that were previously not suitable candidates despite their abundance and usage within the body.
Keywords: molecularly imprinted polymers, specific binding, drug delivery, high biomolecular mass-templates
Procedia PDF Downloads 55
68 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System
Authors: Masoud Mirzaee, Ghobad Behzadi Pour
Abstract:
An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories are filled with compressed nitrogen, which supports the aircraft’s weight on the ground, provides a means of controlling the aircraft during taxi, takeoff, and landing, and provides traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, when the temperature differs between the origin and destination airports, tire pressure should be adjusted and inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal 5 percent over-inflation limit that applies at constant ambient temperature, is required so that the inflation pressure remains adequate to support the load of a specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Due to the rise of human error in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. An intelligent system that adjusts aircraft tire pressure based on the weight, load, temperature, and weather conditions of the origin and destination airports could have a significant effect on reducing aircraft maintenance costs and fuel consumption, and on mitigating the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a 1,800 psi nitrogen bottle, and distribution lines. The nitrogen bottle’s inlet and outlet valves are installed in the main wheel landing gear area and are connected through nitrogen lines to the main and nose wheel assemblies. Nitrogen is controlled and monitored by a computer, which adjusts it according to calculations from the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human error, material consumption, and the stresses imposed on the aircraft body.
Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure
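The temperature compensation at the core of such a system follows directly from the ideal gas law for a fixed tire volume (P1/T1 = P2/T2, temperatures in kelvin). The Python sketch below shows the calculation; the service pressure and airport temperatures are illustrative values, not ITPRS specifications.

```python
# Minimal sketch of gas-law temperature compensation for a fixed tire volume.
# All pressures and temperatures below are assumed example values.

def adjusted_pressure(p_origin_psi, t_origin_c, t_dest_c):
    """Pressure (psi) the tire settles at after moving between ambient temperatures."""
    return p_origin_psi * (t_dest_c + 273.15) / (t_origin_c + 273.15)

def required_inflation(p_service_psi, t_origin_c, t_dest_c):
    """Pressure to set at the origin so the tire reads its service pressure at destination."""
    return p_service_psi * (t_origin_c + 273.15) / (t_dest_c + 273.15)

# An airliner tire serviced to 200 psi at a 35 C airport, landing at a -5 C airport:
print(round(adjusted_pressure(200, 35, -5), 1))   # ~174 psi: significantly under-inflated
# Set at the colder airport so the tire reads 200 psi on arrival at the 35 C airport:
print(round(required_inflation(200, -5, 35), 1))  # ~174 psi set cold reads 200 psi warm
```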
Procedia PDF Downloads 249
67 Characterization of Alloyed Grey Cast Iron Quenched and Tempered for a Smooth Roll Application
Authors: Mohamed Habireche, Nacer E. Bacha, Mohamed Djeghdjough
Abstract:
In the brick industry, the smooth double-roll crusher is used for medium and fine crushing of soft to medium-hard material. Due to the opposite inward rotation of the rolls, the feed material is nipped between the rolls and crushed by compression. The rolls are subject to intense wear, known as three-body abrasion, due to the action of abrasive products. The production downtime affecting productivity stems from two sources: the bi-monthly rectification of the roll crushers and their replacement when they are completely worn out. Choosing the right material for the roll crushers should result in longer machine cycles and reduced repair and maintenance costs. All roll crushers are imported from outside Algeria. This sometimes results in very long delivery times, which handicap the brickyards, in particular in respecting delivery dates and honoring the orders placed by customers. The aim of this work is to investigate the effect of alloying additions on the microstructure and wear behavior of grey lamellar cast iron for smooth roll crushers in the brick industry. The base grey iron was melted in a low-frequency induction furnace at a temperature of 1500 °C, in which return cast iron scrap, new cast iron ingot, and steel scrap were added to the melt to generate the desired composition. The chemical analysis of the bar samples was carried out using an Emission Spectrometer Systems PV 8050 Series (Philips), except for the carbon, for which a carbon/sulphur analyser Elementrac CS-i was used. The unetched microstructure was used to evaluate the graphite flake morphology using the image comparison measurement method. At least five different fields were selected for quantitative estimation of phase constituents. The samples were observed under 100× magnification with a Zeiss Axiover T40 MAT optical microscope equipped with a digital camera. An SEM equipped with EDS was used to characterize the phases present in the microstructure. The hardness (750 kg load, 5 mm diameter ball) was measured with a Brinell testing machine for both heat-treated and as-solidified test pieces. The test bars were used for tensile strength and metallographic evaluations. Mechanical properties were evaluated using tensile specimens made as per ASTM E8 standards. Two specimens were tested for each alloy, and from each rod a test piece was made for the tensile test. The results showed that the quenched and tempered alloys had the best wear resistance at 400 °C for the alloyed grey cast iron (containing 0.62% Mn, 0.68% Cr, and 1.09% Cu) due to fine carbides in the tempered matrix. In the quenched and tempered condition, increasing the Cu content in cast irons improved wear resistance moderately. Combined addition of Cu and Cr increases the hardness and wear resistance of a quenched and tempered hypoeutectic grey cast iron.
Keywords: casting, cast iron, microstructure, heat treating
Procedia PDF Downloads 105
66 A Negotiation Model for Understanding the Role of International Law in Foreign Policy Crises
Authors: William Casto
Abstract:
Studies that consider the actual impact of international law upon foreign affairs crises are flawed by an unrealistic model of decision making. The common, unexamined assumption is that a nation has a unitary executive or ruler who weighs a wide variety of considerations, including international law, in attempting to resolve a crisis. To the extent that negotiation theory is considered, the focus is on negotiations between or among nations. The unsettling result is a shallow focus that concentrates on each country’s public posturing about international law. The country-to-country model ignores the internal negotiations within governments that lead to their formal position in a crisis. The model for foreign policy crises needs to be supplemented with a model of internal negotiations. Important foreign policy decisions come from groups within a government: committees, advisers, etc. Within these groups, participants may have differing agendas and resort to international law to bolster their positions. To understand the influence of international law in international crises, these internal negotiations must be considered. These negotiations are crucial to creating a foreign policy agenda or recommendations. External negotiations between the two nations are significant, but the internal negotiations provide a better understanding of the actual influence of international law upon international crises. Discovering the details of specific internal negotiations is quite difficult but not necessarily impossible. The present proposal will use a specific crisis to illustrate the role of international law. In 1861, during the American Civil War, a United States Navy captain stopped a British mail ship and removed two ambassadors of the rebelling southern states. The result was what is commonly called the Trent Affair. In the wake of the captain’s unauthorized and rash action, Great Britain seriously considered going to war against the United States. A detailed analysis of the Trent Affair is possible using the available and extensive internal British correspondence and memoranda to reach an understanding of the effect of international law upon decision making. The extensive trove of internal British documents is particularly valuable because in 1861 the only effective means of communication was face-to-face or through letters. Telephones did not exist, and travel by horse and carriage was tedious. The British documents tell us how individual participants viewed the process. We can approach an accurate understanding of what actually happened as the British government strove to resolve the crisis. For example, British law officers initially concluded that the American captain’s rash act was permissible under international law. Later, the law officers revised their opinion. A model of internal negotiation is particularly valuable because it strips away nations’ public posturing about disputed international law principles. In internal decision making, there is room for meaningful debate over the relevant principles. This fluid debate shows how international law is used to develop a hard, public bargaining position. The Trent Affair indicates that international law had an actual influence upon the crisis and that the law was not mere window dressing for the government’s public position.
Keywords: foreign affairs crises, negotiation, international law, Trent affair
Procedia PDF Downloads 127
65 Municipal Solid Waste Management in Ethiopia: Systematic Review of Physical and Chemical Compositions and Generation Rate
Authors: Tsegay Kahsay Gebrekidan, Gebremariam Gebrezgabher Gebremedhin, Abraha Kahsay Weldemariam, Meaza Kidane Teferi
Abstract:
Municipal solid waste management (MSWM) in Ethiopia is a complex issue with institutional, social, political, environmental, and economic dimensions, impacting sustainable development. Effective MSWM planning necessitates understanding the generation rate and composition of waste. This systematic review synthesizes qualitative and quantitative data from various sources to aggregate current knowledge, identify gaps, and provide a comprehensive understanding of municipal solid waste management in Ethiopia. The findings reveal that the generation rate of municipal solid waste in Ethiopia is 0.38 kg/ca/day, with the waste composition being predominantly food waste, followed by ash, dust, and sand, and yard waste. Over 85% of this MSW is either reusable or recyclable, with a significant portion being organic matter (73.13% biodegradable) and 11.78% recyclable materials. Physicochemical analyses reveal that Ethiopian MSW is suitable for composting and biogas production, offering opportunities to reduce environmental pollution and GHG emissions, support urban agriculture, and create job opportunities. However, challenges persist, including a lack of political will, weak municipal planning, limited community awareness, and inadequate waste management infrastructure; only 31.8% of MSW is collected legally, leading to inefficient and harmful disposal practices. To improve MSWM, Ethiopia should focus on public awareness, increased funding, infrastructure investment, private sector partnerships, and implementing the 3R principles (reduce, reuse, and recycle). An integrated approach involving government, industry, and civil society is essential. Further research on the physicochemical properties and strategic uses of MSW is needed to enhance management practices. Implications: The comprehensive study of municipal solid waste management (MSWM) in Ethiopia reveals the intricate interplay of institutional, social, political, environmental, and economic factors that influence the nation’s sustainable development. The findings underscore the urgent need for tailored, integrated waste management strategies that are informed by a thorough understanding of MSW generation rates, composition, and current management practices. Ethiopia’s lower per capita MSW generation compared to developed countries and the predominantly organic composition of its waste present significant opportunities for sustainable waste management practices such as composting and recycling. These practices can not only minimize the environmental impact but also support urban greening, agriculture, and renewable energy production. The high organic content, the physicochemical properties of MSW suitable for composting, and the potential for biogas and briquette production highlight pathways for creating employment, reducing waste, and enhancing soil fertility. Despite these opportunities, Ethiopia faces substantial challenges due to inadequate political will, weak municipal planning, limited community awareness, insufficient waste management infrastructure, and poor policy implementation. The high rate of illegal waste disposal further exacerbates environmental and health issues, emphasizing the need for a more effective and integrated MSWM approach. To address these challenges and harness the potential of MSW, Ethiopia must prioritize increasing public awareness, investing in infrastructure, fostering private sector partnerships, and implementing the principles of reduce, reuse, and recycle (3R).
Developing strategies that involve all stakeholders and turn waste into valuable resources is crucial. Government, industry, and civil society must collaborate to implement integrated MSWM systems that focus on waste reduction at the source, alternative material use, and advanced recycling technologies. Further research at both federal and regional levels is essential to optimize the physicochemical analysis and strategic use of MSW. Prompt action is required to transform waste management into a pillar of sustainable urban development, ultimately improving environmental quality and human health in Ethiopia.
Keywords: biodegradable, healthy environment, integrated solid waste management, municipal
Procedia PDF Downloads 14
64 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR
Authors: Ionut Vintu, Stefan Laible, Ruth Schulz
Abstract:
Agricultural robotics has been developing steadily over recent years, with the goals of reducing and even eliminating the pesticides used on crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for the robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and offers greater flexibility. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated pose of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and deal with exception handling, like row gaps, which are falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform the best, with 68% of rows covered and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work would focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants when they overlap. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection
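As a concrete illustration of the row-detection step, the Python sketch below projects a LiDAR point cloud onto the ground plane and fits the dominant row line with RANSAC; the thresholds and the synthetic cloud are assumptions for demonstration, not one of the paper's four methods.

```python
# Simplified sketch: detect one plant row in a 3D point cloud by projecting to the
# ground plane and RANSAC-fitting a dominant line.
import numpy as np

def fit_row_ransac(points_xyz, n_iters=200, inlier_dist=0.05, seed=0):
    """Return (point_on_line, unit_direction, inlier_mask) for the best 2D line."""
    rng = np.random.default_rng(seed)
    pts = points_xyz[:, :2]  # drop height: rows are linear structures on the ground
    best_a, best_d, best_mask = None, None, np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        a, b = pts[rng.choice(len(pts), size=2, replace=False)]
        d = b - a
        if np.linalg.norm(d) < 1e-6:
            continue
        d = d / np.linalg.norm(d)
        rel = pts - a
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])  # perpendicular distance
        mask = dist < inlier_dist
        if mask.sum() > best_mask.sum():
            best_a, best_d, best_mask = a, d, mask
    return best_a, best_d, best_mask

# synthetic scene: a row of trees along x plus off-row clutter
row = np.c_[np.linspace(0, 5, 60), 0.02 * np.random.randn(60), np.random.rand(60)]
clutter = np.random.uniform([-1, -2, 0], [6, 2, 1], size=(30, 3))
a, d, mask = fit_row_ransac(np.vstack([row, clutter]))
print("row direction:", np.round(d, 3), "| inliers:", mask.sum())
```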
Procedia PDF Downloads 139
63 An Investigation into Why Very Few Small Start-Ups Business Survive for Longer Than Three Years: An Explanatory Study in the Context of Saudi Arabia
Authors: Motaz Alsolaim
Abstract:
Nowadays, the challenges of running a start-up can be very complex and are perhaps more difficult than at any other time in the past. Changes in technology, manufacturing innovation, and product development, combined with intense competition and market regulations, are factors that have put pressure on classic ways of managing firms, thereby forcing change. As a result, the rate of closure, exit, or discontinuation of start-ups and young businesses is very high. Despite the essential role of small firms in an economy, they still tend to face obstacles that exert a negative influence on their performance and rate of survival. In fact, it is not easy to determine with any certainty the reasons why small firms fail. For this reason, failure itself is not clearly defined, and its exact causes are hard to diagnose. In this study, therefore, the barriers to survival are covered broadly, especially personal/entrepreneurial, enterprise, and environmental factors, with regard to the various possible reasons for failure, in order to determine the best solutions and make appropriate recommendations. Methodology: It could be argued that mixed methods might help improve entrepreneurship research by addressing challenges emphasized in previous studies and by achieving triangulation. Calls for the combined use of quantitative and qualitative research have also been made in the entrepreneurship field, since entrepreneurship is a multi-faceted area of research. Therefore, an explanatory sequential mixed-methods design was used: an online questionnaire survey of entrepreneurs, followed by semi-structured interviews. Over 750 surveys were collected, of which 296 were valid, followed by 13 interviews with senior government officials, businessmen, successful entrepreneurs, and unsuccessful entrepreneurs. Findings: The first (quantitative) phase identified the obstacles to survival. Among personal/entrepreneurial factors, past work experience and lack of skills and interest are positive factors, while the gender, age, and education level of the owner are negative factors. Internal factors such as lack of marketing research and weak business planning are positive factors. Among environmental factors, from an economic perspective, difficulty in finding labor is a negative factor, as are, from a socio-cultural perspective, social restrictions and traditions. On the other hand, from a political perspective, the cost of compliance and insufficient government plans were found to be positive factors for small business failure. From an infrastructure perspective, lack of skilled labor, a high level of bureaucracy, and lack of information are positive factors. Conclusion: This paper serves to enrich the understanding of failure factors in the MENA region, and more precisely in Saudi Arabia, by helping to minimize the probability of failure of small and micro entrepreneurial start-ups in light of the Saudi government’s Vision 2030 plan.
Keywords: small business barriers, start-up business, entrepreneurship, Saudi Arabia
Procedia PDF Downloads 177
62 Augmented Reality Enhanced Order Picking: The Potential for Gamification
Authors: Stavros T. Ponis, George D. Plakas-Koumadorakis, Sotiris P. Gayialis
Abstract:
Augmented Reality (AR) can be defined as a technology which takes the capabilities of computer-generated display, sound, text, and effects to enhance the user's real-world experience by overlaying virtual objects onto the real world. By doing so, AR can provide a vast array of work support tools, which can significantly increase employee productivity, enhance existing job training programs by making them more realistic, and in some cases introduce completely new forms of work and task execution. One of the most promising industrial AR applications, as the literature shows, is the use of Head-Worn Displays (HWDs), monocular or binocular, to support logistics and production operations, such as order picking, part assembly, and maintenance. This paper presents the initial results of an ongoing research project for the introduction of a dedicated AR-HWD solution to the picking process of a Distribution Center (DC) in Greece operated by a large Telecommunication Service Provider (TSP). In that context, the proposed research aims to determine whether gamification elements should be integrated into the functional requirements of the AR solution, such as providing points for reaching objectives and creating leaderboards and awards (e.g., badges) for general achievements. Up to now, there has been ambiguity about the impact of gamification on logistics operations, since the gamification literature mostly focuses on non-industrial organizational contexts such as education and customer/citizen-facing applications, such as tourism and health. To the contrary, the gamification efforts described in this study focus on one of the most labor-intensive and workflow-dependent logistics processes, i.e., Customer Order Picking (COP). Although introducing AR in COP undoubtedly creates significant opportunities for workload reduction and increased process performance, the added value of gamification is far from certain. This paper aims to provide insights on the suitability and usefulness of AR-enhanced gamification in the hard and very demanding environment of a logistics center. In doing so, it will utilize a review of the current state-of-the-art regarding gamification of production and logistics processes, coupled with the results of questionnaire-guided interviews with industry experts, i.e., logisticians, warehouse workers (pickers), and AR software developers. The findings of the proposed research aim to contribute towards a better understanding of AR-enhanced gamification, the organizational change it entails, and the consequences it potentially has for all implicated entities in the often highly standardized and structured work required in the logistics setting. The interpretation of these findings will support logisticians' decisions regarding the introduction of gamification in their logistics processes by providing them with useful insights and guidelines originating from a real-life case study of a large DC operating more than 300 retail outlets in Greece.
Keywords: augmented reality, technology acceptance, warehouse management, vision picking, new forms of work, gamification
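To ground the gamification elements under evaluation (points for objectives, leaderboards, badges), the Python sketch below models them as a minimal data structure; all names, thresholds, and the scoring rule are hypothetical, since the study is still assessing whether such elements belong in the requirements at all.

```python
# Hypothetical sketch of picking gamification: points per picked order line,
# an accuracy bonus, threshold badges, and a leaderboard.
from dataclasses import dataclass, field

@dataclass
class Picker:
    name: str
    points: int = 0
    badges: set = field(default_factory=set)

class PickingGame:
    BADGES = {100: "Centurion", 500: "Warehouse Hero"}  # assumed award thresholds

    def __init__(self):
        self.pickers = {}

    def record_pick(self, name, lines_picked, error_free=True):
        p = self.pickers.setdefault(name, Picker(name))
        p.points += lines_picked * (2 if error_free else 1)  # bonus for error-free picks
        for threshold, badge in self.BADGES.items():
            if p.points >= threshold:
                p.badges.add(badge)

    def leaderboard(self):
        return sorted(self.pickers.values(), key=lambda p: -p.points)

game = PickingGame()
game.record_pick("picker_a", 60)
game.record_pick("picker_b", 45, error_free=False)
for p in game.leaderboard():
    print(p.name, p.points, sorted(p.badges))
```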
Procedia PDF Downloads 150
61 Globalisation and Diplomacy: How Can Small States Improve the Practice of Diplomacy to Secure Their Foreign Policy Objectives?
Authors: H. M. Ross-McAlpine
Abstract:
Much of what is written on diplomacy, globalization, and the global economy addresses the changing nature of relationships between major powers. While the most dramatic and influential changes have resulted from these developing relationships, the world is not, on deeper inspection, governed neatly by a few major powers. Due to advances in technology, the shifting balance of power, and a changing geopolitical order, small states have the ability to exercise greater influence than ever before. Increasingly interdependent and ever more complex, our world is too delicate to be handled by a mighty few. The pressure of global change requires small states to adapt their diplomatic practices and diversify their strategic alliances and relationships. The nature and practice of diplomacy must be re-evaluated in light of the pressures resulting from globalization. This research examines how small states can best secure their foreign policy objectives. Small state theory is used as a foundation for exploring the case study of New Zealand. The research draws on secondary sources to evaluate the existing theory in relation to modern practices of diplomacy. As New Zealand lacks the economic and military power required to play an active, influential role in international affairs, what strategies does it use to exert influence? Furthermore, New Zealand lies in a remote corner of the Pacific and is geographically isolated from its nearest neighbors; how does this affect its security and trade priorities? The findings note a significant shift since the 1970s in New Zealand’s diplomatic relations. This shift is arguably a direct result of globalization, regionalism, and a growing independence from traditional bilateral relationships. The need to source predictable trade, investment, and technology is an essential driving force for New Zealand’s diplomatic relations. A lack of hard power aligns New Zealand’s prosperity with a secure, rules-based international system that increases the likelihood of a stable and secure global order. New Zealand’s diplomacy and prosperity have been intrinsically reliant on its reputation. A vital component of New Zealand’s diplomacy is preserving a reputation for integrity and global responsibility. It is the use of this soft power that facilitates the influence New Zealand enjoys on the world stage. To weave a comprehensive network of successful diplomatic relationships, New Zealand must maintain a reputation of international credibility. Globalization has substantially influenced the practice of diplomacy for New Zealand. The current world order places economic and military might in the hands of a few, subsequently requiring smaller states to use other means to secure their interests. There are clear strategies evident in New Zealand’s diplomatic practice that draw attention to how other smaller states might best secure their foreign policy objectives. While these findings are limited, as with all case study research, there is value in applying them to other small states struggling to secure their interests in the wake of rapid globalization.
Keywords: diplomacy, foreign policy, globalisation, small state
Procedia PDF Downloads 396
60 Navigating Complex Communication Dynamics in Qualitative Research
Authors: Kimberly M. Cacciato, Steven J. Singer, Allison R. Shapiro, Julianna F. Kamenakis
Abstract:
This study examines the dynamics of communication among researchers and participants who have various levels of hearing, use multiple languages, have various disabilities, and come from different social strata. This qualitative methodological study focuses on the strategies employed in an ethnographic research study examining the communication choices of six sets of parents who have Deaf-Disabled children. The participating families varied in their communication strategies and preferences, including the use of American Sign Language (ASL), visual-gestural communication, multiple spoken languages, and pidgin forms of each of these. The research team consisted of two undergraduate students proficient in ASL and a Deaf principal investigator (PI) who uses ASL and speech as his main modes of communication. A third, Hard-of-Hearing undergraduate student fluent in ASL served as an objective facilitator of the data analysis. The team created reflexive journals by audio recording, free writing, and responding to team-generated prompts. They discussed interactions between the members of the research team, their evolving relationships, and various social and linguistic power differentials. The researchers reflected on communication during data collection, their experiences with one another, and their experiences with the participating families. Reflexive journals totaled over 150 pages. The outside research assistant reviewed the journals and developed follow-up open-ended questions and probes to further enrich the data. The PI and the outside research assistant used NVivo qualitative research software to conduct open inductive coding of the data. They individually chunked the data into broad categories through multiple readings and recognized recurring concepts. They compared their categories, discussed them, and decided which they would develop. The researchers continued to read, reduce, and define the categories until they were able to develop themes from the data. The research team found that the various communication backgrounds and skills present greatly influenced the dynamics between the members of the research team and with the participants of the study. Specifically, the following themes emerged: (1) students acted as communication facilitators, while interpreters were barriers to natural interaction; (2) varied language use simultaneously complicated and enriched data collection; and (3) ASL proficiency and professional position resulted in a social hierarchy among researchers and participants. In the discussion, the researchers reflected on their backgrounds, on internal biases in analyzing the data, and on how social norms and expectations affected their perceptions when writing their journals. Through this study, the research team found that communication and language skills require significant consideration when working with multiple and complex communication modes. The researchers had to continually assess and adjust their data collection methods to meet the communication needs of the team members and participants. In doing so, the researchers aimed to create an accessible research setting that yielded rich data but learned that this often required compromises from one or more of the research constituents.
Keywords: American Sign Language, complex communication, deaf-disabled, methodology
Procedia PDF Downloads 118
59 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved over the past years as an important means of data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in different areas of scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and videos, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented in different ways as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; it can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has proved useful for hiding data. However, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations according to the modification of the variances of the vertices' norms. Statistical analyses were performed to establish the proper distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizing approaches were introduced in the realms of mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing
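A much-simplified sketch of the embedding idea, assuming bits are carried by nudging the normalized vertex norms within bins so that each bin's mean shifts relative to its center; unlike the blind scheme described above, this sketch passes the normalization range to the extractor for simplicity, and the bin count and strength are illustrative choices.

```python
# Simplified vertex-norm watermarking: embed one bit per norm bin by shifting the
# bin's normalized norms up (bit 1) or down (bit 0); displacement is purely radial.
import numpy as np

def embed_bits(vertices, bits, strength=0.04):
    center = vertices.mean(axis=0)
    rel = vertices - center
    norms = np.linalg.norm(rel, axis=1)
    lo, hi = norms.min(), norms.max()
    u = (norms - lo) / (hi - lo)                          # normalized norms in [0, 1]
    k = np.minimum((u * len(bits)).astype(int), len(bits) - 1)
    shift = np.where(np.asarray(bits)[k] == 1, strength, -strength)
    new_norms = lo + np.clip(u + shift, 0.0, 1.0) * (hi - lo)
    scale = new_norms / np.maximum(norms, 1e-12)
    return center + rel * scale[:, None], (lo, hi)

def extract_bits(vertices, n_bits, lo, hi):
    center = vertices.mean(axis=0)
    u = (np.linalg.norm(vertices - center, axis=1) - lo) / (hi - lo)
    k = np.minimum(np.clip(u, 0, 1) * n_bits, n_bits - 1).astype(int)
    # bit = 1 if a bin's mean normalized norm sits above the bin center, else 0
    return [int(u[k == j].mean() > (j + 0.5) / n_bits) for j in range(n_bits)]

verts = np.random.rand(5000, 3) * 2 - 1
marked, (lo, hi) = embed_bits(verts, [1, 0, 1, 1])
print(extract_bits(marked, 4, lo, hi))  # expected: [1, 0, 1, 1]
```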
Procedia PDF Downloads 160
58 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence
Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai
Abstract:
Traditional finance theory neglects the role of the sentiment factor in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment affects most those stocks that are vulnerable to speculation, hard to value, and risky to arbitrage. These include small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks, and non-dividend-paying stocks. Since the introduction of the Chicago Board Options Exchange (CBOE) volatility index (VIX) in 1993, it has been used as a measure of future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the 'investors' fear gauge' by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety, and pessimistic expectations of investors about the stock market. On the contrary, low levels of the volatility index reflect a confident and optimistic attitude of investors. Based on the above discussion, we investigate whether market-wide fear levels, measured by the volatility index, are a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama and French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for fearful market-based sentiment indicators, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period that extends from January 2008 to March 2017. To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, changes in the India VIX are included as an explanatory variable in the Fama-French three-factor model as well as the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variables in the asset pricing regressions. The first portfolio set is the 4x4 sorts on size and B/M ratio. The second portfolio set is the 4x4 sorts on size and the sensitivity beta of changes in IVIX. The third portfolio set is the 2x3x2 independent triple sort on size, B/M, and the sensitivity beta of changes in IVIX. We find evidence that size, value, and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation for the current findings of the study.
Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing
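The augmented time-series regression implied here adds the change in the India VIX to the Carhart factors. The Python sketch below shows the estimation on synthetic monthly data; the column names, factor values, and portfolio return are placeholders to be replaced with the sorted test portfolios and actual factor series.

```python
# Sketch: portfolio excess returns regressed on Carhart factors plus dVIX.
# A significant loading on dVIX would indicate fear-sentiment exposure.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 111  # monthly observations, Jan 2008 - Mar 2017
df = pd.DataFrame({
    "MKT": rng.normal(0.8, 5, n), "SMB": rng.normal(0.2, 2, n),
    "HML": rng.normal(0.3, 2, n), "WML": rng.normal(0.5, 3, n),
    "dVIX": rng.normal(0, 4, n),
})
# placeholder portfolio excess return; replace with a size/BM/IVIX-beta sorted portfolio
df["exret"] = 1.2 * df.MKT + 0.4 * df.SMB - 0.2 * df.dVIX + rng.normal(0, 1, n)

X = sm.add_constant(df[["MKT", "SMB", "HML", "WML", "dVIX"]])
model = sm.OLS(df["exret"], X).fit()
print(model.params.round(3))   # the dVIX coefficient estimates the sentiment beta
print(model.tvalues.round(2))
```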
Procedia PDF Downloads 252
57 Characterisation, Extraction of Secondary Metabolite from Perilla frutescens for Therapeutic Additives: A Phytogenic Approach
Authors: B. M. Vishal, Monamie Basu, Gopinath M., Rose Havilah Pulla
Abstract:
Though there are several methods of synthesizing silver nanoparticles, green synthesis has always had its own appeal. From cost-effectiveness to ease of synthesis, the process is simplified in the best possible way and is one of the most explored topics. This study of extracting secondary metabolites from Perilla frutescens and using them for therapeutic additives has its own significance. Unlike the research done so far, this study aims to synthesize silver nanoparticles from Perilla frutescens using three available forms of the plant: leaves, seeds, and commercial leaf extract powder. Perilla frutescens, commonly known as the 'beefsteak plant', is a perennial plant belonging to the mint family. The plant has two varieties classed within itself: frutescens crispa and frutescens frutescens. The species frutescens crispa (commonly known as 'Shisho' in Japanese) is generally used for edible purposes. Its leaves occur in two forms, varying in color: red with purple streaks, and green with a crinkly pattern. This species is aromatic due to the presence of two major compounds: polyphenols and perillaldehyde. The red (purple-streaked) variety of this plant owes its color to the pigment perilla anthocyanin. The species frutescens frutescens (commonly known as 'Egoma' in Japanese) is the main source of perilla oil. This species is also aromatic, but in this case the major compound that gives the aroma is perilla ketone, or egoma ketone. Shisho grows short compared with wild sesame, and both produce seeds. The seeds of wild sesame are large and soft, whereas those of Shisho are small and hard. The seeds have a large proportion of lipids, around 38-45 percent. Beyond that, the seeds contain a large quantity of omega-3 fatty acids and linoleic acid, an omega-6 fatty acid. In addition, Perilla leaf extract has been used to produce gold and silver nanoparticles. The yield comparison in all the cases was carried out, and the optimal conditions of the process were adjusted with the efficiencies in mind. The characterization of secondary metabolites includes GC-MS and FTIR, which can be used to identify the components that actually help in synthesizing silver nanoparticles. The analysis of the silver was done through a series of characterization tests that include XRD, UV-Vis, EDAX, and SEM. After the synthesis, with use as therapeutic additives in view, toxin analysis was done and the results were tabulated. The synthesis of silver nanoparticles was done in a series of multiple cycles of extraction from leaves, seeds, and commercially purchased leaf extract. Yield and efficiency comparisons were done to bring out the best and cheapest possible way of synthesizing silver nanoparticles using Perilla frutescens. The synthesized nanoparticles can be used in therapeutic drugs, which have a wide range of applications, from burn treatment to cancer treatment. This will, in turn, replace the traditional processes of synthesizing nanoparticles, as this method will prove effective in terms of both cost and environmental impact.
Keywords: nanoparticles, green synthesis, Perilla frutescens, characterisation, toxin analysis
Procedia PDF Downloads 233
56 Design of Smart Catheter for Vascular Applications Using Optical Fiber Sensor
Authors: Lamiek Abraham, Xinli Du, Yohan Noh, Polin Hsu, Tingting Wu, Tom Logan, Ifan Yen
Abstract:
In the field of minimally invasive surgery, smart medical instruments such as catheters and guidewires are typically used at a remote distance to gain access to the diseased artery, often negotiating tortuous, complex, and diseased vessels in the process. Three optical fiber sensors, each 1.5 mm in diameter and spaced 120° apart, are proposed to be mounted in a catheter-based pump device with a diameter of 10 mm. These sensors are configured to address the challenges surgeons face during insertion through curvy major vessels such as the aortic arch, providing information on wall rubbing and shape sensing. This study presents experimental and mathematical models of the optical fiber sensors with 2 degrees of freedom. Two eight-gear-shaped tubes, 3D-printed from thermoplastic polyurethane (TPU), are connected to each other. The optical fiber sensors are mounted inside the first tube, which shields them from external light and serves as a TPU prototype for a catheter. The second tube acts as a flat reflector for the light intensity modulation-based optical fiber sensors. The first tube is attached to a linear guide for insertion and withdrawal and can be manually rotated by 45° via the tube gear. A rigid 3D phantom mimicking the anatomical structure of the aortic arch was developed, in which the tests were carried out. During insertion of the sensors into the 3D phantom, datasets were obtained in terms of voltage, distance, and sensor position. These datasets reflect the light intensity modulation characteristics of the optical fiber sensors with a plane projection of the aortic arch shape. Mathematical modeling of the light intensity was carried out based on the projection plane and the experimental set-up. The performance of the system was evaluated in terms of its accuracy in navigating the curvature and reporting sensor position by investigating 40 single insertions of the sensors into the 3D phantom. The experiments demonstrated that the sensors were effectively steered through the 3D phantom curvature to the desired target references in both degrees of freedom. The behavior of the sensors follows the theory of light reflectance: the smaller the radius of curvature, the more of the LED light is reflected and received by the photodiode. The mathematical model results are in good agreement with the experimental results and with the operating principle of the light intensity modulation of the optical fiber sensors. A prototype catheter made of TPU with three optical fiber sensors mounted inside has been developed that is capable of navigating different radii of curvature with 2 degrees of freedom. The proposed system supports operators with pre-scan data to make maneuvering and bending through curvy major vessels easier, more accurate, and safer. Keywords: intensity-modulated optical fiber sensor, mathematical model, plane projection, shape sensing
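How three readings spaced 120° apart resolve a 2-DOF bend can be sketched as follows; the intensity model, gains, and numbers are illustrative assumptions, not measurements from the actual device:

```python
# A minimal sketch, assuming each sensor's normalized intensity rises as the local
# fiber-to-reflector gap closes on the inside of a bend (sensors at 0, 120, 240 deg).
import math

def bend_vector(i1, i2, i3):
    """Combine three normalized intensity readings into a 2-DOF bend estimate:
    direction (degrees) and relative magnitude."""
    angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
    x = sum(i * math.cos(a) for i, a in zip((i1, i2, i3), angles))
    y = sum(i * math.sin(a) for i, a in zip((i1, i2, i3), angles))
    direction = math.degrees(math.atan2(y, x)) % 360
    magnitude = math.hypot(x, y)  # larger when the bend radius is smaller
    return direction, magnitude

print(bend_vector(0.80, 0.55, 0.52))  # bend roughly toward the 0-degree sensor
```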
Procedia PDF Downloads 252
55 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration
Authors: S. J. Addinell, T. Richard, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 metres. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. The paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power-law relationship for the particle size distributions. Several percussive drilling parameters, such as RPM, applied fluid pressure, and weight on bit, have been shown to influence the particle size distributions of the cuttings generated. This directly influences other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of the percussive system's operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper shows that drill bit design, drill bit wear, and the applied weight on bit can also influence the oscillation frequency. Because drilling conditions, and therefore operating parameters, change continuously, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. Several techniques for determining the oscillating frequency have been investigated and are presented. With a conventional top drive drilling rig, spectral analysis of the applied fluid pressure, hydraulic feed force pressure, hold-back pressure, and drill string vibrations has revealed the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which implements a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must therefore be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig. Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis
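The spectral approach described above can be sketched as follows; the 55 Hz hammer frequency, sampling rate, and noise level are illustrative assumptions, not values from the field program:

```python
# Minimal sketch: estimate the hammer's oscillation frequency as the dominant
# spectral peak in a geophone (or pressure) trace.
import numpy as np

fs = 2000.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 5, 1 / fs)      # 5 s record
signal = np.sin(2 * np.pi * 55.0 * t) + 0.8 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"estimated percussive frequency: {peak:.1f} Hz")
```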
Procedia PDF Downloads 230
54 New Findings on the Plasma Electrolytic Oxidation (PEO) of Aluminium
Authors: J. Martin, A. Nominé, T. Czerwiec, G. Henrion, T. Belmonte
Abstract:
Plasma electrolytic oxidation (PEO) is a particular electrochemical process for producing protective oxide ceramic coatings on light-weight metals (Al, Mg, Ti). When applied to aluminium alloys, the resulting PEO coatings exhibit improved wear and corrosion resistance because thick, hard, compact, and adherent crystalline alumina layers can be achieved. Several investigations have been carried out to improve the efficiency of the PEO process, and one particular route consists of tuning the electrical regime. Despite the considerable interest in this process, there is still no clear understanding of the underlying discharge mechanisms that make metal oxidation possible through ceramic layers up to hundreds of µm thick. A key feature of the PEO process is the numerous short-lived micro-discharges (micro-plasmas in liquid) that occur continuously over the processed surface when the high applied voltage exceeds the critical dielectric breakdown value of the growing ceramic layer. By using a bipolar pulsed current to supply the electrodes, we previously observed that micro-discharges are delayed with respect to the rising edge of the anodic current. Nevertheless, the origin of this phenomenon remains unclear and requires more systematic investigation. The aim of the present communication is to identify the relationship between this delay and the mechanisms responsible for oxide growth. For this purpose, the delay of micro-discharge ignition is investigated as a function of various electrical parameters, such as the current density (J), the current pulse frequency (F), and the anodic-to-cathodic charge quantity ratio (R = Qp/Qn) delivered to the electrodes. The PEO process was conducted on Al2214 aluminium alloy substrates in a solution containing potassium hydroxide [KOH] and sodium silicate diluted in deionized water. The light emitted from micro-discharges was detected by a photomultiplier, and the micro-discharge parameters (number, size, lifetime) were measured during the process by means of ultra-fast video imaging (125 kfr./s). SEM observations and roughness measurements were performed to characterize the morphology of the elaborated oxide coatings, while XRD was carried out to evaluate the amount of the corundum α-Al2O3 phase. Results show that, whatever the applied current waveform, the delay of micro-discharge appearance increases as the process goes on. Moreover, the delay is shorter when the current density J (A/dm2), the current pulse frequency F (Hz), and the charge quantity ratio R are high. It also appears that shorter delays are associated with stronger micro-discharges (localized, long, and large micro-discharges), which have a detrimental effect on the elaborated oxide layers (thin and porous). On the basis of these results, a model for the growth of the PEO oxide layers will be presented and discussed. The experimental results support a mechanism of electrical charge accumulation at the oxide surface/electrolyte interface until dielectric breakdown occurs and micro-discharges appear. Keywords: aluminium, micro-discharges, oxidation mechanisms, plasma electrolytic oxidation
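Extraction of the ignition delay can be sketched as follows; the pulse timing, thresholds, and synthetic traces are illustrative assumptions, not the actual acquisition code:

```python
# Minimal sketch: the delay is the time between the rising edge of the anodic
# current pulse and the first photomultiplier (PMT) sample above a light threshold.
import numpy as np

fs = 1e6                                   # 1 MHz acquisition (assumed)
t = np.arange(0, 0.01, 1 / fs)             # one 10 ms period (100 Hz pulse, assumed)
current = (t > 0.001).astype(float)        # anodic edge at 1 ms
pmt = np.where(t > 0.0034, 1.0, 0.0)       # light appears 2.4 ms after the edge
pmt += 0.02 * np.random.randn(t.size)      # detector noise

edge = t[np.argmax(current > 0.5)]         # rising edge of anodic current
ignition = t[np.argmax(pmt > 0.5)]         # first light emission
print(f"micro-discharge ignition delay: {(ignition - edge) * 1e3:.2f} ms")
```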
Procedia PDF Downloads 264
53 Screening of Freezing Tolerance in Eucalyptus Genotypes (Eucalyptus spp.) Using Chlorophyll Fluorescence, Ionic Leakage, Proline Accumulation and Stomatal Density
Authors: S. Lahijanian, M. Mobli, B. Baninasab, N. Etemadi
Abstract:
Low-temperature extremes are among the major stresses that adversely affect plant growth and productivity. Cold stress causes oxidative stress as well as physiological, morphological, and biochemical changes in plant cells. Like salinity and drought, low temperatures exert their negative effects mainly by disrupting the ionic and osmotic equilibrium of plant cells. Changes in climatic conditions leading to more frequent extremes will require adapted crop species on a larger scale in order to sustain agricultural production. Eucalyptus is a diverse genus of flowering trees (and a few shrubs) in the myrtle family, Myrtaceae. Members of this genus dominate the tree flora of Australia. The genus contains more than 580 species and a large number of cultivars, which are native to Australia. The wide distribution and diversity of compatible eucalyptus cultivars reflect the ecological flexibility of the genus. Some eucalyptus cultivars can withstand harsh environmental conditions, such as high and low temperature, salinity, high pH, drought, chilling, and freezing, which severely affect crops of tropical and subtropical origin. In this study, we evaluated the freezing tolerance of 12 eucalyptus genotypes by means of four different morphological and physiological methods: chlorophyll fluorescence, electrolyte leakage, proline accumulation, and stomatal density. The studied genotypes were Eucalyptus camaldulensis, E. coccifera, E. dalrympleana, E. erythrocorys, E. glaucescens, E. globulus, E. gunnii, E. macrocarpa, E. microtheca, E. rubida, E. tereticornis, and E. urnigera. Except for the stomatal density recording, in all methods the plants were exposed to five gradual temperature drops of 0, -5, -10, -15, and -20 °C and were held at each temperature for at least one hour. The chlorophyll fluorescence experiment showed that E. erythrocorys and E. camaldulensis were the most resistant genotypes, while E. gunnii and E. coccifera were more sensitive than the other genotypes to the effects of freezing stress. In the electrolyte leakage experiment, given the significant interaction between cultivar and temperature, E. erythrocorys and E. macrocarpa were shown to be the most tolerant genotypes, while E. gunnii, E. urnigera, E. microtheca, and E. tereticornis, with higher ionic leakage percentages, proved more sensitive to low temperatures. The proline experiment confirmed that the genotype most resistant to freezing stress is E. erythrocorys. In the stomatal density experiment, the stomata within the microscopic field (0.0605 mm2) were counted; E. erythrocorys and E. macrocarpa had the maximum, and E. coccifera and E. dalrympleana the minimum, number of stomata per microscopic field. In conclusion, E. erythrocorys was identified as the most tolerant genotype, while E. gunnii was classified as the most freezing-susceptible genotype in this investigation. Further, no notable correlation was found between stomatal density and the other cold stress measures. Keywords: chlorophyll fluorescence, cold stress, ionic leakage, proline, stomatal density
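The ionic leakage index used above can be sketched with the standard two-reading formula; the protocol variant and the readings below are assumptions for illustration, since the abstract does not state which variant was applied:

```python
# A minimal sketch of the electrolyte leakage index: EC1 is the conductivity
# after incubating stressed leaf tissue in deionized water, EC2 the total
# conductivity after boiling/autoclaving the same sample.
def electrolyte_leakage(ec1_uS_cm: float, ec2_uS_cm: float) -> float:
    """Return ionic leakage as a percentage of total electrolytes."""
    return 100.0 * ec1_uS_cm / ec2_uS_cm

# Hypothetical readings for a tolerant vs. a sensitive genotype at -10 °C:
print(electrolyte_leakage(120.0, 480.0))  # 25.0 % -> relatively tolerant
print(electrolyte_leakage(390.0, 480.0))  # 81.3 % -> relatively sensitive
```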
Procedia PDF Downloads 265
52 The Impact of Developing an Educational Unit in the Light of Twenty-First Century Skills in Developing Language Skills for Non-Arabic Speakers: A Proposed Program for Application to Students of Educational Series in Regular Schools
Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla
Abstract:
The era of the knowledge explosion in which we live requires us to develop educational curricula quantitatively and qualitatively to adapt to the twenty-first-century skills of critical thinking, problem-solving, communication, cooperation, creativity, and innovation. The process of developing a curriculum is as significant as building it; in fact, developing curricula may be more difficult than building them. Curriculum development includes analyzing needs, setting goals, designing the content and educational materials, creating language programmes, developing teachers, implementing the programmes in schools, monitoring and feedback, and then evaluating the language programme resulting from these processes. Looking back at the history of language teaching during the twentieth century, we find that developing the delivery method is the most crucial aspect of change in language teaching doctrines. The concept of a delivery method in teaching is a systematic set of teaching practices based on a specific theory of language acquisition. This is a key consideration, as the development process must include all the curriculum elements in their comprehensive sense, both linguistic and non-linguistic. The various Arabic curricula provide the student with a set of units, each consisting of a set of linguistic elements. These elements are often not logically arranged and, more importantly, they neglect essential points while highlighting other, less important ones. Moreover, the educational curricula present content with a great deal of monotony, which makes it hard for the teacher to select adequate content; the teacher often navigates among diverse references to prepare a lesson and hardly finds a suitable one. Similarly, the student often gets bored when learning the Arabic language and fails to make considerable progress in it. Therefore, the problem is not a lack of curricula; the problem is developing the curriculum, with all its linguistic and non-linguistic elements, in accordance with contemporary challenges and standards for teaching foreign languages. The Arabic library suffers from a lack of references on curriculum development. In this paper, the researcher investigates the elements of development, such as the teacher, content, methods, objectives, evaluation, and activities. Hence, a set of general guidelines in the field of educational development was reached. The paper highlights the need to identify weaknesses in educational curricula, to decide which twenty-first-century skills must be employed in Arabic education curricula, and to employ foreign language teaching standards in current Arabic curricula. The researcher assumes that the series for teaching Arabic to speakers of other languages in regular schools do not address the skills of the twenty-first century, which is what the researcher attempts to apply in the proposed unit. This study uses the experimental method, based on two groups: experimental and control. The development of an educational unit will help build suitable educational series for students of the Arabic language in regular schools, in which twenty-first-century skills and standards for teaching foreign languages are addressed and which are more useful and attractive to students. Keywords: curriculum, development, Arabic language, non-native, skills
Procedia PDF Downloads 84
51 The Role of Metaheuristic Approaches in Engineering Problems
Authors: Ferzat Anka
Abstract:
Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required for solving the complex and global engineering problems we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve. Such problems are called NP-hard (non-deterministic polynomial-time hard) in the literature. The main reasons for recommending metaheuristic algorithms for various problems are their use of simple concepts, simple mathematical equations and structures, and non-derivative mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices, and the approach can also be used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach. It is based on chaos theory, which helps the algorithm improve population diversity and convergence speed. The starting point is the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA distinguishes four types of chimpanzee groups: attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivations of chimpanzees. However, the algorithm falls short in convergence rate and in escaping local optimum traps when solving high-dimensional problems. Although ChOA and some of its variants use strategies to overcome these problems, these are observed to be insufficient. Therefore, this study describes a newly expanded variant. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming for success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than, or on par with, the compared algorithms. Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems
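A chaos-enhanced position update of the kind described can be sketched as follows; the logistic map and the ChOA/GWO-style update form are assumptions for illustration, since the abstract does not specify the exact chaotic map or hybrid model used in Ex-ChOA:

```python
# Minimal sketch: replace the usual uniform random coefficients in a
# leader-following position update with chaotic ones to improve diversity.
import numpy as np

def logistic_map(x: float, r: float = 4.0) -> float:
    """Fully chaotic for r = 4 with x in (0, 1), excluding fixed points."""
    return r * x * (1.0 - x)

def chaotic_update(position, leader, chaos_state):
    """Move a search agent toward a leader using chaotic coefficients."""
    chaos_state = logistic_map(chaos_state)
    a = 2.0 * chaos_state - 1.0            # coefficient in [-1, 1]
    c = 2.0 * logistic_map(chaos_state)    # coefficient in [0, 2]
    d = np.abs(c * leader - position)      # distance to the leader
    return leader - a * d, chaos_state

pos, chaos = np.array([0.5, -1.2]), 0.7
pos, chaos = chaotic_update(pos, np.array([1.0, 0.0]), chaos)
print(pos)
```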
Procedia PDF Downloads 77
50 Design and Synthesis of an Organic Material with High Open Circuit Voltage of 1.0 V
Authors: Javed Iqbal
Abstract:
The growing energy needs of human society and the depletion of conventional energy sources demand a renewable, safe, infinite, low-cost, and omnipresent energy source. One of the most suitable ways to solve the foreseeable world energy crisis is to use the power of the sun. Photovoltaic devices are of especially wide interest as they convert solar energy into electricity. Currently, the best-performing solar cells are silicon-based. However, silicon cells are expensive, rigid in structure, and have long payback times in terms of cost and energy. Organic photovoltaic cells are cheap, flexible, and can be manufactured in a continuous process; they are therefore an extremely favorable alternative. Organic photovoltaic cells absorb sunlight and convert it into electricity through the use of conductive polymers or small molecules to separate electrons and holes. A major challenge for these new organic photovoltaic cells is efficiency, which is low compared with traditional silicon solar cells. To overcome this challenge, two straightforward strategies are usually considered: (1) reducing the band gap of molecular donors to broaden the absorption range, which results in a higher short-circuit current density (JSC), and (2) lowering the highest occupied molecular orbital (HOMO) energy of molecular donors so as to increase the open-circuit voltage (VOC) of devices. Keeping in mind the cost of chemicals, it is hard to try many materials on a trial basis; the best way is to screen suitable materials beforehand. For this purpose, we use a computational approach to design molecules based on our organic chemistry knowledge and determine their physical and electronic properties. In this study, we performed DFT calculations with different options to target a high open-circuit voltage and, after obtaining suitable data from the calculations, synthesized a novel D–π–A–π–D type low-band-gap small-molecule donor material (ZOPTAN-TPA). The arylene-vinylene-based bis(arylhalide) unit containing a cyanostilbene moiety acts as a low-band-gap electron-accepting block and is coupled with triphenylamine electron-donating blocks. The motivation for choosing triphenylamine (TPA) as the capping donor lies in its important role in stabilizing the hole separated from an exciton and thus improving the hole-transporting properties of the material. A π-bridge (thiophene) is inserted between the donor and acceptor units to reduce the steric hindrance between them and to improve the planarity of the molecule. The ZOPTAN-TPA molecule features a deep HOMO level of 5.2 eV below vacuum and an optical energy gap of 2.1 eV. Champion OSCs based on a solution-processed and non-annealed active-material blend of [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) and ZOPTAN-TPA in a mass ratio of 2:1 exhibit a power conversion efficiency of 1.9% and a high open-circuit voltage of over 1.0 V. Keywords: high open circuit voltage, donor, triphenylamine, organic solar cells
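The reported figures are tied together by the standard efficiency relation PCE = JSC x VOC x FF / Pin; in the sketch below, VOC (1.0 V) and PCE (1.9%) come from the abstract, while the JSC/FF split is a hypothetical combination consistent with them:

```python
# Minimal sketch of the power conversion efficiency relation for a solar cell.
def pce_percent(jsc_mA_cm2: float, voc_V: float, ff: float,
                pin_mW_cm2: float = 100.0) -> float:
    """PCE in % under AM1.5G illumination (Pin = 100 mW/cm^2)."""
    return 100.0 * (jsc_mA_cm2 * voc_V * ff) / pin_mW_cm2

# Hypothetical Jsc/FF pair reproducing the reported 1.9 % at Voc = 1.0 V:
print(pce_percent(jsc_mA_cm2=4.75, voc_V=1.0, ff=0.40))  # -> 1.9
```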
Procedia PDF Downloads 241
49 Superparamagnetic Core Shell Catalysts for the Environmental Production of Fuels from Renewable Lignin
Authors: Cristina Opris, Bogdan Cojocaru, Madalina Tudorache, Simona M. Coman, Vasile I. Parvulescu, Camelia Bala, Bahir Duraki, Jeroen A. Van Bokhoven
Abstract:
The tremendous achievements in the development of society, embodied in ever more sophisticated materials and systems, are largely based on non-renewable resources. Consequently, after more than two centuries of intensive development, we are faced with, among other issues, decreasing fossil fuel reserves, an increased impact of greenhouse gases on the environment, and economic effects caused by fluctuations in oil and mineral resource prices. The use of biomass may solve part of these problems, and recent analyses have demonstrated that, from the perspective of reducing carbon dioxide emissions, its valorization may bring important advantages, provided that genetically modified fast-growing trees or wastes are used as primary sources. In this context, the abundance and complex structure of lignin offer various possibilities for exploitation. However, its transformation into fuels or chemicals involves complex chemistry, requiring the cleavage of C-O and C-C bonds and the alteration of functional groups. Chemistry has offered various solutions in this regard, but despite intense work, many drawbacks still limit industrial application. The proposed technologies have mainly considered homogeneous catalysts, i.e., expensive noble-metal-based systems that are hard to recover at the end of the reaction. Also, the reactions were carried out in organic solvents, which are no longer acceptable from an environmental point of view. To avoid these problems, the concept of this work was to investigate the synthesis of superparamagnetic core-shell catalysts for the fragmentation of lignin directly in the aqueous phase. The magnetic nanoparticles were covered with a nanoshell of an oxide (niobia) with a double role: to protect the magnetic nanoparticles and to generate a proper (acidic) catalytic function; on this composite, cobalt nanoparticles were deposited in order to catalyze C-C bond splitting. To this end, we developed a protocol to prepare multifunctional, magnetically separable nanocomposite Co@Nb2O5@Fe3O4 catalysts. We also established an analytical protocol for the identification and quantification of the fragments resulting from lignin depolymerization in both the liquid and the solid phase. The fragmentation of various lignins occurred on the prepared materials in high yields and with very good selectivity toward the desired fragments. Optimization of the catalyst composition indicated a cobalt loading of 4 wt% as optimal. Working at 180 °C and 10 atm H2, this catalyst allowed a lignin conversion of up to 60%, leading to a mixture containing over 96% C20-C28 and C29-C37 fragments, which were then completely fragmented to C12-C16 in a second stage. The investigated catalysts were completely recyclable, and no leaching of their constituent elements was detected by inductively coupled plasma optical emission spectrometry (ICP-OES). Keywords: superparamagnetic core-shell catalysts, environmental production of fuels, renewable lignin, recyclable catalysts
Procedia PDF Downloads 328
48 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers While Construction of Tunnel Shaft
Authors: Saurabh Sharma
Abstract:
Introduction: This case study deals with the ground stabilisation of Nabi Karim Metro Station in Delhi, India, where extremely complex geology was encountered while excavating the tunnelling shaft for launching the Tunnel Boring Machine. The borehole log investigation and the Seismic Refraction Technique (SRT) indicated the presence of an extremely hard rock mass from a depth of only 3-4 m, and accordingly the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3 m onwards and of Grade-III and better rock from 5-6 m onwards. It was therefore planned to retain the ground with secant piles all around the launching shaft and then to excavate the shaft vertically, leaving a berm of 1.5 m to prevent the secant piles from becoming exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing was proposed, which is normal practice in such strata. However, as the excavation deepened, the rock quality kept decreasing at an unexpected and surprising pace, with the Grade-III rock mass at 5-6 m giving way to a conglomerate formation at a depth of 15 m. Such a deterioration of the geology, from high-grade rock to a slushy conglomerate formation, could hardly have been predicted and came as a surprise even to the best geotechnical engineers. Since the excavation had already been cut vertically to maintain the shaft size, execution continued with enhanced caution to stabilise the side slopes. But when the shaft work was about to finish, a collapse occurred on one side of the excavation shaft. This collapse was unexpected, since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter, and depth of the rock bolts had already been readjusted to accommodate the rock fractures. This scenario baffled even the best geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme would have to be designed to ensure the safe completion of the works. Accordingly, the following revisions to the excavation scheme were made: the excavation would be carried out while maintaining a slope based on the type of soil/rock; the rock bolt type was changed from SN rock bolts to self-drilling anchors; the grid size of the bolts was changed based on real-time assessment; the excavation was carried out by implementing a 'Bench Release Approach'; and an aggressive real-time instrumentation scheme was adopted. Discussion: The above case study again asserts the vital importance of correctly interpreting the geological strata and of revising construction schemes in real time based on actual site data. The excavation is being completed successfully with the above revised scheme, and further details of the revised slope stabilisation scheme, the instrumentation schemes, and the monitoring results, along with actual site photographs, shall form part of the final paper. Keywords: unconfined compressive strength (UCS), rock mass rating (RMR), rock bolts, self-drilling anchors, face mapping of rock, secant pile, shotcrete
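For orientation only, the sensitivity of slope stability to the cut angle can be illustrated with the textbook dry infinite-slope model; this is not the design method used on site, and all soil parameters below are hypothetical values for a weak conglomerate:

```python
# Illustrative factor of safety for a dry infinite slope:
# FS = c / (gamma * z * sin(b) * cos(b)) + tan(phi) / tan(b)
import math

def infinite_slope_fs(c_kPa, phi_deg, gamma_kN_m3, depth_m, slope_deg):
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    return (c_kPa / (gamma_kN_m3 * depth_m * math.sin(b) * math.cos(b))
            + math.tan(phi) / math.tan(b))

# Flattening the cut from 70 deg to 45 deg raises the factor of safety:
for slope in (70, 45):
    print(slope, round(infinite_slope_fs(25, 32, 20, 5, slope), 2))
```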
Procedia PDF Downloads 66
47 Modern Detection and Description Methods for Natural Plants Recognition
Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert
Abstract:
'Green planet' is one of Earth's names: the Earth is a terrestrial planet and, in another scientific framing, the fifth largest planet of the solar system. Plants are not distributed constantly and evenly around the world, and even the variation of plant species differs within a single region. The presence of plants is not limited to one field such as botany; they appear in fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence is all the more manifest since no other living species could exist on Earth without plants, which also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, and these roles affect the environment and climate. Plants are the main components of agricultural activities, from which many countries benefit; plants therefore have an impact on the political and economic situations and futures of countries. Because of the importance of plants and their roles, their study is essential in various fields, and their diverse applications warrant attention to their details. Automatic recognition of plants is a novel field that can contribute to other research and future studies. Moreover, plants can survive in different places and regions by means of adaptations, which are their special means of coping with hard living conditions. Weather conditions are among the parameters that affect plant life and plant presence in a given area. Recognition of plants under different weather conditions opens a new window of research in the field; only natural images are usable for considering weather conditions as new factors, so the result will be a generalized and useful system. In order to obtain a general system, the distance from the camera to the plants is considered as another factor, as is the change of light intensity in the environment over the course of the day. Adding these factors poses a huge challenge for building an accurate and reliable system. The development of an efficient plant recognition system is therefore essential. One important component of a plant is the leaf, which can be used to implement automatic plant recognition systems without any human interaction. Given the nature of the images used, a characteristic investigation of the plants was carried out, and leaves were the first characteristics selected as trusted parts. Four different plant species were specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and the implemented system, the image dataset, and the results. The procedure of the algorithm and the classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the scale-invariant feature transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed, and in addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained. Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT
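The detector/descriptor combinations named above can be sketched in OpenCV as follows; the file name is a placeholder, and SIFT requires OpenCV >= 4.4 or the contrib build:

```python
# Minimal sketch of SIFT, FAST-SIFT, and HARRIS-SIFT keypoint extraction.
import cv2

img = cv2.imread("leaf.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder leaf image
sift = cv2.SIFT_create()

# Plain SIFT: detect keypoints and compute descriptors in one pass.
kp_sift, des_sift = sift.detectAndCompute(img, None)

# FAST-SIFT: FAST corners described with SIFT descriptors.
fast = cv2.FastFeatureDetector_create(threshold=25)
kp_fast = fast.detect(img, None)
kp_fast, des_fast = sift.compute(img, kp_fast)

# HARRIS-SIFT: Harris corners converted to keypoints, described with SIFT.
corners = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True)
kp_harris = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in corners]
kp_harris, des_harris = sift.compute(img, kp_harris)

print(len(kp_sift), len(kp_fast), len(kp_harris))
```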
Procedia PDF Downloads 276
46 Scanning Transmission Electron Microscopic Analysis of Gamma Ray Exposed Perovskite Solar Cells
Authors: Aleksandra Boldyreva, Alexander Golubnichiy, Artem Abakumov
Abstract:
Various perovskite materials show surprisingly high resistance to high-energy electrons, protons, and hard ionizing radiation such as X-rays and gamma-rays. This superior radiation hardness makes the family of perovskite semiconductors attractive candidates for single- and multijunction solar cells for the space environment and as X-ray and gamma-ray detectors. One of the methods for studying the radiation hardness of different materials is exposing them to gamma photons with high energies (above 500 keV). Herein, we have explored the recombination dynamics and defect concentration of a mixed-cation mixed-halide perovskite, Cs0.17FA0.83PbI1.8Br1.2, with a 1.74 eV bandgap, after exposure to a gamma-ray source (2.5 Gy/min). We performed an advanced STEM EDX analysis to reveal the different types of defects formed during gamma exposure. It was found that a 10 kGy dose results in a significant improvement of the perovskite crystallinity and a homogeneous distribution of I ions. While the absorber layer withstood the gamma exposure, the hole transport layer (PTAA) as well as the indium tin oxide (ITO) were significantly damaged, which increased the interface recombination rate and reduced the fill factor of the solar cells. Thus, STEM analysis is a powerful technique that can reveal defects formed by gamma exposure in perovskite solar cells. Methods: Data will be collected from perovskite solar cells (PSCs) and thin films exposed to a gamma irradiator. For the thin films, 50 μL of the Cs0.17FA0.83PbI1.8Br1.2 solution in DMF was deposited (dynamically) at 3000 rpm, followed by quenching with 100 μL of ethyl acetate (dropped 10 s after the perovskite precursor) applied at the same spin-coating frequency. The deposited Cs0.17FA0.83PbI1.8Br1.2 films were annealed for 10 min at 100 °C, which led to the development of a dark brown color. For the solar cells, a 10% suspension of SnO2 nanoparticles (Alfa Aesar) was deposited at 4000 rpm, followed by annealing in air at 170 °C for 20 min. Next, the samples were introduced into a nitrogen glovebox for the deposition of all remaining layers. The perovskite film was applied in the same way as for the thin films described earlier. A solution of poly(triarylamine) (PTAA, Sigma Aldrich; 4 mg in chlorobenzene) was applied at 1000 rpm atop the perovskite layer. Next, 30 nm of VOx was deposited atop the PTAA layer over the whole sample surface using physical vapor deposition (PVD). Silver electrodes (100 nm) were evaporated in high vacuum (10-6 mbar) through a shadow mask, defining the active area of each device as ~0.16 cm2. The prepared samples (thin films and solar cells) were packed in Al lamination foil inside an argon glovebox. The sample set consisted of 6 thin films and 6 solar cells, which were exposed to 6, 10, and 21 kGy (2 samples per dose) using a 137Cs gamma-ray source (E = 662 keV) at a dose rate of 2.5 Gy/min. The exposed samples will be studied with a focused ion beam (FIB) on a dual-beam scanning electron microscope (ThermoFisher Helios G4 Plasma FIB UXe) operating with xenon plasma. Keywords: perovskite solar cells, transmission electron microscopy, radiation hardness, gamma irradiation
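As a quick sanity check, the exposure times implied by the stated doses and the 2.5 Gy/min dose rate of the 137Cs source can be computed directly:

```python
# Exposure time = total dose / dose rate, for each dose stated in the abstract.
doses_kGy = (6, 10, 21)
rate_Gy_min = 2.5
for d in doses_kGy:
    minutes = d * 1000 / rate_Gy_min
    print(f"{d} kGy -> {minutes:.0f} min ({minutes / 60:.1f} h)")
# 6 kGy -> 2400 min (40.0 h); 10 kGy -> 4000 min (66.7 h); 21 kGy -> 8400 min (140.0 h)
```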
Procedia PDF Downloads 24
45 We Have Never Seen a Dermatologist. Prisons Telederma Project Reaching the Unreachable Through Teledermatology
Authors: Innocent Atuhe, Babra Nalwadda, Grace Mulyowa, Annabella Habinka Ejiri
Abstract:
Background: Atopic dermatitis (AD) is one of the most prevalent and fastest-growing chronic inflammatory skin diseases in African prisons. AD care is limited in Africa due to a lack of information about the disease amongst primary care workers, limited access to dermatologists, a lack of proper training of healthcare workers, and a shortage of appropriate treatments. We designed and implemented the Prisons Telederma project based on the recommendations of the International Society of Atopic Dermatitis. We aimed to: i) increase awareness and understanding of teledermatology among prison health workers; and ii) improve the treatment outcomes of prisoners with atopic dermatitis through increased access to and utilization of consultant dermatologists via teledermatology in Uganda prisons. Approach: We used store-and-forward teledermatology (SAF-TD) to increase access to dermatologist-led care for prisoners and prison staff with AD. We conducted five days of training for prison health workers using an adapted WHO training guide on recognizing neglected tropical diseases through changes on the skin, together with an adapted American Academy of Dermatology (AAD) Childhood AD Basic Dermatology Curriculum designed to help trainees develop a clinical approach to the evaluation and initial management of patients with AD. This training was followed by blended e-learning, webinars facilitated by consultant dermatologists with local knowledge of medication and local practices, apps adjusted for pigmented skin, WhatsApp group discussions, and the sharing of pigmented-skin AD pictures and treatments via Zoom meetings. We hired a team of Ugandan senior consultant dermatologists to draft an iconographic atlas of the main dermatoses in pigmented African skin and shared this atlas with prison health staff as a job aid. We had planned to use the MySkinSelfie mobile phone application to take and share skin pictures of prisoners with AD with consultant dermatologists, who would review the pictures and prescribe appropriate treatment; unfortunately, the National Health Service withdrew the app from the market due to technical issues. We monitored and evaluated treatment outcomes using the Patient-Oriented Eczema Measure (POEM) tool. We held four advocacy meetings to persuade relevant stakeholders to increase the supply and availability of first-line AD treatments, such as emollients, in prison health facilities. Results: We produced the very first iconographic atlas of the main dermatoses in pigmented African skin. We increased: i) the proportion of prison health staff with adequate knowledge of AD and teledermatology from 20% to 80%; ii) the proportion of prisoners with AD reporting improvement in disease severity (POEM scores) from 25% to 35% in one year; iii) the proportion of prisoners with AD seen by a consultant dermatologist through teledermatology from 0% to 20% in one year; and iv) the availability of recommended AD treatments in prison health facilities from 5% to 10% in one year. Our study contributes to the use, evaluation, and verification of teledermatology as a means of increasing access to specialist dermatology services in the most hard-to-reach areas and for vulnerable populations such as prisoners. Keywords: teledermatology, prisoners, reaching, un-reachable
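POEM scoring can be sketched as follows, assuming the published instrument (seven symptom-frequency items, each scored 0-4, giving a 0-28 total); the patient scores below are hypothetical:

```python
# Minimal sketch of Patient-Oriented Eczema Measure (POEM) scoring and banding.
POEM_BANDS = [(2, "clear/almost clear"), (7, "mild"), (16, "moderate"),
              (24, "severe"), (28, "very severe")]

def poem_score(item_scores):
    """Sum seven 0-4 item scores and map the total to a severity band."""
    assert len(item_scores) == 7 and all(0 <= s <= 4 for s in item_scores)
    total = sum(item_scores)
    band = next(label for limit, label in POEM_BANDS if total <= limit)
    return total, band

# Hypothetical prisoner assessed before and after teledermatology-guided care:
print(poem_score([3, 3, 2, 2, 3, 2, 3]))  # (18, 'severe')
print(poem_score([1, 2, 1, 1, 1, 0, 1]))  # (7, 'mild')
```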
Procedia PDF Downloads 101
44 Barriers to Business Model Innovation in the Agri-Food Industry
Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm BjöRklund, Maya Hoveskog, Per-Ola Ulvenblad
Abstract:
The importance of business model innovation (BMI) is widely recognized. This is also valid for firms in the agri-food industry, which is closely connected to global challenges: worldwide food production will have to increase by 70% by 2050, and the United Nations' sustainable development goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry have opportunities to increase their competitive advantage through BMI. However, the process of BMI is complex, and the implementation of new business models is associated with a high degree of risk and failure. Thus, managers from all industries, as well as scholars, need to better understand how to address this complexity. Therefore, the research presented in this paper (i) explores the different categories of barriers identified in the research literature on business models in the agri-food industry, and (ii) illustrates these categories of barriers with empirical cases. This study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of 'BM' or 'BMI' with agriculture-related and food-related terms (e.g., 'agri-food sector'), published in the period 1990-2014. The study classifies the barriers into several categories and illustrates the identified barriers with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action; what the public mind considers a barrier can in reality be very different from the actual barrier that needs to be challenged. One way of addressing barriers to growth is to define them according to their origin (internal/external) and nature (tangible/intangible). The framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industrial or national level (external, addressing environmental conditions). Tangible barriers can include asset shortages in the area of equipment or facilities, while human resource deficiencies or a negative attitude towards growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI, which has identified human factor barriers (individuals' attitudes, histories, etc.), contextual barriers related to company and industry settings, and more abstract barriers (government regulations, value chain position, and weather). However, human factor barriers (and opportunities) related to family-owned businesses with idealistic values and attitudes, often owning the real estate where the business is situated, are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of the barriers to BMI and illustrating them with empirical cases. We argue that internal barriers, such as human factors, values, and attitudes, are crucial to overcome in order to develop BMI; indeed, they can be as hard to overcome as institutional barriers such as government regulations. The implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-food firms. Keywords: agri-food, barriers, business model, innovation
Procedia PDF Downloads 233