Search results for: hard ferrite
67 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System
Authors: Masoud Mirzaee, Ghobad Behzadi Pour
Abstract:
An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories are filled with compressed nitrogen to the recommended pressure; they support the aircraft’s weight on the ground, provide a means of controlling the aircraft during taxi, takeoff, and landing, and supply traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. When the ambient temperatures at the origin and destination airports differ, the tire should be adjusted and inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal 5 percent over-inflation limit that applies at constant ambient temperature, is required so that the inflation pressure continues to support the load of the specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Human error in the aviation industry imposes exorbitant costs on airlines for consumable parts such as aircraft tires. An intelligent system that adjusts aircraft tire pressure based on the weight, load, temperature, and weather conditions of the origin and destination airports could significantly reduce aircraft maintenance and fuel costs and mitigate the associated air pollution. An intelligent tire pressure regulation system (ITPRS) consists of a processing computer, an 1,800 psi nitrogen bottle, and distribution lines.
The nitrogen bottle’s inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Nitrogen is controlled and monitored by a computer, which adjusts the pressure according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human error, material consumption, and the stresses imposed on the aircraft body.
Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure
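The temperature-driven adjustment the abstract describes can be illustrated with a minimal sketch, assuming constant tire volume so that pressure scales with absolute temperature (Gay-Lussac's law); the function names, pressure values, and the 5 percent threshold below are illustrative, not part of the ITPRS design:

```python
def adjusted_pressure(p_origin_psi, t_origin_c, t_dest_c):
    """Pressure the tire will reach at the destination ambient temperature,
    assuming constant volume (Gay-Lussac's law: P proportional to T)."""
    return p_origin_psi * (t_dest_c + 273.15) / (t_origin_c + 273.15)

def needs_adjustment(p_rated_psi, p_dest_psi, limit=0.05):
    """True if the destination pressure deviates from the rated operating
    pressure by more than the (illustrative) 5 percent limit."""
    return abs(p_dest_psi - p_rated_psi) / p_rated_psi > limit

# A tire serviced to 200 psi at a 20 C airport, flown to a -20 C airport:
p_dest = adjusted_pressure(200.0, 20.0, -20.0)
print(round(p_dest, 1), needs_adjustment(200.0, p_dest))
```

In this sketch the cold destination leaves the tire well below its rated pressure, which is exactly the under-inflation case the abstract says the system must correct.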
Procedia PDF Downloads 249
66 Characterization of Alloyed Grey Cast Iron Quenched and Tempered for a Smooth Roll Application
Authors: Mohamed Habireche, Nacer E. Bacha, Mohamed Djeghdjough
Abstract:
In the brick industry, smooth double-roll crushers are used for medium and fine crushing of soft to medium-hard material. Due to the opposite inward rotation of the rolls, the feed material is nipped between the rolls and crushed by compression. The rolls are subject to intense wear, known as three-body abrasion, due to the action of abrasive products. The production downtime affecting productivity stems from two sources: the bi-monthly rectification of the roll crushers and their replacement when they are completely worn out. Choosing the right material for the roll crushers should result in longer machine cycles and reduced repair and maintenance costs. All roll crushers are imported from outside Algeria. This results in sometimes very long delivery times, which handicap the brickyards, in particular in respecting delivery times and honoring the orders placed by customers. The aim of this work is to investigate the effect of alloying additions on the microstructure and wear behavior of grey lamellar cast iron for smooth roll crushers in the brick industry. The base grey iron was melted in a low-frequency induction furnace at a temperature of 1500 °C, in which return cast iron scrap, new cast iron ingot, and steel scrap were added to the melt to generate the desired composition. The chemical analysis of the bar samples was carried out using an Emission Spectrometer Systems PV 8050 Series (Philips), except for carbon, for which an Elementrac CS-i carbon/sulphur analyser was used. The unetched microstructure was used to evaluate the graphite flake morphology using the image comparison measurement method. At least five different fields were selected for quantitative estimation of phase constituents. The samples were observed at 100X magnification with a Zeiss Axiover T40 MAT optical microscope equipped with a digital camera. An SEM equipped with EDS was used to characterize the phases present in the microstructure.
The hardness (750 kg load, 5 mm diameter ball) was measured with a Brinell testing machine for both heat-treated and as-solidified condition test pieces. The test bars were used for tensile strength and metallographic evaluations. Mechanical properties were evaluated using tensile specimens made as per ASTM E8 standards. Two specimens were tested for each alloy, and from each rod a test piece was made for the tensile test. The results showed that the quenched and tempered alloys had the best wear resistance at 400 °C for alloyed grey cast iron (containing 0.62% Mn, 0.68% Cr, and 1.09% Cu) due to fine carbides in the tempered matrix. In the quenched and tempered condition, increasing the Cu content in cast irons improved wear resistance moderately. Combined addition of Cu and Cr increases hardness and wear resistance for a quenched and tempered hypoeutectic grey cast iron.
Keywords: casting, cast iron, microstructure, heat treating
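As a side illustration of the hardness test quoted above (750 kg load, 5 mm ball), the standard Brinell formula can be sketched as follows; the indentation diameter used in the example is an assumed value, not a measurement from the paper:

```python
import math

# Brinell hardness number: HB = 2F / (pi * D * (D - sqrt(D^2 - d^2))),
# with F the load in kgf, D the ball diameter and d the measured
# indentation diameter, both in mm.
def brinell_hardness(force_kgf, ball_d_mm, indent_d_mm):
    D, d = ball_d_mm, indent_d_mm
    return 2.0 * force_kgf / (math.pi * D * (D - math.sqrt(D * D - d * d)))

# e.g. an assumed 1.8 mm indentation under a 750 kgf load with a 5 mm ball:
print(round(brinell_hardness(750.0, 5.0, 1.8), 1))
```

A larger indentation under the same load yields a lower hardness number, which is why the indentation diameter is the quantity read off the test piece.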
Procedia PDF Downloads 105
65 A Negotiation Model for Understanding the Role of International Law in Foreign Policy Crises
Authors: William Casto
Abstract:
Studies that consider the actual impact of international law upon foreign affairs crises are flawed by an unrealistic model of decision making. The common, unexamined assumption is that a nation has a unitary executive or ruler who considers a wide variety of considerations, including international law, in attempting to resolve a crisis. To the extent that negotiation theory is considered, the focus is on negotiations between or among nations. The unsettling result is a shallow focus that concentrates on each country’s public posturing about international law. The country-to-country model ignores governments’ internal negotiations that lead to their formal position in a crisis. The model for foreign policy crises needs to be supplemented to include a model of internal negotiations. Important foreign policy decisions come from groups within a government: committees, advisers, etc. Within these groups, participants may have differing agendas and resort to international law to bolster their positions. To understand the influence of international law in international crises, these internal negotiations must be considered. These negotiations are crucial to creating a foreign policy agenda or recommendations. External negotiations between the two nations are significant, but the internal negotiations provide a better understanding of the actual influence of international law upon international crises. Discovering the details of specific internal negotiations is quite difficult but not necessarily impossible. The present proposal will use a specific crisis to illustrate the role of international law. In 1861, during the American Civil War, a United States Navy captain stopped a British mail ship and removed two ambassadors of the rebelling southern states. The result was what is commonly called the Trent Affair. In the wake of the captain’s unauthorized and rash action, Great Britain seriously considered going to war against the United States.
A detailed analysis of the Trent Affair is possible using the available and extensive internal British correspondence and memoranda to reach an understanding of the effect of international law upon decision making. The extensive trove of internal British documents is particularly valuable because in 1861 the only effective means of communication was face-to-face or through letters. Telephones did not exist, and travel by horse and carriage was tedious. The British documents tell us how individual participants viewed the process. We can approach an accurate understanding of what actually happened as the British government strove to resolve the crisis. For example, British law officers initially concluded that the American captain’s rash act was permissible under international law. Later, the law officers revised their opinion. A model of internal negotiation is particularly valuable because it strips away nations’ public posturing about disputed international law principles. In internal decision making, there is room for meaningful debate over the relevant principles. This fluid debate shows how international law is used to develop a hard, public bargaining position. The Trent Affair indicates that international law had an actual influence upon the crisis and that the law was not mere window dressing for the government’s public position.
Keywords: foreign affairs crises, negotiation, international law, Trent Affair
Procedia PDF Downloads 127
64 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR
Authors: Ionut Vintu, Stefan Laible, Ruth Schulz
Abstract:
Agricultural robotics has been developing steadily over recent years, with the goals of reducing and even eliminating the pesticides used in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots, capable of driving in fields and performing crop-handling tasks, is for robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers large robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability with respect to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and offers greater flexibility. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated pose of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and deal with exception handling, such as row gaps, which are falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform best, with 68% of rows covered and 25% of plants damaged. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field.
Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% of plants damaged. Future work would focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants when they overlap. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection
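As a rough illustration of the row-detection idea (not the authors' algorithm), the following sketch estimates the dominant row direction of ground-projected LiDAR points with PCA and groups the points into rows by their perpendicular offset from that axis; the row-spacing parameter and synthetic points are assumptions:

```python
import math

def principal_direction(points):
    """Unit vector of the dominant direction: the eigenvector of the 2x2
    covariance matrix belonging to its largest eigenvalue."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2 * sxy))
    vx, vy = lam - syy, sxy          # eigenvector for eigenvalue lam
    norm = math.hypot(vx, vy) or 1.0
    return vx / norm, vy / norm

def group_rows(points, row_spacing):
    """Bin points by their signed perpendicular offset from the dominant
    axis; each bin index corresponds to one detected row."""
    dx, dy = principal_direction(points)
    rows = {}
    for x, y in points:
        offset = -dy * x + dx * y    # perpendicular distance to the axis
        rows.setdefault(round(offset / row_spacing), []).append((x, y))
    return rows

# Two synthetic rows along the x-axis, 1 m apart:
pts = [(x * 0.5, 0.0) for x in range(10)] + [(x * 0.5, 1.0) for x in range(10)]
print(len(group_rows(pts, 1.0)))  # -> 2
```

A real pipeline would first project the 3D point cloud onto the ground plane and filter outliers; the grouping step is where row gaps from removed trees would otherwise be misread as the end of a row, which is what the graph-based localization in the paper compensates for.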
Procedia PDF Downloads 139
63 An Investigation into Why Very Few Small Start-Up Businesses Survive for Longer Than Three Years: An Explanatory Study in the Context of Saudi Arabia
Authors: Motaz Alsolaim
Abstract:
Nowadays, the challenges of running a start-up can be very complex and are perhaps more difficult than at any other time in the past. Changes in technology, manufacturing innovation, and product development, combined with intense competition and market regulations, are factors that have put pressure on classic ways of managing firms, thereby forcing change. As a result, the rate of closure, exit, or discontinuation of start-ups and young businesses is very high. Despite the essential role of small firms in an economy, they still tend to face obstacles that exert a negative influence on their performance and rate of survival. In fact, it is not easy to determine with any certainty the reasons why small firms fail. For this reason, failure itself is not clearly defined, and its exact causes are hard to diagnose. In the current study, therefore, the barriers to survival are covered broadly, especially personal/entrepreneurial, enterprise, and environmental factors, with regard to various possible reasons for failure, in order to determine the best solutions and make appropriate recommendations. Methodology: It could be argued that mixed methods might help to improve entrepreneurship research by addressing challenges emphasized in previous studies and achieving triangulation. Calls for the combined use of quantitative and qualitative research have also been made in the entrepreneurship field, since entrepreneurship is a multi-faceted area of research. Therefore, an explanatory sequential mixed method was used: an online questionnaire survey of entrepreneurs, followed by semi-structured interviews. Over 750 surveys were collected, of which 296 were valid; 13 interviews were then conducted with senior government officials, successful entrepreneurs, and unsuccessful entrepreneurs.
Findings: The first-phase (quantitative) findings show the obstacles to survival. Among personal/entrepreneurial factors, past work experience and a lack of skills and interest are positive factors (i.e., associated with failure), while the owner's gender, age, and education level are negative factors. Among internal factors, a lack of marketing research and weak business planning are positive. Among environmental factors, from the economic perspective, difficulty in finding labor is a positive factor, while from the socio-cultural perspective, social restrictions and traditions were found to be negative factors. On the other hand, from the political perspective, the cost of compliance and insufficient government plans were found to be positive factors for small business failure, and from the infrastructure perspective, a lack of skilled labor, a high level of bureaucracy, and a lack of information are positive factors. Conclusion: This paper serves to enrich the understanding of failure factors in the MENA region, and more precisely in Saudi Arabia, by minimizing the probability of failure of small and micro entrepreneurial start-ups in the light of the Saudi government's Vision 2030 plan.
Keywords: small business barriers, start-up business, entrepreneurship, Saudi Arabia
Procedia PDF Downloads 177
62 Augmented Reality Enhanced Order Picking: The Potential for Gamification
Authors: Stavros T. Ponis, George D. Plakas-Koumadorakis, Sotiris P. Gayialis
Abstract:
Augmented Reality (AR) can be defined as a technology which takes the capabilities of computer-generated display, sound, text, and effects to enhance the user's real-world experience by overlaying virtual objects onto the real world. By doing so, AR is capable of providing a vast array of work support tools, which can significantly increase employee productivity, enhance existing job training programs by making them more realistic, and, in some cases, introduce completely new forms of work and task execution. One of the most promising AR industrial applications, as the literature shows, is the use of head-worn displays (HWDs), monocular or binocular, to support logistics and production operations such as order picking, part assembly, and maintenance. This paper presents the initial results of an ongoing research project for the introduction of a dedicated AR-HWD solution to the picking process of a Distribution Center (DC) in Greece operated by a large Telecommunication Service Provider (TSP). In that context, the proposed research aims to determine whether gamification elements should be integrated in the functional requirements of the AR solution, such as providing points for reaching objectives and creating leaderboards and awards (e.g. badges) for general achievements. Up to now, there is ambiguity about the impact of gamification on logistics operations, since the gamification literature mostly focuses on non-industrial organizational contexts such as education and customer/citizen-facing applications such as tourism and health. By contrast, the gamification efforts described in this study focus on one of the most labor-intensive and workflow-dependent logistics processes, i.e. Customer Order Picking (COP). Although introducing AR in COP undoubtedly creates significant opportunities for workload reduction and increased process performance, the added value of gamification is far from certain.
This paper aims to provide insights on the suitability and usefulness of AR-enhanced gamification in the hard and very demanding environment of a logistics center. In doing so, it utilizes a review of the current state of the art regarding gamification of production and logistics processes, coupled with the results of questionnaire-guided interviews with industry experts, i.e. logisticians, warehouse workers (pickers), and AR software developers. The findings of the proposed research aim to contribute towards a better understanding of AR-enhanced gamification, the organizational change it entails, and the consequences it potentially has for all implicated entities in the often highly standardized and structured work required in the logistics setting. The interpretation of these findings will support logisticians' decisions regarding the introduction of gamification in their logistics processes by providing them with useful insights and guidelines originating from a real-life case study of a large DC operating more than 300 retail outlets in Greece.
Keywords: augmented reality, technology acceptance, warehouse management, vision picking, new forms of work, gamification
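The gamification elements under consideration above (points for objectives, badges, leaderboards) can be sketched minimally as follows; the class name, point values, and badge thresholds are illustrative assumptions, not requirements from the project:

```python
# Minimal sketch of a picking-gamification layer: points per completed
# pick, threshold badges, and a leaderboard. All values are assumed.
class PickerScore:
    BADGES = {50: "bronze", 100: "silver", 200: "gold"}

    def __init__(self, name):
        self.name, self.points, self.badges = name, 0, []

    def complete_pick(self, lines_picked, error_free=True):
        # Points per order line, with a bonus for error-free picks.
        self.points += lines_picked * (2 if error_free else 1)
        for threshold, badge in self.BADGES.items():
            if self.points >= threshold and badge not in self.badges:
                self.badges.append(badge)

def leaderboard(pickers):
    """Pickers ranked by points, highest first."""
    return sorted(pickers, key=lambda p: p.points, reverse=True)

a, b = PickerScore("A"), PickerScore("B")
a.complete_pick(30)
b.complete_pick(20, error_free=False)
print(leaderboard([a, b])[0].name, a.points, a.badges)
```

Whether such a layer helps or distracts in a highly standardized picking workflow is exactly the open question the study sets out to answer.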
Procedia PDF Downloads 150
61 Globalisation and Diplomacy: How Can Small States Improve the Practice of Diplomacy to Secure Their Foreign Policy Objectives?
Authors: H. M. Ross-McAlpine
Abstract:
Much of what is written on diplomacy, globalization, and the global economy addresses the changing nature of relationships between major powers. While the most dramatic and influential changes have resulted from these developing relationships, the world is not, on deeper inspection, governed neatly by major powers. Due to advances in technology, the shifting balance of power, and a changing geopolitical order, small states have the ability to exercise greater influence than ever before. Increasingly interdependent and ever more complex, our world is too delicate to be handled by a mighty few. The pressure of global change requires small states to adapt their diplomatic practices and diversify their strategic alliances and relationships. The nature and practice of diplomacy must be re-evaluated in light of the pressures resulting from globalization. This research examines how small states can best secure their foreign policy objectives. Small state theory is used as a foundation for exploring the case study of New Zealand. The research draws on secondary sources to evaluate the existing theory in relation to modern practices of diplomacy. As New Zealand lacks the economic and military power required to play an active, influential role in international affairs, what strategies does it use to exert influence? Furthermore, New Zealand lies in a remote corner of the Pacific and is geographically isolated from its nearest neighbors; how does this affect security and trade priorities? The findings note a significant shift since the 1970s in New Zealand’s diplomatic relations. This shift is arguably a direct result of globalization, regionalism, and a growing independence from traditional bilateral relationships. The need to source predictable trade, investment, and technology is an essential driving force for New Zealand’s diplomatic relations.
A lack of hard power aligns New Zealand’s prosperity with a secure, rules-based international system that increases the likelihood of a stable and secure global order. New Zealand’s diplomacy and prosperity have been intrinsically reliant on its reputation. A vital component of New Zealand’s diplomacy is preserving a reputation for integrity and global responsibility. It is the use of this soft power that facilitates the influence New Zealand enjoys on the world stage. To weave a comprehensive network of successful diplomatic relationships, New Zealand must maintain a reputation of international credibility. Globalization has substantially influenced the practice of diplomacy for New Zealand. The current world order places economic and military might in the hands of a few, subsequently requiring smaller states to use other means to secure their interests. There are clear strategies evident in New Zealand’s diplomatic practice that draw attention to how other smaller states might best secure their foreign policy objectives. While these findings are limited, as with all case study research, there is value in applying them to other small states struggling to secure their interests in the wake of rapid globalization.
Keywords: diplomacy, foreign policy, globalisation, small state
Procedia PDF Downloads 396
60 Navigating Complex Communication Dynamics in Qualitative Research
Authors: Kimberly M. Cacciato, Steven J. Singer, Allison R. Shapiro, Julianna F. Kamenakis
Abstract:
This study examines the dynamics of communication among researchers and participants who have various levels of hearing, use multiple languages, have various disabilities, and come from different social strata. This qualitative methodological study focuses on the strategies employed in an ethnographic research study examining the communication choices of six sets of parents who have Deaf-Disabled children. The participating families varied in their communication strategies and preferences, including the use of American Sign Language (ASL), visual-gestural communication, multiple spoken languages, and pidgin forms of each of these. The research team consisted of two undergraduate students proficient in ASL and a Deaf principal investigator (PI) who uses ASL and speech as his main modes of communication. A third, Hard-of-Hearing undergraduate student fluent in ASL served as an objective facilitator of the data analysis. The team created reflexive journals by audio recording, free writing, and responding to team-generated prompts. They discussed interactions between the members of the research team, their evolving relationships, and various social and linguistic power differentials. The researchers reflected on communication during data collection, their experiences with one another, and their experiences with the participating families. The reflexive journals totaled over 150 pages. The outside research assistant reviewed the journals and developed follow-up open-ended questions and probes to further enrich the data. The PI and the outside research assistant used NVivo qualitative research software to conduct open inductive coding of the data. They individually chunked the data into broad categories through multiple readings and recognized recurring concepts. They compared their categories, discussed them, and decided which they would develop. The researchers continued to read, reduce, and define the categories until they were able to develop themes from the data.
The research team found that the various communication backgrounds and skills present greatly influenced the dynamics between the members of the research team and with the participants of the study. Specifically, the following themes emerged: (1) students acted as communication facilitators, while interpreters were barriers to natural interaction; (2) varied language use simultaneously complicated and enriched data collection; and (3) ASL proficiency and professional position resulted in a social hierarchy among researchers and participants. In the discussion, the researchers reflected on their backgrounds, on the internal biases involved in analyzing the data, and on how social norms and expectations affected their perceptions when writing their journals. Through this study, the research team found that communication and language skills require significant consideration when working with multiple and complex communication modes. The researchers had to continually assess and adjust their data collection methods to meet the communication needs of the team members and participants. In doing so, the researchers aimed to create an accessible research setting that yielded rich data but learned that this often required compromises from one or more of the research constituents.
Keywords: American Sign Language, complex communication, deaf-disabled, methodology
Procedia PDF Downloads 117
59 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved in the past years as an important means of data authentication and ownership protection. Image and video watermarking are well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike image and video watermarking, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes, which are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial domain watermarking has attracted significant attention in the past years; it can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proved useful in hiding data. However, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object.
An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations according to the modification of the variances of the vertices’ norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizing approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
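The general idea of embedding data by modifying vertex norms relative to the object's center can be sketched as follows; this is a simplified illustration under assumed parameters (bin count, embedding strength), not the paper's exact method:

```python
import math

# Sketch: one watermark bit per bin of vertex norms. Vertices are
# referenced to the object's centroid, their norms are partitioned into
# equal-width bins, and each vertex is scaled radially outward (bit 1)
# or inward (bit 0) according to its bin's assigned bit.
def embed(vertices, bits, strength=0.05):
    cx = sum(v[0] for v in vertices) / len(vertices)
    cy = sum(v[1] for v in vertices) / len(vertices)
    cz = sum(v[2] for v in vertices) / len(vertices)
    norms = [math.dist(v, (cx, cy, cz)) for v in vertices]
    lo, hi = min(norms), max(norms)
    width = (hi - lo) / len(bits) or 1.0
    out = []
    for v, r in zip(vertices, norms):
        b = min(int((r - lo) / width), len(bits) - 1)
        scale = 1.0 + strength if bits[b] else 1.0 - strength
        out.append(tuple(c + (vc - c) * scale
                         for vc, c in zip(v, (cx, cy, cz))))
    return out

cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
marked = embed(cube, [1, 0])
print(len(marked), marked[0] != cube[0])
```

A matching blind detector would recompute the binned norm statistics and read each bit back without the original mesh; the small `strength` keeps the surface distortion low, which is the trade-off the abstract's optimization targets.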
Procedia PDF Downloads 160
58 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence
Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai
Abstract:
The traditional finance theory neglects the role of the sentiment factor in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment more strongly affects stocks that are vulnerable to speculation, hard to value, and risky to arbitrage: small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks, and non-dividend-paying stocks. Since its introduction by the Chicago Board Options Exchange (CBOE) in 1993, the volatility index (VIX) has been used as a measure of expected future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the 'investors' fear gauge' by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety, and pessimistic expectations of investors about the stock market. On the contrary, low levels of the volatility index reflect a confident and optimistic attitude of investors. Based on the above discussion, we investigate whether the market-wide fear level measured by the volatility index is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama and French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for fear-based market sentiment, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period that extends from January 2008 to March 2017.
To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, the change in India VIX is included as an explanatory variable in the Fama-French three-factor model as well as the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variable in the asset pricing regressions. The first portfolio set is a 4x4 sort on size and B/M ratio. The second portfolio set is a 4x4 sort on size and the sensitivity beta to changes in IVIX. The third portfolio set is a 2x3x2 independent triple sort on size, B/M, and the sensitivity beta to changes in IVIX. We find evidence that the size, value, and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation for the findings of the study.
Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing
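The augmented factor regression described above can be sketched as a simple OLS on an intercept plus the factor columns (market excess return, SMB, HML, WML, and the change in IVIX). The data below are synthetic, generated purely for illustration with known loadings and a zero loading on the volatility-change factor; nothing here reproduces the study's actual data or estimates.

```python
import numpy as np

def factor_regression(excess_ret, factors):
    """OLS of portfolio excess returns on risk factors.

    factors: (T, k) matrix, e.g. columns [MKT, SMB, HML, WML, dIVIX].
    Returns (alpha, betas) from the least-squares fit.
    """
    T = len(excess_ret)
    X = np.column_stack([np.ones(T), factors])   # prepend the intercept
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coef[0], coef[1:]

# Synthetic illustration: 120 months of 5 factors, returns built with a known
# loading vector (zero on the dIVIX column) plus small idiosyncratic noise.
rng = np.random.default_rng(0)
F = rng.normal(0, 0.05, size=(120, 5))
true_beta = np.array([1.0, 0.4, 0.3, -0.2, 0.0])
r = F @ true_beta + rng.normal(0, 0.001, 120)
alpha, beta = factor_regression(r, F)
```

A zero estimated loading on the dIVIX column (as in the synthetic data here) would correspond to the paper's finding that the VIX change is not a priced factor.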
Procedia PDF Downloads 252
57 Characterisation, Extraction of Secondary Metabolite from Perilla frutescens for Therapeutic Additives: A Phytogenic Approach
Authors: B. M. Vishal, Monamie Basu, Gopinath M., Rose Havilah Pulla
Abstract:
Though there are several methods of synthesizing silver nanoparticles, green synthesis has a distinct appeal: from cost-effectiveness to ease of synthesis, the process is simplified as far as possible, and it is one of the most explored topics. This study of extracting secondary metabolites from Perilla frutescens and using them for therapeutic additives has its own significance. Unlike previous research, this study aims to synthesize silver nanoparticles from Perilla frutescens using three available forms of the plant: leaves, seeds, and commercial leaf extract powder. Perilla frutescens, commonly known as the 'beefsteak plant', is a perennial plant belonging to the mint family. The plant comprises two varieties: frutescens crispa and frutescens frutescens. The variety frutescens crispa (commonly known as 'shiso' in Japanese) is generally used for edible purposes. Its leaves occur in two forms, varying in color: red with purple streaks, and green with a crinkly pattern. This variety is aromatic due to the presence of two major compounds: polyphenols and perillaldehyde. The red (purple-streaked) form of this plant owes its color to the pigment perilla anthocyanin. The variety frutescens frutescens (commonly known as 'egoma' in Japanese) is the main source of perilla oil. This variety is also aromatic, but in this case the major compound giving the aroma is perilla ketone (egoma ketone). Shiso grows short compared with wild sesame, and both produce seeds; the seeds of wild sesame are large and soft, whereas those of shiso are small and hard. The seeds have a large proportion of lipids, about 38-45 percent, as well as large quantities of omega-3 fatty acids and of linoleic acid, an omega-6 fatty acid. In addition, perilla leaf extract has been reported to contain gold and silver nanoparticles. 
Yield comparisons in all cases were carried out, and the optimal process conditions were adjusted with the efficiencies in mind. The characterization of the secondary metabolites includes GC-MS and FTIR, which identify the components that actually help in synthesizing silver nanoparticles. The silver analysis was done through a series of characterization tests including XRD, UV-Vis, EDAX, and SEM. After the synthesis, toxin analysis was performed with a view to use as therapeutic additives, and the results were tabulated. The synthesis of silver nanoparticles was done in a series of multiple extraction cycles from leaves, seeds, and commercially purchased leaf extract. The yield and efficiency comparisons were done to identify the best and cheapest way of synthesizing silver nanoparticles using Perilla frutescens. The synthesized nanoparticles can be used in therapeutic drugs, which have a wide range of applications from burn treatment to cancer treatment. This will, in turn, replace traditional processes of synthesizing nanoparticles, as this method proves effective in terms of cost and environmental impact.
Keywords: nanoparticles, green synthesis, Perilla frutescens, characterisation, toxin analysis
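One routine step in the XRD characterization mentioned above is estimating the crystallite size of the silver nanoparticles from peak broadening via the Scherrer equation, D = Kλ / (β cos θ). The sketch below is illustrative only: the wavelength is Cu Kα and the peak position/width are plausible placeholder values for the Ag (111) reflection, not measurements from this study.

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Estimate crystallite size (nm) from an XRD peak via the Scherrer
    equation D = K * lambda / (beta * cos(theta)), with beta the peak
    FWHM converted to radians and theta half the diffraction angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative values: Cu K-alpha radiation (0.15406 nm) and a 0.45-degree-wide
# Ag (111) reflection near 2-theta = 38 degrees.
D = scherrer_size(0.15406, 0.45, 38.1)
```

With these placeholder inputs the estimate comes out in the tens of nanometres, the range typically reported for green-synthesized silver nanoparticles.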
Procedia PDF Downloads 233
56 Design of Smart Catheter for Vascular Applications Using Optical Fiber Sensor
Authors: Lamiek Abraham, Xinli Du, Yohan Noh, Polin Hsu, Tingting Wu, Tom Logan, Ifan Yen
Abstract:
In the field of minimally invasive surgery, smart medical instruments such as catheters and guidewires are typically operated from a remote distance to gain access to the diseased artery, often negotiating tortuous, complex, and diseased vessels in the process. Three optical fiber sensors with a diameter of 1.5 mm each, placed 120° apart from each other, are proposed to be mounted in a catheter-based pump device with a diameter of 10 mm. These sensors are configured to solve the challenges surgeons face during insertion through curvy major vessels such as the aortic arch, providing information on wall rubbing and shape sensing. This study presents experimental and mathematical models of the optical fiber sensors with 2 degrees of freedom. Two gear-shaped tubes made of 3D-printed thermoplastic polyurethane (TPU) material are connected. The optical fiber sensors are mounted inside the first tube, which protects them from external light and serves as a TPU prototype for a catheter. The second tube acts as a flat reflector for the light-intensity-modulation-based optical fiber sensors. The first tube is attached to a linear guide for insertion and withdrawal and can be turned manually through 45° by manipulating the tube gear. A 3D hard-material phantom that mimics the anatomical structure of the aortic arch was developed, in which the tests were carried out. During insertion of the sensors into the 3D phantom, datasets were obtained in terms of voltage, distance, and position of the sensors. These datasets reflect the light intensity modulation characteristics of the optical fiber sensors with a plane projection of the aortic arch shape. Mathematical modeling of the light intensity was carried out based on the projection plane and the experimental set-up. 
The performance of the system was evaluated in terms of its accuracy in navigating through the curvature and reporting the position of the sensors, by investigating 40 single insertions of the sensors into the 3D phantom. The experiments demonstrated that the sensors were effectively steered through the 3D phantom curvature to the desired target references in both degrees of freedom. The performance of the sensors echoes the theory of light reflectance: the smaller the radius of curvature, the more of the emitted LED light is reflected and received by the photodiode. The mathematical model results are in good agreement with the experimental results and the operating principle of light intensity modulation of the optical fiber sensors. A prototype catheter using TPU material, with three optical fiber sensors mounted inside, has been developed that is capable of navigating through different radii of curvature with 2 degrees of freedom. The proposed system supports operators with pre-scan data, making maneuvering and bending through curvy major vessels easier, more accurate, and safer. The mathematical modelling accurately fits the experimental results.
Keywords: intensity-modulated optical fiber sensor, mathematical model, plane projection, shape sensing
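The intensity-modulation principle described above can be illustrated with a toy geometric model: an emitting fiber illuminates a cone of light, the reflector returns it, and a fixed receiving core collects a fraction that shrinks as the illuminated spot grows with distance. This is not the paper's mathematical model; it is a generic simplified far-field approximation, and the core radius and cone half-angle below are made-up values.

```python
import math

def received_intensity(d_mm, I0=1.0, a_mm=0.75, theta_deg=12.0):
    """Toy far-field model of a reflective intensity-modulated fiber sensor.

    The emitting cone of half-angle theta illuminates a spot that grows with
    the reflector distance d (round trip ~ 2*d), so the fraction collected by
    a fixed receiving core of radius a falls off roughly as an area ratio."""
    spot = a_mm + 2.0 * d_mm * math.tan(math.radians(theta_deg))  # spot radius
    return I0 * (a_mm / spot) ** 2   # collected fraction ~ core area / spot area

# Received intensity decays monotonically as the reflector moves away.
curve = [received_intensity(d) for d in (0.5, 1.0, 2.0, 4.0)]
```

The monotone intensity-versus-distance curve is what lets a voltage reading from the photodiode be mapped back to a reflector distance, and hence to local curvature.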
Procedia PDF Downloads 252
55 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration
Authors: S. J. Addinell, T. Richard, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power-law relationship for particle size distributions. Several percussive drilling parameters such as RPM, applied fluid pressure, and weight on bit have been shown to influence the particle size distributions of the cuttings generated. This directly influences other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear, and the applied weight on bit can influence the oscillation frequency. Because drilling conditions, and therefore operating parameters, change continuously, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. 
Several techniques to determine the oscillating frequency have been investigated and are presented. With a conventional top-drive drilling rig, spectral analysis of applied fluid pressure, hydraulic feed force pressure, hold-back pressure, and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and therefore another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis
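The spectral-analysis step described above amounts to taking the magnitude spectrum of a surface signal and picking its strongest component. The following sketch shows that idea on a synthetic geophone trace (a 35 Hz tone buried in noise); the sampling rate, tone frequency, and noise level are all invented for the example.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the strongest nonzero-frequency component of a real signal,
    found from the magnitude of its one-sided FFT."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[0] = 0.0                        # ignore the DC offset
    return freqs[np.argmax(spec)]

# Synthetic geophone trace: a 35 Hz "hammer" tone buried in broadband noise,
# sampled at 1 kHz for 4 seconds (0.25 Hz frequency resolution).
fs = 1000.0
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)
trace = np.sin(2 * np.pi * 35.0 * t) + 0.5 * rng.normal(size=t.size)
f_peak = dominant_frequency(trace, fs)
```

In practice the hammer line would be tracked over successive windows of the geophone signal so the driller sees the oscillation frequency evolve in real time.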
Procedia PDF Downloads 229
54 New Findings on the Plasma Electrolytic Oxidation (PEO) of Aluminium
Authors: J. Martin, A. Nominé, T. Czerwiec, G. Henrion, T. Belmonte
Abstract:
Plasma electrolytic oxidation (PEO) is a particular electrochemical process used to produce protective oxide ceramic coatings on lightweight metals (Al, Mg, Ti). When applied to aluminum alloys, the resulting PEO coatings exhibit improved wear and corrosion resistance because thick, hard, compact, and adherent crystalline alumina layers can be achieved. Several investigations have been carried out to improve the efficiency of the PEO process, and one particular approach consists of tuning a suitable electrical regime. Despite the considerable interest in this process, there is still no clear understanding of the underlying discharge mechanisms that make metal oxidation possible up to hundreds of µm through the ceramic layer. A key feature that governs the PEO process is the numerous short-lived micro-discharges (micro-plasmas in liquid) that occur continuously over the processed surface when the high applied voltage exceeds the critical dielectric breakdown value of the growing ceramic layer. By using a bipolar pulsed current to supply the electrodes, we previously observed that micro-discharges are delayed with respect to the rising edge of the anodic current. Nevertheless, the origin of this phenomenon is still not clear and needs more systematic investigation. The aim of the present communication is to identify the relationship between this delay and the mechanisms responsible for the oxide growth. For this purpose, the delay of micro-discharge ignition is investigated as a function of various electrical parameters such as the current density (J), the current pulse frequency (F), and the anodic-to-cathodic charge quantity ratio (R = Qp/Qn) delivered to the electrodes. The PEO process was conducted on Al2214 aluminum alloy substrates in a solution containing potassium hydroxide (KOH) and sodium silicate diluted in deionized water. 
The light emitted from micro-discharges was detected by a photomultiplier, and the micro-discharge parameters (number, size, lifetime) were measured during the process by means of ultra-fast video imaging (125 kframes/s). SEM observations and roughness measurements were performed to characterize the morphology of the elaborated oxide coatings, while XRD was carried out to evaluate the amount of the corundum α-Al2O3 phase. Results show that, whatever the applied current waveform, the delay of micro-discharge appearance increases as the process goes on. Moreover, the delay is shorter when the current density J (A/dm2), the current pulse frequency F (Hz), and the charge quantity ratio R are high. It also appears that shorter delays are associated with stronger micro-discharges (localized, long, and large micro-discharges), which have a detrimental effect on the elaborated oxide layers (thin and porous). On the basis of these results, a model for the growth of the PEO oxide layers will be presented and discussed. The experimental results support a mechanism of electrical charge accumulation at the oxide surface/electrolyte interface that takes place until the dielectric breakdown occurs and micro-discharges appear.
Keywords: aluminium, micro-discharges, oxidation mechanisms, plasma electrolytic oxidation
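The charge quantity ratio R = Qp/Qn used above is obtained by integrating the anodic and cathodic halves of the bipolar current waveform over a pulse period. A minimal sketch, with a made-up square bipolar pulse (the amplitudes and durations are illustrative, not the study's settings):

```python
def charge_ratio(current, dt):
    """Integrate a sampled bipolar current waveform (A, sample step dt in s)
    to get the anodic charge Qp, the cathodic charge Qn, and R = Qp / Qn."""
    qp = sum(i * dt for i in current if i > 0)    # anodic (positive) charge, C
    qn = -sum(i * dt for i in current if i < 0)   # cathodic (negative) charge, C
    return qp, qn, qp / qn

# Illustrative square bipolar pulse sampled at 10 us:
# +10 A for 1 ms, then -12 A for 0.75 ms.
dt = 1e-5
wave = [10.0] * 100 + [-12.0] * 75
Qp, Qn, R = charge_ratio(wave, dt)
```

Here Qp = 0.01 C and Qn = 0.009 C, giving R ≈ 1.11; in the study, R is one of the control parameters varied to probe the micro-discharge ignition delay.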
Procedia PDF Downloads 264
53 Screening of Freezing Tolerance in Eucalyptus Genotypes (Eucalyptus spp.) Using Chlorophyll Fluorescence, Ionic Leakage, Proline Accumulation and Stomatal Density
Authors: S. Lahijanian, M. Mobli, B. Baninasab, N. Etemadi
Abstract:
Low-temperature extremes are among the major stresses that adversely affect plant growth and productivity. Cold stress causes oxidative stress and physiological, morphological, and biochemical changes in plant cells. Generally, low temperatures, like salinity and drought, exert their negative effects mainly by disrupting the ionic and osmotic equilibrium of plant cells. Changes in climatic conditions leading to more frequent extreme conditions will require adapted crop species on a larger scale in order to sustain agricultural production. Eucalyptus is a diverse genus of flowering trees (and a few shrubs) in the myrtle family, Myrtaceae; members of this genus dominate the tree flora of Australia. The eucalyptus genus contains more than 580 species and a large number of cultivars, which are native to Australia. The large distribution and diversity of compatible eucalyptus cultivars reflect the ecological flexibility of eucalyptus. Some eucalyptus cultivars can withstand harsh environmental conditions such as high and low temperature, salinity, high pH, drought, chilling, and freezing, which strongly affect crops of tropical and subtropical origin. In this study, we evaluated the freezing tolerance of 12 eucalyptus genotypes by means of four different morphological and physiological methods: chlorophyll fluorescence, electrolyte leakage, proline accumulation, and stomatal density. The studied genotypes include Eucalyptus camaldulensis, E. coccifera, E. darlympleana, E. erythrocorys, E. glaucescens, E. globulus, E. gunnii, E. macrocorpa, E. microtheca, E. rubida, E. tereticornis, and E. urnigera. Except for stomatal density recording, in all other methods, plants were exposed to five gradual temperature drops: 0, -5, -10, -15, and -20 degrees centigrade, and they remained at these temperatures for at least one hour. The chlorophyll fluorescence experiment showed that genotypes E. erythrocorys and E. 
camaldulensis were the most resistant genotypes, while E. gunnii and E. coccifera were more sensitive than the other genotypes to freezing stress. In the electrolyte leakage experiment, with regard to the significant interaction between cultivar and temperature, genotypes E. erythrocorys and E. macrocorpa were shown to be the most tolerant, while E. gunnii, E. urnigera, E. microtheca, and E. tereticornis, with higher ionic leakage percentages, proved more sensitive to low temperatures. The results of the proline experiment confirmed that the genotype most resistant to freezing stress is E. erythrocorys. In the stomatal density experiment, the number of stomata per microscopic field (0.0605 mm2) was counted; the results showed that E. erythrocorys and E. macrocorpa had the maximum, and E. coccifera and E. darlympleana the minimum, number of stomata per field. In conclusion, E. erythrocorys was identified as the most tolerant genotype, while E. gunnii was classified as the most freezing-susceptible genotype in this investigation. Furthermore, no remarkable correlation was obtained between stomatal density and the other cold stress measures.
Keywords: chlorophyll fluorescence, cold stress, ionic leakage, proline, stomatal density
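The ionic leakage percentages compared above are conventionally computed as the conductivity of the bathing solution after the freezing treatment relative to the total conductivity after the tissue is fully killed (e.g. by autoclaving). A minimal sketch of that index, with invented conductivity readings for a tolerant and a sensitive genotype:

```python
def electrolyte_leakage(ec_stressed, ec_total):
    """Relative electrolyte leakage (%) of a leaf sample: conductivity after
    the freezing treatment divided by the total conductivity measured after
    the tissue is completely killed, times 100."""
    return 100.0 * ec_stressed / ec_total

# Illustrative readings (microS/cm) at one freezing temperature; the numbers
# are hypothetical, not measurements from this study.
tolerant = electrolyte_leakage(18.0, 120.0)    # low leakage -> intact membranes
sensitive = electrolyte_leakage(75.0, 110.0)   # high leakage -> membrane damage
```

Lower leakage percentages at a given freezing temperature indicate less membrane damage and hence greater freezing tolerance.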
Procedia PDF Downloads 265
52 The Impact of Developing an Educational Unit in the Light of Twenty-First Century Skills in Developing Language Skills for Non-Arabic Speakers: A Proposed Program for Application to Students of Educational Series in Regular Schools
Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla
Abstract:
The era of the knowledge explosion in which we live requires us to develop educational curricula quantitatively and qualitatively to adapt to the twenty-first-century skills of critical thinking, problem-solving, communication, cooperation, creativity, and innovation. The process of developing a curriculum is as significant as building it; in fact, developing curricula may be more difficult than building them. Curriculum development includes analyzing needs, setting goals, designing the content and educational materials, creating language programs, developing teachers, applying the programmes in schools, monitoring and feedback, and then evaluating the language programme resulting from these processes. Looking back at the history of language teaching during the twentieth century, we find that developing the delivery method is the most crucial aspect of change in language teaching doctrines. A delivery method in teaching is a systematic set of teaching practices based on a specific theory of language acquisition. This is a key consideration, as the development process must include all the curriculum elements in their comprehensive sense: linguistic and non-linguistic. The various Arabic curricula provide the student with a set of units, each unit consisting of a set of linguistic elements. These elements are often not logically arranged and, more importantly, they neglect essential points while highlighting other, less important ones. Moreover, the educational curricula entail a great deal of monotony in the presentation of content, which makes it hard for the teacher to select adequate content; the teacher often navigates among diverse references to prepare a lesson and hardly finds a suitable one. Similarly, the student often gets bored when learning the Arabic language and fails to make considerable progress in it. 
Therefore, the problem is not a lack of curricula; the problem is developing the curriculum, with all its linguistic and non-linguistic elements, in accordance with contemporary challenges and standards for teaching foreign languages. The Arabic library suffers from a lack of references on curriculum development. In this paper, the researcher investigates the elements of development, such as the teacher, content, methods, objectives, evaluation, and activities. Hence, a set of general guidelines in the field of educational development was reached. The paper highlights the need to identify weaknesses in educational curricula, to decide which twenty-first-century skills must be employed in Arabic education curricula, and to employ foreign language teaching standards in current Arabic curricula. The researcher assumes that the existing series for teaching Arabic to speakers of other languages in regular schools do not address the skills of the twenty-first century, which is what the researcher tries to remedy in the proposed unit. The study uses the experimental method, based on two groups: experimental and control. The development of an educational unit will help build suitable educational series for students of the Arabic language in regular schools, in which twenty-first-century skills and standards for teaching foreign languages are addressed, making the series more useful and attractive to students.
Keywords: curriculum, development, Arabic language, non-native, skills
Procedia PDF Downloads 84
51 The Role of Metaheuristic Approaches in Engineering Problems
Authors: Ferzat Anka
Abstract:
Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and cause inefficient use of resources. In particular, different approaches may be required for solving the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve; such problems are called NP-hard (nondeterministic polynomial time hard) in the literature. The main reasons for recommending metaheuristic algorithms for various problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices, and this approach can also be used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach. It is based on chaos theory and helps the underlying algorithm improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. This algorithm distinguishes four types of chimpanzee groups: attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the diverse intelligence and sexual motivations of chimpanzees. However, the original algorithm is less successful in convergence rate and in escaping local optimum traps when solving high-dimensional problems. 
Although ChOA and some of its variants use strategies to overcome these problems, these are observed to be insufficient. Therefore, in this study, a newly expanded variant is described. In this algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy in multidimensional problems, aiming at success in solving global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; and 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical and constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than, or equivalently to, the compared algorithms.
Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems
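The chaotic ingredient mentioned above is typically a deterministic map (e.g. the logistic map) whose output replaces a uniform random coefficient in the position-update rule. The sketch below shows that generic pattern only: the update rule is a simplified toy, not the actual Ex-ChOA equations, and all parameter values are illustrative.

```python
def logistic_map(x0=0.7, r=4.0, n=100):
    """Generate a chaotic sequence on [0, 1] with the logistic map
    x_{k+1} = r * x_k * (1 - x_k); r = 4 gives fully chaotic behaviour."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_position_update(pos, best, m):
    """Toy ChOA-style update: pull each coordinate of a search agent toward
    the current best solution, scaling the step with a chaotic coefficient
    m_k instead of a uniform random draw."""
    return [p + m_k * (b - p) for p, b, m_k in zip(pos, best, m)]

seq = logistic_map(n=3)
new_pos = chaotic_position_update([0.0, 2.0], [1.0, 1.0], seq[:2])
```

Because the chaotic sequence is ergodic but never settles, it tends to sustain population diversity longer than a plain pseudo-random draw, which is the motivation cited for chaotic variants.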
Procedia PDF Downloads 77
50 Design and Synthesis of an Organic Material with High Open Circuit Voltage of 1.0 V
Authors: Javed Iqbal
Abstract:
The growing energy needs of human society and the depletion of conventional energy sources demand a renewable, safe, abundant, low-cost, and omnipresent energy source. One of the most suitable ways to solve the foreseeable world energy crisis is to use the power of the sun. Photovoltaic devices are of especially wide interest as they can convert solar energy to electricity. Currently, the best-performing solar cells are silicon-based. However, silicon cells are expensive, rigid in structure, and have a long payback time for cost and energy. Organic photovoltaic cells are cheap, flexible, and can be manufactured in a continuous process; they are therefore an extremely favorable alternative. Organic photovoltaic cells utilize sunlight and convert it into electricity through the use of conductive polymers or small molecules to separate electrons and electron holes. A major challenge for these new organic photovoltaic cells is their efficiency, which is low compared with traditional silicon solar cells. To overcome this challenge, two straightforward strategies are usually considered: (1) reducing the band gap of molecular donors to broaden the absorption range, which results in a higher short-circuit current density (JSC) of the devices, and (2) lowering the highest occupied molecular orbital (HOMO) energy of molecular donors so as to increase the open-circuit voltage (VOC) of the devices. Keeping in mind the cost of chemicals, it is hard to try many materials on a trial basis. The best way is to identify suitable materials computationally: we design molecules based on our organic chemistry knowledge and determine their physical and electronic properties. 
In this study, we carried out DFT calculations with different options to attain a high open-circuit voltage and, after obtaining suitable data from the calculations, synthesized a novel D–π–A–π–D type low-band-gap small-molecular donor material (ZOPTAN-TPA). The arylene-vinylene-based bis(arylhalide) unit containing a cyanostilbene unit acts as a low-band-gap electron-accepting block and is coupled with triphenylamine electron-donating end groups. The motivation for choosing triphenylamine (TPA) as the capping donor is its important role in stabilizing the hole separated from an exciton and thus improving the hole-transporting properties of the material. A π-bridge (thiophene) is inserted between the donor and acceptor units to reduce the steric hindrance between them and to improve the planarity of the molecule. The ZOPTAN-TPA molecule features a low HOMO level of -5.2 eV and an optical energy gap of 2.1 eV. Champion OSCs based on a solution-processed, non-annealed active-material blend of [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) and ZOPTAN-TPA in a mass ratio of 2:1 exhibit a power conversion efficiency of 1.9% and a high open-circuit voltage of over 1.0 V.
Keywords: high open circuit voltage, donor, triphenylamine, organic solar cells
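The link between a deep donor HOMO and a high VOC is often expressed with the empirical Scharber-type estimate VOC ≈ (|E_HOMO,donor| − |E_LUMO,acceptor|)/e − 0.3 V. The sketch below takes the donor HOMO of −5.2 eV from the abstract, but the PCBM LUMO value is an assumption chosen for illustration; reported PCBM LUMO values span roughly −3.7 to −4.3 eV, so this is a rough estimate, not the study's calculation.

```python
def scharber_voc(homo_donor_ev, lumo_acceptor_ev, loss_v=0.3):
    """Empirical (Scharber-type) open-circuit voltage estimate for a
    donor/fullerene blend: Voc ~ (|E_HOMO,donor| - |E_LUMO,acceptor|)/e
    minus an empirical ~0.3 V loss term."""
    return abs(homo_donor_ev) - abs(lumo_acceptor_ev) - loss_v

# Donor HOMO of -5.2 eV from the abstract; the PCBM LUMO of -3.9 eV is a
# hypothetical value used purely to make the arithmetic concrete.
voc = scharber_voc(-5.2, -3.9)
```

With these inputs the estimate lands near 1.0 V, consistent in order of magnitude with the measured open-circuit voltage reported for the ZOPTAN-TPA:PCBM devices.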
Procedia PDF Downloads 239
49 Superparamagnetic Core Shell Catalysts for the Environmental Production of Fuels from Renewable Lignin
Authors: Cristina Opris, Bogdan Cojocaru, Madalina Tudorache, Simona M. Coman, Vasile I. Parvulescu, Camelia Bala, Bahir Duraki, Jeroen A. Van Bokhoven
Abstract:
The tremendous achievements in the development of society, concretized by ever more sophisticated materials and systems, are largely based on non-renewable resources. Consequently, after more than two centuries of intensive development, we are faced with, among other things, decreasing fossil fuel reserves, an increased impact of greenhouse gases on the environment, and economic effects caused by fluctuations in oil and mineral resource prices. The use of biomass may solve part of these problems, and recent analyses have demonstrated that, from the perspective of reducing carbon dioxide emissions, its valorization may bring important advantages, conditioned on the use of genetically modified fast-growing trees or wastes as primary sources. In this context, the abundance and complex structure of lignin offer various possibilities for exploitation. However, its transformation into fuels or chemicals requires complex chemistry involving the cleavage of C-O and C-C bonds and the alteration of functional groups. Chemistry has offered various solutions in this sense; however, despite intense work, many drawbacks still limit industrial application. The proposed technologies have mainly considered homogeneous catalysts, meaning expensive noble-metal-based systems that are hard to recover at the end of the reaction. Also, the reactions were carried out in organic solvents that are no longer acceptable from an environmental point of view. To avoid these problems, the concept of this work was to investigate the synthesis of superparamagnetic core-shell catalysts for the fragmentation of lignin directly in the aqueous phase. The magnetic nanoparticles were covered with a nanoshell of an oxide (niobia) with a double role: to protect the magnetic nanoparticles and to provide a proper (acidic) catalytic function. On this composite, cobalt nanoparticles were deposited in order to catalyze the C-C bond splitting. 
With this purpose, we developed a protocol to prepare multifunctional, magnetically separable nano-composite Co@Nb2O5@Fe3O4 catalysts. We also established an analytic protocol for the identification and quantification of the fragments resulting from lignin depolymerization in both the liquid and the solid phase. The fragmentation of various lignins occurred on the prepared materials in high yields and with very good selectivity to the desired fragments. Optimization of the catalyst composition indicated a cobalt loading of 4 wt% as optimal. Working at 180 °C and 10 atm H2, this catalyst allowed a lignin conversion of up to 60%, leading to a mixture containing over 96% C20-C28 and C29-C37 fragments, which were then completely fragmented to C12-C16 in a second stage. The investigated catalysts were completely recyclable, and no leaching of their constituent elements was detected by inductively coupled plasma optical emission spectrometry (ICP-OES).
Keywords: superparamagnetic core-shell catalysts, environmental production of fuels, renewable lignin, recyclable catalysts
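The conversion and selectivity figures quoted above follow the usual mass-balance definitions: conversion is the fraction of lignin consumed, and selectivity is each product fraction's share of the recovered products. A minimal sketch with invented masses chosen only to echo the reported ~60% conversion and >96% selectivity to the C20-C37 windows:

```python
def conversion_and_selectivity(feed_g, unreacted_g, products_g):
    """Mass-based conversion of the feed and selectivity of each product
    fraction (its share of the total recovered product mass)."""
    conversion = (feed_g - unreacted_g) / feed_g
    total_products = sum(products_g.values())
    selectivity = {name: m / total_products for name, m in products_g.items()}
    return conversion, selectivity

# Hypothetical mass balance (grams): 10 g lignin in, 4 g unreacted.
conv, sel = conversion_and_selectivity(
    10.0, 4.0, {"C20-C28": 3.2, "C29-C37": 2.6, "other": 0.2})
```

With these placeholder numbers the conversion is 60% and the two target fragment windows together account for over 96% of the products, mirroring the abstract's figures.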
Procedia PDF Downloads 328
48 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers While Construction of Tunnel Shaft
Authors: Saurabh Sharma
Abstract:
Introduction: The case study deals with the ground stabilisation of Nabi Karim Metro Station in Delhi, India, where extremely complex geology was encountered while excavating the tunnelling shaft for launching the Tunnel Boring Machine. The borelog investigation and the Seismic Refraction Technique (SRT) indicated the presence of an extremely hard rocky mass from a depth of only 3-4 m, and accordingly, the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3 m onwards and of Grade-III and better rock from 5-6 m onwards. Accordingly, it was planned to retain the ground by providing secant piles all around the launching shaft and then excavating the shaft vertically after leaving a berm of 1.5 m to prevent the secant piles from becoming exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing was proposed, which is normal practice in such strata. However, as the depth of excavation increased, the rock quality kept decreasing at an unexpected pace, with the Grade-III rock mass at 5-6 m giving way to a conglomerate formation at a depth of 15 m. Such worsening of the geology, from high-grade rock to a slushy conglomerate formation, could not have been predicted and came as a surprise even to the best geotechnical engineers. Since the excavation had already been cut vertically to manage the shaft size, execution continued with enhanced caution to stabilise the side slopes. But when the shaft work was about to finish, a collapse occurred on one side of the excavation shaft. This collapse was unexpected, since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter, and depth of the rock bolts had already been readjusted to accommodate the rock fractures.
The above scenario was baffling even to the best geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme would have to be designed to ensure safe completion of the works. Accordingly, the following revisions to the excavation scheme were made: the excavation would be carried out while maintaining a slope based on the type of soil/rock; the rock bolt type was changed from SN rock bolts to self-drilling anchors; the grid size of the bolts was changed based on real-time assessment; the excavation was carried out by implementing a 'Bench Release Approach'; and an aggressive real-time instrumentation scheme was adopted. Discussion: The above case study again asserts the vital importance of correct interpretation of the geological strata and the need for real-time revision of construction schemes based on actual site data. The excavation is successfully being completed with the above revised scheme, and further details of the revised slope stabilisation scheme, instrumentation schemes, and monitoring results, along with actual site photographs, shall form part of the final paper.
Keywords: unconfined compressive strength (ucs), rock mass rating (rmr), rock bolts, self drilling anchors, face mapping of rock, secant pile, shotcrete
Procedia PDF Downloads 66
47 Modern Detection and Description Methods for Natural Plants Recognition
Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert
Abstract:
"Green planet" is one of Earth's names; scientifically, Earth is a terrestrial planet and the fifth largest planet of the solar system. Plants are not distributed evenly around the world, and even the variation of plant species differs within a single region. The presence of plants is not limited to one field such as botany; they appear in other fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence is all the more manifest since no other living species could exist on Earth without plants, which also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, and these roles affect the environment and climate. Plants are also the main components of agricultural activities, from which many countries benefit; plants therefore have an impact on the political and economic situations and futures of countries. Due to the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that can contribute to other research and to future studies. Moreover, plants survive in different places and regions by means of adaptations; adaptations are the special factors that help them in hard living conditions. Weather is one of the parameters that affect plant life and existence in an area, and recognition of plants under different weather conditions opens a new window of research in the field. Only natural images are usable when weather conditions are considered as new factors; the resulting system is therefore generalized and useful. In order to obtain a general system, the distance from the camera to the plants is considered as another factor.
The other factor considered is the change of light intensity in the environment over the course of the day. Adding these factors makes it a substantial challenge to devise an accurate and robust system. Development of an efficient plant recognition system is essential and effective. One important component of a plant is the leaf, which can be used to implement automatic systems for plant recognition without any human interaction. Due to the nature of the images used, a characteristic investigation of the plants was carried out, and leaves were the first characteristics selected as reliable parts. Four different plant species were specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale-Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed. In addition to the comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT
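The detection stage of the methods named above can be illustrated with a minimal, dependency-free sketch of the Harris corner response (the detector underlying HARRIS-SIFT). The synthetic image, window size, and constant k are illustrative assumptions; a real pipeline would use an optimized implementation such as OpenCV's.

```python
def harris_response(img, k=0.04, win=1):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor summed over a (2*win+1)^2 window of image gradients."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0  # central differences
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    r = [[0.0] * w for _ in range(h)]
    for y in range(win + 1, h - win - 1):
        for x in range(win + 1, w - win - 1):
            sxx = syy = sxy = 0.0
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            r[y][x] = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
    return r

# synthetic 12x12 test image: a bright 4x4 square on a dark background
img = [[0.0] * 12 for _ in range(12)]
for y in range(4, 8):
    for x in range(4, 8):
        img[y][x] = 1.0
R = harris_response(img)
```

The response is strongest at the square's corners (e.g. R[4][4]), weaker along its edges, and zero in flat regions; in HARRIS-SIFT these high-response locations become keypoints that are then described with the SIFT descriptor.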
Procedia PDF Downloads 276
46 Scanning Transmission Electron Microscopic Analysis of Gamma Ray Exposed Perovskite Solar Cells
Authors: Aleksandra Boldyreva, Alexander Golubnichiy, Artem Abakumov
Abstract:
Various perovskite materials show surprisingly high resistance towards high-energy electrons, protons, and hard ionizing radiation such as X-rays and gamma rays. This superior radiation hardness makes the family of perovskite semiconductors an attractive candidate for single- and multijunction solar cells for the space environment and for X-ray and gamma-ray detectors. One of the methods to study the radiation hardness of different materials is to expose them to gamma photons with high energies (above 500 keV). Herein, we have explored the recombination dynamics and defect concentration of a mixed-cation mixed-halide perovskite Cs0.17FA0.83PbI1.8Br1.2 with a 1.74 eV bandgap after exposure to a gamma-ray source (2.5 Gy/min). We performed an advanced STEM-EDX analysis to reveal the different types of defects formed during gamma exposure. It was found that a 10 kGy dose results in a significant improvement of the perovskite crystallinity and a homogeneous distribution of I ions. While the absorber layer withstood the gamma exposure, the hole transport layer (PTAA) as well as the indium tin oxide (ITO) were significantly damaged, which increased the interface recombination rate and reduced the fill factor of the solar cells. Thus, STEM analysis is a powerful technique that can reveal defects formed by gamma exposure in perovskite solar cells. Methods: Data will be collected from perovskite solar cells (PSCs) and thin films exposed to a gamma-ray source. For the thin films, 50 μL of the Cs0.17FA0.83PbI1.8Br1.2 solution in DMF was deposited (dynamically) at 3000 rpm, followed by quenching with 100 μL of ethyl acetate (dropped 10 s after the perovskite precursor) applied at the same spin-coating frequency. The deposited Cs0.17FA0.83PbI1.8Br1.2 films were annealed for 10 min at 100 °C, which led to the development of a dark brown color. For the solar cells, a 10% suspension of SnO2 nanoparticles (Alfa Aesar) was deposited at 4000 rpm, followed by annealing in air at 170 °C for 20 min.
Next, the samples were introduced into a nitrogen glovebox for the deposition of all remaining layers. The perovskite film was applied in the same way as for the thin films described earlier. A solution of poly-triarylamine PTAA (Sigma Aldrich) (4 mg in chlorobenzene) was applied at 1000 rpm atop the perovskite layer. Next, 30 nm of VOx was deposited atop the PTAA layer over the whole sample surface using the physical vapor deposition (PVD) technique. Silver electrodes (100 nm) were evaporated in high vacuum (10^-6 mbar) through a shadow mask, defining the active area of each device as ~0.16 cm2. The prepared samples (thin films and solar cells) were packed in Al lamination foil inside an argon glovebox. The set of samples consisted of 6 thin films and 6 solar cells, which were exposed to 6, 10, and 21 kGy (2 samples per dose) with a 137Cs gamma-ray source (E = 662 keV) at a dose rate of 2.5 Gy/min. The exposed samples will be studied with a focused ion beam (FIB) on a dual-beam scanning electron microscope from ThermoFisher, the Helios G4 Plasma FIB UXe, operating with a xenon plasma.
Keywords: perovskite solar cells, transmission electron microscopy, radiation hardness, gamma irradiation
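As a quick plausibility check on the irradiation protocol (a back-of-the-envelope sketch, not part of the authors' methods), the stated dose rate of 2.5 Gy/min implies multi-day exposures for the kGy-level target doses:

```python
def exposure_time_hours(dose_kGy, rate_Gy_per_min=2.5):
    """Hours required to accumulate a total dose at a constant dose rate."""
    return dose_kGy * 1000.0 / rate_Gy_per_min / 60.0

for dose in (6, 10, 21):
    print(f"{dose} kGy at 2.5 Gy/min -> {exposure_time_hours(dose):.1f} h")
# 6 kGy -> 40.0 h, 10 kGy -> 66.7 h, 21 kGy -> 140.0 h
```

That is, the highest dose corresponds to nearly six days of continuous exposure at this dose rate.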
Procedia PDF Downloads 24
45 We Have Never Seen a Dermatologist. Prisons Telederma Project Reaching the Unreachable Through Teledermatology
Authors: Innocent Atuhe, Babra Nalwadda, Grace Mulyowa, Annabella Habinka Ejiri
Abstract:
Background: Atopic Dermatitis (AD) is one of the most prevalent and fastest-growing chronic inflammatory skin diseases in African prisons. AD care is limited in Africa due to a lack of information about the disease amongst primary care workers, limited access to dermatologists, lack of proper training of healthcare workers, and a shortage of appropriate treatments. We designed and implemented the Prisons Telederma project based on the recommendations of the International Society of Atopic Dermatitis. We aimed to: i) increase awareness and understanding of teledermatology among prison health workers; and ii) improve treatment outcomes of prisoners with atopic dermatitis through increased access to and utilization of consultant dermatologists via teledermatology in Uganda prisons. Approach: We used store-and-forward teledermatology (SAF-TD) to increase access to dermatologist-led care for prisoners and prison staff with AD. We conducted five days of training for prison health workers using an adapted WHO training guide on recognizing neglected tropical diseases through changes on the skin, together with an adapted American Academy of Dermatology (AAD) Childhood AD Basic Dermatology Curriculum designed to help trainees develop a clinical approach to the evaluation and initial management of patients with AD. This training was followed by blended e-learning: webinars facilitated by consultant dermatologists with local knowledge of medication and local practices, apps adjusted for pigmented skin, WhatsApp group discussions, and sharing of pigmented-skin AD pictures and treatment via Zoom meetings. We hired a team of Ugandan senior consultant dermatologists to draft an iconographic atlas of the main dermatoses in pigmented African skin and shared this atlas with prison health staff for use as a job aid.
We had planned to use the MySkinSelfie mobile phone application to take and share skin pictures of prisoners with AD with consultant dermatologists, who would review the pictures and prescribe appropriate treatment. Unfortunately, the National Health Service withdrew the app from the market due to technical issues. We monitored and evaluated treatment outcomes using the Patient-Oriented Eczema Measure (POEM) tool. We held four advocacy meetings to persuade relevant stakeholders to increase the supply and availability of first-line AD treatments, such as emollients, in prison health facilities. Results: We produced the very first iconographic atlas of the main dermatoses in pigmented African skin. We increased: i) the proportion of prison health staff with adequate knowledge of AD and teledermatology from 20% to 80%; ii) the proportion of prisoners with AD reporting improvement in disease severity (POEM scores) from 25% to 35% in one year; iii) the proportion of prisoners with AD seen by a consultant dermatologist through teledermatology from 0% to 20% in one year; and iv) the availability of recommended AD treatments in prison health facilities from 5% to 10% in one year. Our study contributes to the use, evaluation, and verification of teledermatology to increase access to specialist dermatology services in the most hard-to-reach areas and for vulnerable populations such as prisoners.
Keywords: teledermatology, prisoners, reaching, un-reachable
Procedia PDF Downloads 101
44 Barriers to Business Model Innovation in the Agri-Food Industry
Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm Björklund, Maya Hoveskog, Per-Ola Ulvenblad
Abstract:
The importance of business model innovation (BMI) is widely recognized. This is also valid for firms in the agri-food industry, which is closely connected to global challenges: worldwide food production will have to increase by 70% by 2050, and the United Nations' sustainable development goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry have opportunities to increase their competitive advantage through BMI. However, the process of BMI is complex, and the implementation of new business models is associated with a high degree of risk and failure. Thus, managers from all industries, as well as scholars, need to better understand how to address this complexity. Therefore, the research presented in this paper (i) explores different categories of barriers in the research literature on business models in the agri-food industry, and (ii) illustrates these categories of barriers with empirical cases. This study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of 'BM' or 'BMI' with agriculture-related and food-related terms (e.g., 'agri-food sector') published in the period 1990-2014. The study classifies the barriers into several categories and illustrates the identified barriers with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action: what the public mind considers a barrier can in reality be very different from the actual barrier that needs to be challenged. One way of addressing barriers to growth is to define barriers according to their origin (internal/external) and nature (tangible/intangible).
The framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industrial or national levels (external, addressing environmental conditions). Tangible barriers can include asset shortages in the area of equipment or facilities, while human resource deficiencies or negative attitudes towards growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI, which has identified human-factor barriers (individuals' attitudes, histories, etc.); contextual barriers related to company and industry settings; and more abstract barriers (government regulations, value chain position, and weather). However, human-factor barriers (and opportunities) related to family-owned businesses that hold idealistic values and attitudes and own the real estate where the business is situated are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of the barriers to BMI and by illustrating them with empirical cases. We argue that internal barriers such as human-factor barriers, values, and attitudes are crucial to overcome in order to develop BMI; however, they can be as hard to overcome as, for example, institutional barriers such as government regulations. Implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-industry firms.
Keywords: agri-food, barriers, business model, innovation
Procedia PDF Downloads 232
43 Evaluation of Microstructure, Mechanical and Abrasive Wear Response of in situ TiC Particles Reinforced Zinc Aluminum Matrix Alloy Composites
Authors: Mohammad M. Khan, Pankaj Agarwal
Abstract:
The present investigation deals with the microstructure, mechanical properties, and detailed wear characteristics of in situ TiC particle reinforced zinc-aluminum-based metal matrix composites. The composites were synthesized by the liquid metallurgy route using the vortex technique. The composite was found to be harder than the matrix alloy due to the high hardness of the dispersoid particles therein. The composite was, however, lower in ultimate tensile strength and ductility than the matrix alloy, which could be attributed to the use of a coarser dispersoid and a larger interparticle spacing. A reasonably uniform distribution of the dispersoid phase in the alloy matrix and good interfacial bonding between the dispersoid and the matrix were observed. The composite exhibited a predominantly brittle mode of fracture with microcracking in the dispersoid phase, indicating effective transfer of load from the matrix to the dispersoid particles. To study the wear behavior of the samples, three different types of tests were performed, namely: (i) sliding wear tests using a pin-on-disc machine under dry conditions; (ii) high-stress (two-body) abrasive wear tests using different combinations of abrasive media and specimen surfaces under conditions of varying abrasive size, traversal distance, and load; and (iii) low-stress (three-body) abrasion tests using a rubber wheel abrasion tester at various loads and traversal distances using different abrasive media. In the sliding wear test, significantly lower wear rates were observed for the base alloy than for the composites. This has been attributed to the poor room-temperature strength resulting from the increased microcracking tendency of the composite relative to the matrix alloy. Wear surfaces of the composite revealed the presence of fragmented dispersoid particles and microcracking, whereas the wear surface of the matrix alloy was observed to be smooth with shallow grooves.
During high-stress abrasion, the presence of the reinforcement offered increased resistance to the destructive action of the abrasive particles. The microcracking tendency was also enhanced because of the reinforcement in the matrix, but its negative effect was outweighed by the abrasion resistance of the dispersoid. As a result, the composite attained better wear resistance than the matrix alloy. The wear rate increased with load and abrasive size due to the larger depth of cut made by the abrasive medium. The wear surfaces revealed fine grooves and damaged reinforcement particles, while the subsurface regions revealed limited plastic deformation along with microcracking and fracturing of the dispersoid phase. During low-stress abrasion, the composite experienced a significantly lower wear rate than the matrix alloy irrespective of the test conditions. This could be attributed to the wear resistance offered by the hard dispersoid phase, which protects the softer matrix against the destructive action of the abrasive medium. Abraded surfaces of the composite showed protrusion of the dispersoid phase. The subsurface regions of the composites exhibited decohesion of the dispersoid phase along with its microcracking and limited plastic deformation in the vicinity of the abraded surfaces.
Keywords: abrasive wear, liquid metallurgy, metal matrix composite, SEM
Procedia PDF Downloads 150
42 A Rural Journey of Integrating Interprofessional Education to Foster Trust
Authors: Julia Wimmers Klick
Abstract:
Interprofessional Education (IPE) is widely recognized as a valuable approach in healthcare education, despite the challenges it presents. This study explores interprofessional (IP) surface anatomy lab sessions, with a focus on fostering trust and collaboration among healthcare students. The research is conducted within the context of rural healthcare settings in British Columbia (BC), where a medical school and a physical therapy (PT) program operate under the Faculty of Medicine at the University of British Columbia (UBC). While IPE sessions addressing soft skills have been implemented, the integration of hard skills, such as anatomy, remains limited. To address this gap, a pilot feasibility study was conducted with a positive outcome; a follow-up study involved IPE sessions aimed at exploring the influence of bonding and trust between medical and PT students. Data were collected through focus groups comprising participating students and faculty members, and a structured SWOC (Strengths, Weaknesses, Opportunities, and Challenges) analysis was conducted. The IPE sessions, three in total, each consisted of a 2.5-hour lab on surface anatomy, in which PT students took on the teaching role and medical students were newly exposed to surface anatomy. The focus of the study was on the relationship-building process and trust development between the two student groups, rather than on assessing the acquisition of surface anatomy skills. Results indicated that the surface anatomy lab served as a suitable tool for the application and learning of soft skills. Faculty members observed positive outcomes, including productive interaction between students, a reversed hierarchy with PT students teaching medical students, practice of active listening skills, and use of the mutual language of anatomy. Notably, there was no grade assessment or external pressure to perform.
The students also reported an overall positive experience; however, the specific impact on the development of soft skill competencies could not be definitively determined. Participants expressed a sense of feeling respected, welcomed, and included, all of which contributed to feeling safe. Within the small-group environment, students experienced becoming part of a community of healthcare providers that bonded over a shared interest in health professions education. They enjoyed sharing diverse experiences related to learning across their varied contexts, without the fear of judgment and reprisal that is often intimidating in single-profession contexts. During a joint Christmas party for both cohorts, faculty members observed students mingling, laughing, and forming bonds. This emphasized the importance of early bonding and trust development among healthcare colleagues, particularly in rural settings. In conclusion, the findings emphasize the potential of IPE sessions to enhance trust and collaboration among healthcare students, with implications for their future professional lives in rural settings. Early bonding and trust development are crucial in rural settings, where healthcare professionals often rely on each other. Future research should continue to explore the impact of content-concentrated IPE on the development of soft skill competencies.
Keywords: interprofessional education, rural healthcare settings, trust, surface anatomy
Procedia PDF Downloads 69
41 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is an imaging modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to their wide availability, mammograms can be used to measure breast density and to predict cancer development: women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have sought to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluation through BIRADS (Breast Imaging Reporting and Data System) assessment; however, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. Pre-processing is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of the images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle of mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform is applied to find the boundary of the pectoral muscle, followed by an active contour method whose seed is placed at the pectoral muscle boundary found by the Hough transform.
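The Hough voting step used to locate the (approximately linear) pectoral muscle boundary can be sketched as follows. This is a dependency-free illustration on synthetic edge points, not the authors' Matlab® implementation; the line parameters and image are assumptions for the example.

```python
import math

def hough_line(points, n_theta=180):
    """Vote in (theta, rho) space for each edge point; the most-voted bin
    gives the dominant straight line x*cos(theta) + y*sin(theta) = rho."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, rho_best, votes

# synthetic boundary: a 45-degree line of edge pixels satisfying x + y = 20
edge_points = [(x, 20 - x) for x in range(5, 16)]
theta, rho, votes = hough_line(edge_points)
```

Every edge point votes for the winning bin, and the recovered (theta, rho) reproduces each point to within the rounding of rho; in the pipeline above, this detected line is what seeds the active contour.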
An experienced radiologist also manually performed the pectoral muscle segmentation. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of segmentation by the proposed method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle; the data fell within the 95% confidence interval, confirming the accuracy of the segmentation relative to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle
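The two comparison statistics reported above can be sketched in a few lines; the pixel-set mask representation and the sample numbers below are illustrative assumptions, not the study's data.

```python
def jaccard(mask_a, mask_b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| of two binary masks,
    each given as a set of (row, col) pixel coordinates."""
    union = mask_a | mask_b
    return len(mask_a & mask_b) / len(union) if union else 1.0

def bland_altman(meas_a, meas_b):
    """Bias (mean difference) and 95% limits of agreement between
    paired measurements, e.g. segmented areas in mm^2."""
    diffs = [a - b for a, b in zip(meas_a, meas_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# illustrative example: two nearly identical masks and three paired areas
auto = {(r, c) for r in range(10) for c in range(10)}   # 100-pixel mask
manual = {(r, c) for r in range(10) for c in range(9)}  # 90-pixel mask
j = jaccard(auto, manual)                               # 90/100 = 0.9
bias, (lo, hi) = bland_altman([100.0, 102.0, 98.0], [99.0, 101.0, 100.0])
```

The study's criterion corresponds to requiring j > 0.9 for every image and each paired area difference falling inside the [lo, hi] limits of agreement.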
Procedia PDF Downloads 350
40 Masstige and the New Luxury: An Exploratory Study on Cosmetic Brands Among Black African Woman
Authors: Melanie Girdharilall, Anjli Himraj, Shivan Bhagwandin, Marike Venter De Villiers
Abstract:
The allure of luxury has long been attractive, fashionable, mystifying, and complex. As globalisation and the popularity of social media continue to evolve, consumers are seeking status products. However, in emerging economies like South Africa, where 60% of the country lives in poverty, this desire is often far-fetched and out of reach for most consumers. As a result, luxury brands are introducing masstige products: products that are associated with luxury and status but are within financial reach of the middle-class consumer. The biggest challenge this industry faces is the lack of knowledge and expertise on black females' hair composition and in offering products that meet their intricate requirements. African consumers have unique hair types, and global brands often do not accommodate the complex nature of their hair and their product needs. By gaining insight into this phenomenon, global cosmetic brands can benefit from brand expansion, product extensions, increased brand awareness, brand knowledge, and brand equity. The purpose of this study is to determine how cosmetic brands can leverage the concept of masstige products to cater to the needs of middle-income black African women. This study explores the 18- to 35-year-old black female cohort, which comprises approximately 17% of the South African population. The black hair care industry in Africa is expected to grow at 6% over the next five years. The study is grounded in Paul's (2019) 3-phase model for masstige marketing. This model demonstrates that product, promotion, and place strategies play a significant role in masstige value creation, as well as the impact of these strategies on the branding dimensions (brand trust, brand association, brand positioning, brand preference, etc.). More specifically, this theoretical framework encompasses nine stages, or dimensions, that are of critical importance to companies that plan to enter the masstige market.
In short, the most critical components to consider are, first, the positioning of the product and its competitive advantage in comparison to competitors; second, advertising appeals and the use of celebrities; and lastly, distribution channels, such as online or in-store, while maintaining the exclusivity of the brand. By means of an exploratory study, a qualitative approach was undertaken, and focus groups were conducted among black African women. The focus groups were voice recorded, transcribed, and analysed using Atlas software. The main themes were identified and used to provide brands with insight and direction for developing a comprehensive marketing mix for effectively entering the masstige market. The findings of this study will provide marketing practitioners with in-depth insight into how to effectively position masstige brands in line with consumer needs. It will give direction to both existing and new brands aiming to enter this market by providing a comprehensive marketing mix for targeting the growing black hair care industry in Africa.
Keywords: africa, masstige, cosmetics, hair care, black females
Procedia PDF Downloads 85
39 Ordered Mesoporous Carbons of Different Morphology for Loading and Controlled Release of Active Pharmaceutical Ingredients
Authors: Aleksander Ejsmont, Aleksandra Galarda, Joanna Goscianska
Abstract:
Smart porous carriers with a defined structure and physicochemical properties are required for releasing a therapeutic drug with precise control of delivery time and location in the body. Due to their non-toxicity, ordered structure, and chemical and thermal stability, mesoporous carbons can be considered modern carriers for active pharmaceutical ingredients (APIs) whose effectiveness requires frequent dosing regimens. Such an API-carrier system, if programmed precisely, may stabilize the pharmaceutical and increase its dissolution, leading to enhanced bioavailability. The substance conjugated with the material through prior adsorption can later be successfully applied internally to the organism, as well as externally if API release is feasible under those conditions. In the present study, ordered mesoporous carbons of different morphologies and structures, prepared by the hard-template method, were applied as carriers in the adsorption and controlled release of active pharmaceutical ingredients. In the first stage, the carbon materials were synthesized and functionalized with carboxylic groups by chemical oxidation using ammonium persulfate solution, and then with amine groups. The materials obtained were thoroughly characterized with respect to morphology (scanning electron microscopy), structure (X-ray diffraction, transmission electron microscopy), characteristic functional groups (FT-IR spectroscopy), acid-base nature of surface groups (Boehm titration), parameters of the porous structure (low-temperature nitrogen adsorption), and thermal stability (TG analysis). This was followed by a series of tests of the adsorption and release of paracetamol, benzocaine, and losartan potassium. Drug release experiments were performed in simulated gastric fluid of pH 1.2 and phosphate buffer of pH 7.2 or 6.8 at 37.0 °C.
The XRD patterns in the small-angle range and TEM images revealed that functionalization of mesoporous carbons with carboxylic or amine groups decreases the ordering of their structure. Moreover, the modification caused a considerable reduction of the carbons' specific surface area and pore volume, but it simultaneously changed their acid-base properties. The mesoporous carbon materials exhibit different morphologies, which affect the host-guest interactions during the adsorption of active pharmaceutical ingredients. All of the mesoporous carbons show high adsorption capacity towards the drugs. The sorption capacity of the materials is governed mainly by the BET surface area and by the structure/size matching between adsorbent and adsorbate. The selected APIs are bound to the surface of the carbon materials mainly by hydrogen bonds, van der Waals forces, and electrostatic interactions. The release behavior of an API is highly dependent on the physicochemical properties of the mesoporous carbons; the release rate can be regulated by the introduction of functional groups and by changing the pH of the receptor medium. Acknowledgments: This research was supported by the National Science Centre, Poland (project SONATA-12 no. 2016/23/D/NZ7/01347).
Keywords: ordered mesoporous carbons, sorption capacity, drug delivery, carbon nanocarriers
Procedia PDF Downloads 17638 Development of a Conceptual Framework for Supply Chain Management Strategies Maximizing Resilience in Volatile Business Environments: A Case of Ventilator Challenge UK
Authors: Elena Selezneva
Abstract:
Over the last two decades, unprecedented growth in uncertainty and volatility in all aspects of the business environment has caused major global supply chain disruptions and malfunctions. The effects of one failed company in a supply chain can ripple up and down the chain, causing a number of entities, or the entire supply chain, to collapse. The complicating factor is that an increasingly unstable and unpredictable business environment fuels the growing complexity of global supply chain networks, which makes supply chain operations extremely unpredictable and hard to manage with established methods and strategies. This has caused the premature demise of many companies around the globe that could not withstand or adapt to the storm of change. Solutions to this problem are not easy to come by: there is a lack of new empirically tested theories and of practically viable supply chain resilience strategies. The mainstream organizational approach to managing supply chain resilience is rooted in well-established theories developed in the 1960s-1980s, but their effectiveness is questionable in today's extremely volatile business environments. The systems thinking approach offers an alternative view of supply chain resilience, although it is still very much in the development stage. The aim of this explorative research is to investigate supply chain management strategies that succeed in taming complexity in volatile business environments and in creating resilience in supply chains. The design of the research methodology was guided by an interpretivist paradigm, and a literature review informed the selection of the systems thinking approach to supply chain resilience. Ventilator Challenge UK, an intensive care ventilator supply project for the NHS, was selected as a single explorative case study because of the extremely resilient performance of its supply chain during a period of national crisis.
The project ran for 3.5 months and finished in 2020. The participants have since moved on with their lives, and most are no longer employed by the same organizations. The study data therefore includes documents, historical interviews, live interviews with participants, and social media postings. The data analysis was accomplished in two stages: first, the data were thematically analyzed; second, pattern matching and pattern identification were used to identify the themes that form the findings of the research. The findings from the Ventilator Challenge UK case study show that its supply management practices demonstrated all the features of an adaptive dynamic system. They cover all the elements of the supply chain and employ an entire arsenal of adaptive dynamic system strategies enabling supply chain resilience. Moreover, the system is not a simple sum of parts and strategies: the bonding elements and connections between the components of the supply chain and its environment enabled the amplification of resilience in the form of systemic emergence. The enablers are categorized into three subsystems: supply chain central strategy, supply chain operations, and supply chain communications. Together, these subsystems and their interconnections form the resilient supply chain system framework conceptualized by the author.
Keywords: enablers of supply chain resilience, supply chain resilience strategies, systemic approach in supply chain management, resilient supply chain system framework, Ventilator Challenge UK
Procedia PDF Downloads 81