Search results for: optimization of operation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5601

171 Improving the Budget Distribution Procedure to Ensure Smooth and Efficient Public Service Delivery

Authors: Rizwana Tabassum

Abstract:

Introductory Statement: Delay in budget releases is often cited as one of the biggest bottlenecks to smooth and efficient service delivery. While budget release from the Ministry of Finance to the line ministries has been expedited by simplifying the procedure, budget distribution within the line ministries remains one of the major causes of slow budget utilization. While budget preparation is a bottom-up process in which all DDOs submit their proposals to their controlling officers (for example, the Upazila Civil Surgeon submits to the Director General of Health), who consolidate the budget proposals in the iBAS++ budget preparation module, the approved budget is not disaggregated by DDO. Instead, it is left to the discretion of the controlling officers to distribute the approved budget to their subordinate offices over the course of the year. Though there are some need-based criteria/formulae for distributing the approved budget among DDOs in some sectors, there is little evidence that these criteria are actually used. This means that the majority of DDOs do not know their yearly allocations upfront, which prevents yearly planning of activities and expenditures. This delays the implementation of critical activities and the payment of suppliers of goods and services and sometimes leads to undocumented arrears to suppliers of essential goods/services. In addition, social sector budgets are fragmented because of vertical programs and externally financed interventions, which pose several management challenges for budget holders and frontline service providers. Slow procurement processes further delay the provision of necessary goods and services. For example, it takes an average of 15–18 months for drugs to reach the Upazila Health Complex and below, while procurement and distribution should take no more than 9 months. Aim of the Study: This paper investigates the budget distribution practices of an emerging economy, Bangladesh. It identifies the challenges of timely distribution as well as ways to address them. Methodology: The study draws conclusions on the basis of document analysis, a qualitative research method. Major Findings: Upon approval of the National Budget, the Ministry of Finance is required to distribute the budget to budget holders at the department level; however, the budget is distributed to drawing and disbursing officers much later. Conclusions: Timely and predictable budget releases help complete development schemes on time and on budget, with sufficient recurrent resources for effective operation. ADP implementation is usually very low at the beginning of the fiscal year and accelerates dramatically during the last few months, leading to inefficient use of resources. Timely budget release will resolve this issue and deliver economic benefits faster, better, and more reliably. It will also give project directors/DDOs the freedom to plan budget execution in a predictable manner, thereby ensuring value for money by reducing time overruns, expediting the completion of capital investments, and improving infrastructure utilization through timely payment of recurrent costs.

Keywords: budget distribution, challenges, digitization, emerging economy, service delivery

Procedia PDF Downloads 53
170 A Geoprocessing Tool for Early Civil Work Notification to Optimize Fiber Optic Cable Installation Cost

Authors: Hussain Adnan Alsalman, Khalid Alhajri, Humoud Alrashidi, Abdulkareem Almakrami, Badie Alguwaisem, Said Alshahrani, Abdullah Alrowaished

Abstract:

Most of the cost of installing a new fiber optic cable is attributed to the civil work (trenching) cost. In many cases, information technology departments receive project proposals in their eReview system, but not all projects are visible to everyone. Additionally, if a proposed project has no IT scope, it is unlikely to be visible to IT, and it is sometimes too late to add IT scope after project budgets have been finalized. Finally, the eReview system is a repository of PDF files for each project, which commits the reviewer to manual work and limits automation potential. This paper details a solution to the late notification problem of the eReview system by integrating IT sites GIS data (site locations) with land use permit (LUP) data (civil work activity); an LUP request is the first step in securing the required land usage authorizations, which means no detailed design exists for any relevant project before an approved LUP request. To address the manual nature of the eReview system, both the LUP system and the IT data are maintained in ArcGIS Desktop, which enables the creation of a geoprocessing tool with either Python or Model Builder to automatically find and evaluate potentially usable LUP requests to reduce trenching between two sites in need of a new FOC. To achieve this, a weekly dump was taken from the LUP system production data and loaded manually into ArcMap Desktop. A custom tool was then developed in Model Builder around a two-column table containing all pairs of sites in need of new fiber connectivity. The tool iterates over the rows of this table, taking one site pair at a time and finding potential LUPs between them that satisfy the provided search radius. If a group of LUPs is found, an iterator goes through each LUP to determine the civil work required between the two sites, the LUP polyline feature, and the distance along the line, which is counted as cost avoidance if an IT scope is added. Finally, the tool exports an Excel file named after the site pair, containing one row per LUP that met the search radius, with trenching and pulling information and cost. As a result, multiple projects have been identified: historical, missed-opportunity, and proposed projects. For one proposed project, the savings were about 75% ($750,000) relative to installing a new fiber along the Euclidean distance between the Abqaiq GOSP2 and GOSP3 DCOs. In conclusion, the current tool setup identifies opportunities to bundle civil work one project at a time and between two sites. More work is needed to allow the bundling of multiple projects between two sites to achieve even more cost avoidance in both capital cost and carbon footprint.
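
A minimal, plain-Python sketch of the pairwise search logic described above is given below. It is not the authors' ArcGIS Model Builder tool: the coordinates, search radius, and trenching cost rate are illustrative assumptions, and the simple 2D geometry stands in for the GIS selection-by-location and length-measurement steps.

```python
# Illustrative sketch (plain Python, not the ArcGIS Model Builder tool) of the
# site-pair / LUP iteration logic: for every pair of sites that needs new fiber, find LUP
# civil-work polylines lying within a search radius of the straight path between the sites
# and report the trench length that could be reused as cost avoidance.
import math

SEARCH_RADIUS_M = 500.0      # assumed search radius (illustrative)
TRENCH_COST_PER_M = 100.0    # assumed trenching cost per metre (illustrative)

def dist_point_to_segment(p, a, b):
    """Shortest distance from point p to segment a-b (2D coordinates in metres)."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def polyline_length(points):
    return sum(math.hypot(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(points, points[1:]))

def evaluate_pair(site_a, site_b, lups):
    """Return (lup_id, trench_m, cost_avoidance) for LUPs near the path between two sites."""
    results = []
    for lup_id, points in lups.items():
        # keep the LUP if every vertex of its polyline lies within the search radius of the path
        if all(dist_point_to_segment(p, site_a, site_b) <= SEARCH_RADIUS_M for p in points):
            trench_m = polyline_length(points)
            results.append((lup_id, trench_m, trench_m * TRENCH_COST_PER_M))
    return results

# Toy usage: one site pair and one LUP running roughly along the path between them.
lups = {"LUP-001": [(0, 50), (400, 60), (900, 40)]}
print(evaluate_pair((0, 0), (1000, 0), lups))
```

In the actual tool, the same iteration would run over LUP polyline features selected from the weekly LUP data dump, and the per-pair results would be exported to an Excel file rather than printed.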

Keywords: GIS, fiber optic cable installation optimization, eliminate redundant civil work, reduce carbon footprint for fiber optic cable installation

Procedia PDF Downloads 196
169 The Use of Flipped Classroom as a Teaching Method in a Professional Master's Program in Network, in Brazil

Authors: Carla Teixeira, Diana Azevedo, Jonatas Bessa, Maria Guilam

Abstract:

The flipped classroom is a blended learning modality that combines face-to-face and virtual self-learning activities, mediated by digital information and communication technologies, which reverses traditional teaching approaches and presupposes the prior study of content by students. In the subsequent face-to-face activities, the contents are discussed, producing active learning. This work describes the systematization of the flipped classroom as a method to develop complementary national activities in PROFSAÚDE, a professional master's program in public health offered as a distance learning course by a network of institutions in Brazil. The complementary national activities were organized with the objective of strengthening and qualifying the students' learning process. The network gathers twenty-two public institutions of higher education in the country. Its national coordination conducted a survey to detect complementary educational needs, intended to improve the formative process and align important content for the program nationally. The activities were organized both asynchronously, making study materials available in Google Classroom, and synchronously, through live online sessions organized on virtual platforms to reach the largest number of students in the country. The asynchronous activities allowed each student to study at their own pace, and the synchronous activities were intended for deepening and reflecting on the themes. The national team identified professors with relevant areas of expertise, who were contacted to produce audiovisual content such as video classes and podcasts, to provide guidance on supporting bibliographic materials, and to conduct synchronous activities together with the technical team. The contents posted in the virtual classroom were organized by modules and made available before the synchronous meeting; these modules, in turn, contain “pills of experience” that correspond to reports of teachers' experiences in relation to the different themes. In addition, an activity was proposed, with questions aimed at surfacing doubts about the content, together with a learning challenge as a practical exercise. Synchronous activities are built with different invited teachers, based on the participants' discussions, and are the forum where teachers can answer students' questions, providing feedback on the learning process. At the end of each complementary activity, an evaluation questionnaire is made available. Analysis of the responses shows that this institutional network experience, as a pedagogical innovation, provides important tools to support teaching and research due to its potential for the participatory construction of learning, the optimization of resources, the democratization of knowledge, and the sharing and strengthening of practical experiences across the network. One of its relevant aspects was the thematic diversity addressed through this method.

Keywords: active learning, flipped classroom, network education experience, pedagogic innovation

Procedia PDF Downloads 136
168 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction

Authors: B. Godefroy, D. Beladjine, K. Beddiar

Abstract:

In France, construction BIM actors commonly evoke the gains BIM offers for building operation by integrating the life cycle of a building. Standardization at level 7 of development would achieve this stage of the digital model. The building owners include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing stock. In a context of financial constraint, BIM for exploitation aims to control costs, make long-term investment choices, renew the portfolio, and enable environmental standards to be met. It assumes knowledge of the existing buildings, marked by their size and complexity. The information sought must be synthetic and structured; in general, it concerns a real estate complex. We conducted a study with professionals about their concerns and ways of using BIM, to see how owners could benefit from this development. To obtain results, keeping in mind the recurring questions of project management regarding the needs of operators, we tested the following stages: 1) instill a minimal BIM culture in the operator's multidisciplinary teams and then in each business line; 2) learn, through BIM tools, how each trade adapts to operations; 3) understand the place and creation of a graphic and technical database management system and determine the components of its library according to their needs; 4) identify the cross-functional interventions of its managers by business line (operations, technical, information systems, purchasing, and legal); 5) set an internal protocol and define the impact of BIM on their digital strategy. In addition, continuity of management through the integration of construction models in the operation phase raises the question of interoperability: the control of the production of IFC files in the operator's proprietary format and the export and import processes, a solution rivaled by the traditional method of vectorization of paper plans. Companies that digitize housing complexes, and those in facilities management, produce IFC files directly, according to their needs and without recourse to the construction model; they produce business models for exploitation. They standardize the components and equipment that are useful for coding. We observed the consequences of the use of BIM in the property industry and made the following observations: a) the value of data prevails over graphics, and 3D is little used; b) the owner must, through his organization, promote the feedback of technical management information during the design phase; c) the operator's reflection on outsourcing concerns the acquisition of its information system and related services, weighing the risks and costs of internal versus external development. This study allows us to highlight: i) the need for an internal organization of operators prior to responding to construction management; ii) the evolution towards automated methods for creating models dedicated to exploitation, for which a specialization would be required; iii) a review of communication with project management; since management continuity does not revolve around the building model alone, it must take into account the environment of the operator and reflect on its scope of action.

Keywords: information system, interoperability, models for exploitation, property industry

Procedia PDF Downloads 122
167 Techno Economic Analysis of CAES Systems Integrated into Gas-Steam Combined Plants

Authors: Coriolano Salvini

Abstract:

The increasing utilization of renewable energy sources for electric power production calls for the introduction of energy storage systems to match the electric demand over time. Although many countries are pursuing a “decarbonized” electrical system as their final goal, in the coming decades traditional fossil-fuel-fired power plants will still play a relevant role in meeting electric demand. Presently, such plants provide grid ancillary services (frequency control, grid balance, reserve, etc.) by adapting their output power to grid requirements. An interesting option is represented by the possibility of using traditional plants to improve the grid's storage capabilities. The present paper addresses small- to medium-size systems suited for distributed energy storage. The proposed Energy Storage System (ESS) is based on a Compressed Air Energy Storage (CAES) system integrated into a Gas-Steam Combined Cycle (GSCC) or a Gas Turbine based CHP plant. The system can be incorporated in a newly built plant or added to an existing one. To avoid any geological restriction related to the availability of natural compressed air reservoirs, artificial storage is addressed. During the charging phase, electric power is absorbed from the grid by an electrically driven intercooled/aftercooled compressor. During the discharge phase, the stored compressed air is sent to a heat transfer device fed by hot gas taken upstream of the Heat Recovery Steam Generator (HRSG) and subsequently expanded for power production. To maximize the output power, a staged reheated expansion process is adopted. The specific power production per kilogram per second of exhaust gas used to heat the stored air is two to three times larger than that achieved if the gas were used to produce steam in the HRSG. As a result, a relevant power augmentation is attained with respect to normal GSCC plant operation without additional use of fuel. Therefore, the excess output power can be considered “fuel free”, and the storage system can be compared to “pure” ESSs such as electrochemical, pumped hydro, or adiabatic CAES systems. Representative cases featuring different power absorption, production capability, and storage capacity have been taken into consideration. For each case, a technical optimization aimed at maximizing the storage efficiency has been carried out. On the basis of the resulting storage pressure and volume, number of compression and expansion stages, air heater arrangement, and process quantities found for each case, a cost estimation of the storage systems has been performed. Storage efficiencies from 0.6 to 0.7 have been assessed. Capital costs in the range of 400-800 €/kW and 500-1000 €/kWh have been estimated. Such figures are similar to or lower than those of alternative storage technologies.
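
As a back-of-the-envelope illustration of the storage-efficiency and specific-cost figures quoted above, the short sketch below computes a round-trip efficiency and per-kW / per-kWh capital costs from assumed charge/discharge powers, durations, and plant cost; all input numbers are illustrative placeholders rather than values taken from the study.

```python
# Illustrative back-of-the-envelope CAES sizing/efficiency calculation.
# All input numbers are assumed placeholders chosen only to show the arithmetic,
# not data from the paper.

charge_power_kw = 2000.0        # electric power absorbed by the compressor during charging
charge_hours = 3.0              # duration of the charging phase
discharge_power_kw = 4000.0     # extra ("fuel free") electric power produced during discharge
discharge_hours = 1.0           # duration of the discharge phase
plant_capital_cost_eur = 2.4e6  # assumed capital cost of the storage add-on

energy_in_kwh = charge_power_kw * charge_hours          # electricity taken from the grid
energy_out_kwh = discharge_power_kw * discharge_hours   # electricity returned during discharge

storage_efficiency = energy_out_kwh / energy_in_kwh
cost_per_kw = plant_capital_cost_eur / discharge_power_kw
cost_per_kwh = plant_capital_cost_eur / energy_out_kwh

print(f"round-trip storage efficiency: {storage_efficiency:.2f}")          # ~0.67
print(f"specific cost: {cost_per_kw:.0f} EUR/kW, {cost_per_kwh:.0f} EUR/kWh")  # 600 / 600
```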

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), gas steam combined cycle (GSCC), techno-economic analysis

Procedia PDF Downloads 188
166 From Forked Tongues to Tinkerbell Ears: Rethinking the Criminalization of Alternative Body Modification in the UK

Authors: Luci V. Hyett

Abstract:

The criminal law of England and Wales currently deems that a person cannot consent to the infliction of injury upon their own body where the level of harm is considered to be Actual or Grievous. This renders the defence of consent of the victim unavailable to those persons carrying out an Alternative Body Modification procedure. However, the criminalization of consensual injury is more appropriately categorized as an offence against public morality and not one against the person, which renders the State's involvement in the autonomous choices of a consenting adult, when determining what can be done to one's own body, an arbitrary one. Furthermore, to recognise in law that a person is capable of giving a valid consent to socially acceptable cosmetic interventions, which largely consist of procedures designed to aesthetically please men, and not to those of people who want to modify their bodies for other reasons, means that patriarchal attitudes continue to underpin public repulsion and inhibit social acceptance of such practices. Theoretical analysis will begin with a juridical examination of R v M(B) [2019] QB 1, where the court determined that Alternative Body Modification was not a special category exempting a person so performing from liability for Grievous Bodily Harm using the defence of consent. It will draw from the court's reasoning, which considered that ‘the removal of body parts were medical procedures being carried out for no medical reason by someone not qualified to carry them out’, and this will form the basis of this enquiry. It will consider the philosophical work of Giorgio Agamben when analysing whether the biopolitical climate in the UK, which places the optimization of the perfect, healthy body at the centre of political concern, can explain why those persons who wish to engage in Alternative Body Modification are treated as the ‘Exception’ to that which is normal, using the ‘no medical reason’ canon to justify criminalisation rather than legitimising the industry through regulation. It will consider, through a feminist lens, the current conflict in law between traditional cosmetic interventions, which alter one's physical appearance for socially accepted aesthetic purposes, such as those to the breast, lip and buttock, and modifications described as more outlandish, such as earlobe stretching, tooth filing and transdermal implants to create horns and spikes under the skin. It will assert that ethical principles relating to the psychological impact of body modification described as ‘alternative’ are used as a means to exclude persons seeking such a procedure from receiving safe and competent treatment via a registered cosmetic surgeon, which leads to these increasingly popular surgeries being performed in tattoo parlours throughout the UK as an extension of other socially acceptable forms of self-modification such as piercings. It will contend that only by ‘inclusive exclusion’ will those ‘othered’ through ostracisation be welcomed into the fold of normality, and this can only be achieved through recognition of alternative body modification as a legitimate cosmetic intervention, subject to the same regulatory framework as existing practice. This would assist in refocusing the political landscape by erring on the side of liberty rather than that of biology.

Keywords: biopolitics, body modification, consent, criminal law

Procedia PDF Downloads 86
165 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) microscopy are that large areas of a few square centimeters can be measured in minutes and that the intense QCL source makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the 8700 laser direct infrared (LDIR) imaging system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and to provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable because, by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct “on-filter” analysis. This study explores the potential of combining the filter with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and the food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations in these techniques that increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation, and excellent S/N, LDIR could also be applied to various other samples, such as food adulteration, coatings, laminates, fabrics, textiles, and tissues. This presentation will highlight a few of these and focus on the benefits of LDIR versus classical techniques.

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 36
164 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use cloud services on a ‘pay per usage’ basis. This technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage. In this service, security is a vital factor both for authenticating legitimate users and for protecting information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness make user-behaviour-based biometrics more reliable than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here do not rely on a single trait but on multiple traits, viz., iris and fingerprints. The coordinating stage of the authentication system is an ensemble Support Vector Machine (SVM), optimized by assembling the weights of the base SVMs after each individual SVM of the ensemble has been trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template through a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the cipher text from being broken back into the original text. The proposed double cryptographic key scheme is thus capable of providing better user authentication and better security, distinguishing between genuine and fake users. There are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The extraction of feature and texture properties from the respective fingerprint and iris images is done initially. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, a double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been stolen. The results prove that the authentication process is optimal and the stored information is secured.
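
The sketch below illustrates, in simplified form, the “fuse two biometric feature vectors, derive a key, encrypt the record” idea underlying the third module. It is not the authors' AFSA-trained SVM ensemble or FNN-based double-key scheme: the feature vectors are random placeholders, and a plain SHA-256 hash with Fernet symmetric encryption stands in for the fuzzy-neural-network key generation.

```python
# Simplified illustration of fusing two biometric feature vectors, deriving a symmetric
# key, and encrypting a record. NOT the authors' AFSA/SVM/FNN pipeline; the feature
# vectors below are random placeholders for extracted iris and fingerprint features.
import base64
import hashlib
import numpy as np
from cryptography.fernet import Fernet   # pip install cryptography

rng = np.random.default_rng(seed=42)
iris_features = rng.random(128)          # stand-in for extracted iris texture features
fingerprint_features = rng.random(128)   # stand-in for extracted fingerprint features

# Feature-level fusion: concatenate, then binarize around the median to get a bit string.
fused = np.concatenate([iris_features, fingerprint_features])
bits = (fused > np.median(fused)).astype(np.uint8)

# Derive a 256-bit symmetric key from the fused biometric template.
digest = hashlib.sha256(bits.tobytes()).digest()
key = base64.urlsafe_b64encode(digest)   # Fernet expects a 32-byte urlsafe base64 key

# Encrypt and decrypt a record with the biometric-derived key.
cipher = Fernet(key)
token = cipher.encrypt(b"patient record 001: confidential payload")
print(cipher.decrypt(token))             # b'patient record 001: confidential payload'
```

A production biometric key scheme would additionally need an error-tolerant encoding (for example, a fuzzy extractor) so that small variations between enrolment and verification samples still yield the same key.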

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 230
163 Impact of Urban Migration on Caste: Rohinton Mistry's A Fine Balance and Rural-to-Urban Caste Migration in India

Authors: Mohua Dutta

Abstract:

The primary aim of this research paper is to investigate the forced urban migration of Dalits in India who are fleeing caste persecution in rural areas. This paper examines the relationship between caste and rural-to-urban internal migration in India using a literary text, Rohinton Mistry's A Fine Balance, highlighting the challenges faced by Dalits in rural areas that force them to migrate to urban areas. Despite the prevalence of such discussions in Dalit autobiographies written in vernacular languages, there is a lack of discussion regarding caste migration in Indian English Literature, including the present text, as evidenced by the existing critical interpretations of the novel, a gap which this paper seeks to rectify. The primary research question is how urban migration affects the caste system in India and why rural-to-urban caste migration occurs. The purpose of this paper is to better understand the reasons for Dalit migration, the challenges they face in rural and urban areas, and the lingering influence of caste in both rural and urban areas. The study reveals that the promise of mobility and emancipation provided by class operations drives rural-to-urban caste migration in India, but it also reveals that caste marginalization in rural areas is closely linked to class marginalization and other forms of subalternity in urban areas. Moreover, the caste system persists in urban areas as well, making Dalit migrants more vulnerable to social, political, and economic discrimination. The reason for this is that, despite changes in profession and urban migration, the trapped structure of caste capital and family networks exposes migrants to caste and class oppressions. To reach its conclusion, this study employs a variety of methodologies. Discourse analysis is used to investigate the current debates and narratives surrounding caste migration. Critical race theory, specifically intersectional theory and social constructivism, aids in comprehending the complexities of caste, class, and migration. Mistry's novel is subjected to textual analysis in order to identify and interpret references to caste migration. Secondary data, such as theoretical understanding of the caste system in operation and scholarly works on caste migration, are also used to support and strengthen the findings and arguments presented in the paper. The study concludes that rural-to-urban caste migration in India is primarily motivated by the promise of socioeconomic mobility and emancipation offered by urban spaces. However, the caste system persists in urban areas, resulting in the continued marginalisation and discrimination of Dalit migrants. The study also highlights the limitations of urban migration in providing true emancipation for Dalit migrants, as they remain trapped within caste and family network structures. Overall, the study raises awareness of the complexities surrounding caste migration and its impact on the lives of India's marginalised communities. This study contributes to the field of Migration Studies by shedding light on an often-overlooked issue: Dalit migration. It challenges existing literary critical interpretations by emphasising the significance of caste migration in Indian English Literature. The study also emphasises the interconnectedness of caste and class, broadening understanding of how these systems function in both rural and urban areas.

Keywords: rural-to-urban caste migration in India, internal migration in India, caste system in India, Dalit movement in India, rooster coop of caste and class, urban poor as subalterns

Procedia PDF Downloads 35
162 Digital Transformation in Fashion System Design: Tools and Opportunities

Authors: Margherita Tufarelli, Leonardo Giliberti, Elena Pucci

Abstract:

The fashion industry's interest in virtuality is linked, on the one hand, to the emotional and immersive possibilities of digital resources and the resulting languages and, on the other, to the greater efficiency that can be achieved throughout the value chain. The interaction between digital innovation and deep-rooted manufacturing traditions today translates into a paradigm shift for the entire fashion industry, where, for example, the traditional values of industrial secrecy and know-how give way to experimentation in an open and participatory way, and virtual reality is fully emancipated from actual 'reality'. The contribution investigates the theme of digitisation in the Italian fashion industry, analysing its opportunities and the criticalities that have hindered its diffusion. There are two reasons why the most common approach in the fashion sector is still analogue: (i) the fashion product lives in close contact with the human body, so the sensory perception of materials plays a central role in both the use and the design of the product, but current technology is not able to restore the sense of touch; (ii) volumes are obtained by stitching flat surfaces that, once assembled, given the flexibility of the material, can assume almost infinite configurations. Managing the fit and styling of virtual garments involves a wide range of factors, including mechanical simulation, collision detection, and user interface techniques for garment creation. After briefly reviewing some of the salient historical milestones in the resolution of problems related to the digital simulation of deformable materials and the user interfaces for constructing the clothing system, the paper describes the operation and possibilities offered today by the latest generation of specialised software: parametric avatars and a digital sartorial approach; drawing tools optimised for pattern making; materials, both in terms of simulated physical behaviour and aesthetic performance; tools for checking wearability; renderings; and tools and procedures useful to companies both for dialogue with prototyping software and machinery and for managing the archive and the variants to be made. The article demonstrates how developments in technology and digital procedures now make it possible to intervene at different stages of design in the fashion industry, in an integrated and additive process in which the constructed 3D models are usable both in the prototyping and communication of physical products and in exclusively digital uses of 3D models in the new generation of virtual spaces. Mastering such tools requires the acquisition of specific digital skills and, at the same time, traditional skills for the design of the clothing system, but the benefits are manifold and applicable to different business dimensions. We are only at the beginning of the global digital transformation: the emergence of new professional figures and design dynamics leaves room for imagination, but in addition to applying digital tools to traditional procedures, traditional fashion know-how needs to be transferred into emerging digital practices to ensure the continuity of the technical-cultural heritage beyond the transformation.

Keywords: digital fashion, digital technology and couture, digital fashion communication, 3D garment simulation

Procedia PDF Downloads 42
161 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking

Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz

Abstract:

Anatomy classes are crucial for the general education of medical students, but learning anatomy is quite challenging and requires the memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos, with the result that many important structures are forgotten after several years. More interactive teaching methods, such as virtual reality, augmented reality, gamification, and motion sensors, are becoming more popular, since such methods ease the way we learn and help retain the information for longer. In this study, we designed a virtual-reality-based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the degree of teaching and learning optimization it offers. The head is one of the most complicated human anatomical regions, with thousands of tiny, unique structures, which makes head anatomy one of the most difficult parts to understand during class sessions. Therefore, we developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to integrate the power of VR and gamification. The project has been developed in the Unity game engine with the HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and its marking in the 3D world. 3D model marking requires access to each unique structure in the aforementioned subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, which is why we worked on dividing such parts into subparts, a very time-consuming task. To subdivide monolithic parts, one must use an external modeling tool; however, such tools generally come with high learning curves, and a seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction; however, outer colliders that cover inner trigger colliders cause overlaps, and these colliders interfere with each other. The third option is raycasting; however, due to its view-based nature, raycasting has some inherent problems: as the model rotates, the view direction changes very frequently, and directional computations become even harder. This is why we finally adopted the local coordinate system. By taking the pivot point of the model into consideration (the back of the nose), each sub-structure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to the world position and checking its relation to the corresponding structure's local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
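
The pivot-relative lookup can be sketched in a few lines of plain Python (the actual project uses Unity and C#; the structure names, coordinates, and tolerance below are illustrative placeholders).

```python
# Plain-Python geometric sketch of the pivot-relative marking idea described above.
# Engine-specific details (Unity transforms, VR input) are omitted; names and
# coordinates are made-up placeholders.
import numpy as np

# Anatomical structures stored by their local coordinates relative to the model pivot
# (e.g., the back of the nose).
structures_local = {
    "zygomatic_bone_L": np.array([-0.042, 0.010, 0.015]),
    "zygomatic_bone_R": np.array([ 0.042, 0.010, 0.015]),
    "mandible":         np.array([ 0.000, -0.060, 0.020]),
}

def world_to_local(p_world, pivot_world, rotation, scale):
    """Invert the model transform: local = R^T @ ((world - pivot) / scale)."""
    return rotation.T @ ((p_world - pivot_world) / scale)

def mark_structure(click_world, pivot_world, rotation, scale, tolerance=0.01):
    """Return the structure whose stored local coordinate is nearest to the click, if close enough."""
    click_local = world_to_local(click_world, pivot_world, rotation, scale)
    name, dist = min(((n, np.linalg.norm(click_local - p)) for n, p in structures_local.items()),
                     key=lambda item: item[1])
    return name if dist <= tolerance else None

# Toy usage: identity rotation, model pivot at the origin, unit scale.
print(mark_structure(np.array([0.041, 0.011, 0.015]),
                     pivot_world=np.array([0.0, 0.0, 0.0]),
                     rotation=np.eye(3), scale=1.0))   # -> "zygomatic_bone_R"
```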

Keywords: anatomy, e-learning, virtual reality, 3D model marking

Procedia PDF Downloads 64
160 Performance Optimization of Polymer Materials Thanks to Sol-Gel Chemistry for Fuel Cells

Authors: Gondrexon, Gonon, Mendil-Jakani, Mareau

Abstract:

Proton Exchange Membrane Fuel Cells (PEMFCs) seem to be a promising device for converting hydrogen into electricity. A PEMFC is built around a Membrane Electrode Assembly (MEA) composed of a Proton Exchange Membrane (PEM) sandwiched between two catalytic layers. Nowadays, specific performances are targeted in order to ensure the long-term expansion of this technology. The polymers currently used (perfluorinated, such as Nafion®) are unsuitable for the high-temperature range (loss of mechanical properties). To overcome this issue, sulfonated polyaromatic polymers appear to be a good alternative, since they have very good thermomechanical properties. However, their proton conductivity and chemical stability (oxidative resistance to the H2O2 formed during fuel cell (FC) operation) are very low. In our team, we patented an original concept of hybrid membranes able to fulfill the specific requirements for PEMFCs. The idea is based on the improvement of commercialized polymer membranes via an easy and processable stabilization using sol-gel (SG) chemistry with judiciously embedded chemical functions. This strategy thus breaks with traditional approaches (design of new copolymers, use of inorganic fillers/additives). In 2020, we presented the elaboration and functional properties of a 1st generation of hybrid membranes with promising performance and durability. The latter was made by self-condensing an SG phase with 3-(mercaptopropyl)trimethoxysilane (MPTMS) inside a commercial sPEEK host membrane. The successful in-situ condensation reactions of the MPTMS were demonstrated by mass uptake measurements, FTIR spectroscopy (presence of aliphatic C-H bonds), and 29Si solid-state NMR (T2 & T3 signals of self-condensation products). The ability of the SG phase to prevent the oxidative degradation of the sPEEK phase (thanks to its thiol functions) was then proved with accelerated H2O2 aging tests and FC operating tests. A 2nd generation, made from thiourea-functionalized SG precursors (named HTU & TTU), was developed afterwards. By analysing in depth the morphologies of these different hybrids by direct-space analysis (AFM/SEM/TEM) and reciprocal-space analysis (SANS/SAXS/WAXS), we highlighted that both the SG phase morphology and its localisation within the host have a huge impact on the observed PEM functional properties. This relationship also depends on the embedded chemical function. The hybrids obtained have shown very good chemical resistance during aging tests (exposure to H2O2) compared to the commercial sPEEK. However, the chemical function used is considered “sacrificial” and cannot react indefinitely with H2O2. Thus, we are now working on a 3rd generation carrying both sacrificial and regenerative chemical functions, which are expected to inhibit the chemical aging of sPEEK more efficiently. With this work, we are confident of reaching a predictive understanding of the key parameters governing the final properties.

Keywords: fuel cells, ionomers, membranes, sPEEK, chemical stability

Procedia PDF Downloads 45
159 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs Quantum Dots (QDs) in a GaAs matrix constitute a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room-temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from the QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after traveling over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4 × 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had an 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
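
The temperature dependence described above can be illustrated with a standard single-activation-energy thermal-quenching fit, I(T) = I0 / (1 + A·exp(-Ea/kBT)); this generic form is not necessarily the authors' exact kinetic model, and the "measured" points below are synthetic placeholders generated around the reported ~60 meV activation energy.

```python
# Hedged sketch: fitting temperature-dependent PL intensity with a generic
# single-activation-energy thermal-quenching model. The data are synthetic placeholders,
# not measurements from the paper.
import numpy as np
from scipy.optimize import curve_fit

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def quenching(T, I0, A, Ea):
    """I(T) = I0 / (1 + A * exp(-Ea / (kB * T)))."""
    return I0 / (1.0 + A * np.exp(-Ea / (KB_EV * T)))

# Synthetic "measurements" generated from Ea = 60 meV with a little noise.
T = np.linspace(77, 450, 20)
rng = np.random.default_rng(0)
I = quenching(T, 1.0, 7.0, 0.060) * (1 + 0.02 * rng.standard_normal(T.size))

popt, _ = curve_fit(quenching, T, I, p0=(1.0, 5.0, 0.05))
I0_fit, A_fit, Ea_fit = popt
print(f"fitted activation energy: {Ea_fit * 1000:.1f} meV")                 # ~60 meV
print(f"intensity at 300 K relative to 77 K: "
      f"{quenching(300, *popt) / quenching(77, *popt):.2f}")                # ~0.6, i.e. ~40% drop
```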

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 232
158 ReactorDesign App: An Interactive Software for Self-Directed Explorative Learning

Authors: Chia Wei Lim, Ning Yan

Abstract:

The subject of reactor design, dealing with the transformation of chemical feedstocks into more valuable products, constitutes the central idea of chemical engineering. Despite its importance, the way it is taught to chemical engineering undergraduates has stayed virtually the same over the past several decades, even as the chemical industry increasingly leans towards the use of software for the design and daily monitoring of chemical plants. As such, there has been a widening learning gap as chemical engineering graduates transition from university to industry, since they are not exposed to effective platforms that relate the fundamental concepts taught during lectures to industrial applications. While the success of technology enhanced learning (TEL) has been demonstrated in various chemical engineering subjects, TEL in the teaching of reactor design appears to focus on the simulation of reactor processes, as opposed to arguably more important ideas such as the selection and optimization of reactor configuration for different types of reactions. This presents an opportunity to utilize the readily available, easy-to-use MATLAB App platform to create an educational tool that aids the learning of fundamental concepts of reactor design and links these concepts to the industrial context. Here, interactive software for the learning of reactor design has been developed to narrow the learning gap experienced by chemical engineering undergraduates. Dubbed the ReactorDesign App, it enables students to design reactors involving complex design equations for industrial applications without being overly focused on the tedious mathematical steps. With the aid of extensive visualization features, the concepts covered during lectures are explicitly utilized, allowing students to understand how these fundamental concepts are applied in the industrial context and equipping them for their careers. In addition, the software leverages the easily accessible MATLAB App platform to encourage self-directed learning. It is useful for reinforcing concepts taught, complementing homework assignments, and aiding exam revision. Accordingly, students are able to identify any lapses in understanding and address them. In terms of the topics covered, the app incorporates the design of different types of isothermal and non-isothermal reactors, in line with the lecture content and industrial relevance. The main features include the design of single reactors, such as batch reactors (BR), continuous stirred-tank reactors (CSTR), plug flow reactors (PFR), and recycle reactors (RR), as well as multiple reactors consisting of any combination of ideal reactors. A version of the app, together with some guiding questions to aid explorative learning, was released to the undergraduates taking the reactor design module. A survey was conducted to assess its effectiveness, and an overwhelmingly positive response was received, with 89% of the respondents agreeing or strongly agreeing that the app has “helped [them] with understanding the unit” and 87% agreeing or strongly agreeing that the app “offers learning flexibility” compared to the conventional lecture-tutorial learning framework. In conclusion, the interactive ReactorDesign App has been developed to encourage self-directed explorative learning of the subject and to demonstrate the industrial applications of the taught design concepts.
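
As an illustration of the kind of design equations the app automates, the sketch below sizes an ideal CSTR and PFR for a first-order, liquid-phase reaction at a target conversion; it is written in Python rather than as a MATLAB app, and all rate and flow values are assumed for illustration.

```python
# Hedged sketch of the textbook ideal-reactor design equations (not the MATLAB app itself):
# sizing a CSTR and a PFR for a first-order, liquid-phase reaction A -> B.
# All numerical inputs are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

k = 0.25      # first-order rate constant, 1/min (assumed)
v0 = 10.0     # volumetric flow rate, L/min (assumed)
C_A0 = 2.0    # inlet concentration of A, mol/L (assumed)
X = 0.90      # target conversion

F_A0 = v0 * C_A0                       # molar feed rate of A, mol/min

def rate(x):
    """-r_A = k * C_A for a first-order liquid-phase reaction, with C_A = C_A0 * (1 - X)."""
    return k * C_A0 * (1.0 - x)

# CSTR: algebraic design equation, V = F_A0 * X / (-r_A) evaluated at the exit conversion.
V_cstr = F_A0 * X / rate(X)

# PFR: integral design equation, V = F_A0 * integral_0^X dX / (-r_A).
V_pfr, _ = quad(lambda x: F_A0 / rate(x), 0.0, X)

print(f"CSTR volume: {V_cstr:.1f} L")                           # 360.0 L
print(f"PFR volume:  {V_pfr:.1f} L")                            # ~92.1 L
print(f"analytic PFR check: {v0 / k * np.log(1 / (1 - X)):.1f} L")
```

The comparison (a much larger CSTR than PFR for the same conversion) is exactly the kind of configuration-selection reasoning the app is intended to make visible without the manual algebra.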

Keywords: explorative learning, reactor design, self-directed learning, technology enhanced learning

Procedia PDF Downloads 71
157 Characteristics of Plasma Synthetic Jet Actuator in Repetitive Working Mode

Authors: Haohua Zong, Marios Kotsonis

Abstract:

The plasma synthetic jet actuator (PSJA) is a new concept of zero-net-mass-flow actuator which utilizes a pulsed arc/spark discharge to rapidly pressurize gas in a small cavity under constant-volume conditions. The unique combination of high exit jet velocity (>400 m/s) and high actuation frequency (>5 kHz) provides a promising solution for high-speed, high-Reynolds-number flow control. This paper focuses on the performance of the PSJA in repetitive working mode, which is more relevant to future flow control applications. A two-electrode PSJA (cavity volume: 424 mm³, orifice diameter: 2 mm), together with a capacitive discharge circuit (discharge energy: 50 mJ-110 mJ), is designed to enable repetitive operation. A Time-Resolved Particle Image Velocimetry (TR-PIV) system working at 10 kHz is exploited to investigate the influence of discharge frequency on the performance of the PSJA. In total, seven cases are tested, covering a wide range of discharge frequencies (20 Hz-560 Hz). The pertinent flow features (shock wave, vortex ring, and jet) remain the same for single-shot mode and repetitive working mode. A shock wave is issued prior to jet eruption. Two distinct vortex rings are formed in one cycle: the first is produced by the starting jet, whereas the second is related to the shock wave reflection in the cavity. A sudden pressure rise is induced at the throat inlet by the reflection of the primary shock wave, promoting the shedding of the second vortex ring. In one cycle, the jet exit velocity first increases sharply and then decreases almost linearly. Afterwards, an alternate occurrence of multiple jet stages and refresh stages is observed. By monitoring the dynamic evolution of the exit velocity in one cycle, some integral performance parameters of the PSJA can be deduced. As frequency increases, the jet intensity in the steady phase decreases monotonically. In the investigated frequency range, the jet duration time drops from 250 µs to 210 µs and the peak jet velocity decreases from 53 m/s to approximately 39 m/s. The jet impulse and the expelled gas mass (0.69 µN∙s and 0.027 mg at 20 Hz) decline by 48% and 40%, respectively. However, the electro-mechanical efficiency of the PSJA, defined by the ratio of jet mechanical energy to capacitor energy, does not show a significant difference (on the order of 0.01%). Fourier transformation of the temporal exit velocity signal indicates two dominant frequencies: one corresponds to the discharge frequency, while the other accounts for the alternation frequency of jet stage and refresh stage in one cycle. The alternation period (approximately 300 µs) is independent of the discharge frequency and is possibly determined intrinsically by the actuator geometry. A simple analytical model is established to interpret the alternation of jet stage and refresh stage. Results show that the dynamic response of the exit velocity to a small-scale disturbance (a jump in cavity pressure) can be treated as a second-order under-damped system. The oscillation frequency of the exit velocity, namely the alternation frequency, is proportional to the exit area and inversely proportional to the cavity volume and throat length. The theoretical value of the alternation period (305 µs) agrees well with the experimental value.
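
The reported scaling of the alternation frequency (proportional to exit area, inverse in cavity volume and throat length) is consistent with a Helmholtz-resonator-type estimate, f = (c/2π)·√(A/(V·L)). The sketch below evaluates this estimate using the actuator dimensions given above; the throat length and the speed of sound are assumed values, and the formula is a generic resonator model rather than the authors' analytical model.

```python
# Hedged sketch: Helmholtz-resonator-style estimate of the jet/refresh alternation frequency,
# f = (c / 2π) * sqrt(A / (V * L)). Throat length and speed of sound are assumed values;
# cavity volume and orifice diameter are taken from the abstract.
import math

c = 343.0                       # speed of sound, m/s (ambient value, assumed)
cavity_volume = 424e-9          # cavity volume, m^3 (424 mm^3, from the abstract)
orifice_diameter = 2e-3         # orifice diameter, m (from the abstract)
throat_length = 2e-3            # throat length, m (assumed)

exit_area = math.pi * (orifice_diameter / 2) ** 2

f_alt = (c / (2 * math.pi)) * math.sqrt(exit_area / (cavity_volume * throat_length))
period_us = 1e6 / f_alt

print(f"estimated alternation frequency: {f_alt:.0f} Hz")
print(f"estimated alternation period:    {period_us:.0f} µs")   # ~300 µs with these assumptions
```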

Keywords: plasma, synthetic jet, actuator, frequency effect

Procedia PDF Downloads 225
156 Carlos Guillermo 'Cubena' Wilson's Literary Texts as Platforms for Social Commentary and Critique of Panamanian Society

Authors: Laverne Seales

Abstract:

When most people think of Panama, they immediately think of the Canal; however, the construction and the people who made it possible are often omitted and seldom acknowledged. The reality is that the construction of this waterway was achieved through forced migration and discriminatory practices toward people of African descent, specifically black people from the Caribbean. From the colonial period to the opening and subsequent operation of the Panama Canal by the United States, this paper goes through the rich layers of Panamanian history to examine the life of Afro-Caribbeans and their descendants in Panama. It also considers the role of the United States in Panama, exploring how the US presence forged a racially complex country that made the integration of Afro-Caribbeans and their descendants difficult. After laying a historical foundation, the experiences of Afro-Caribbean people and Panamanians of Afro-Caribbean descent are analyzed through Afro-Panamanian writer Carlos Guillermo ‘Cubena’ Wilson's novels, short stories, and poetry. This study focuses on how Cubena addresses racism, discrimination, inequality, and social justice issues concerning Afro-Caribbeans and their descendants who traveled to Panama to construct the Canal. Content analysis methodology can yield several significant contributions, and analyzing Carlos Guillermo Wilson's literature under this framework allows us to consider social commentary and critique of Panamanian society. It identifies the social issues and concerns of Afro-Caribbeans and people of Afro-Caribbean descent, such as inequality, corruption, racism, political oppression, and cultural identity. This methodology also allows us to explore how Cubena's literature engages with questions of cultural identity and belonging in Panamanian society. By examining themes related to race, ethnicity, language, and heritage, this research uncovers the complexities of Panamanian cultural identity, allowing us to interrogate power dynamics and social hierarchies in Panamanian society. Analyzing the portrayal of different social groups, institutions, and power structures helps uncover how power is wielded, contested, and resisted; Cubena's fictional world allows us to see how it functions in Panama. Content analysis methodology also supports a critique of political systems and governance in Panama. By examining the representation of political figures, institutions, and events in Cubena's literature, we uncover his commentary on corruption, authoritarianism, governance, and the role of the United States in Panama. Content analysis highlights how Wilson's literature amplifies the voices and experiences of marginalized individuals and communities in Panamanian society. By centering the narratives of Afro-Panamanians and other marginalized groups, this research uncovers Cubena's commitment to social justice and inclusion in his writing and helps the reader engage with historical narratives and collective memory in Panama. Overall, analyzing Carlos Guillermo ‘Cubena’ Wilson's literature as a platform for social commentary and critique of Panamanian society using content analysis methodology provides valuable insights into the cultural, social, and political dimensions of Afro-Panamanian life during and after the construction of the Panama Canal.

Keywords: Afro-Caribbean, Panama Canal, race, Afro-Panamanian, identity, history

Procedia PDF Downloads 12
155 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary, stochastic search process are driven by both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in the application of an evolutionary process as a design tool is the ability of the simulation to maintain variation amongst design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the ‘golden rule’ of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency for either variation or optimization to be favored as the simulation progresses. In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion into the development of an urban tissue comprising the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally employed to measure the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation value indicates that the majority of the population is clustered around the mean, and thus variation within the population is limited, while a higher standard deviation value reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which the utilization of the standard deviation factor as a fitness criterion can be advantageous in generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
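
One simple way to operationalize the population standard deviation as a regulating signal is sketched below: a toy single-objective evolutionary loop widens its mutation step when the spread of objective values collapses and shrinks it when variation stays too high. This is a deliberately reduced illustration; the paper's experiments use the standard deviation within a multi-objective fitness formulation for urban superblock design, which is not reproduced here.

```python
# Toy illustration of using the standard deviation of a population's objective values
# to regulate exploration vs. exploitation. The objective, parameters, and update rule
# are assumptions for demonstration, not the authors' urban-design simulation.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Toy objective to minimize (a stand-in for an environmental or architectural metric)."""
    return np.sum((x - 3.0) ** 2, axis=-1)

def evolve(pop_size=60, dims=4, generations=60, target_std=1.0):
    pop = rng.uniform(-10, 10, size=(pop_size, dims))
    step = 1.0
    for _ in range(generations):
        scores = objective(pop)
        spread = np.std(scores)                  # standard deviation of the population's objective values
        # Widen mutation when the population has collapsed around the mean (low spread),
        # shrink it when variation stays too high and convergence stalls.
        step *= 1.1 if spread < target_std else 0.9
        parents = pop[np.argsort(scores)[: pop_size // 2]]               # truncation selection
        children = parents + rng.normal(0.0, step, size=parents.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])
    return pop

final_pop = evolve()
final_scores = objective(final_pop)
print(f"best objective: {final_scores.min():.3f}, final spread (std): {final_scores.std():.3f}")
```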

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 109
154 Optimization of Multi-Disciplinary Expertise and Resource for End-Stage Renal Failure (ESRF) Patient Care

Authors: Mohamed Naser Zainol, P. P. Angeline Song

Abstract:

Over the years, the profile of end-stage renal patients placed under The National Kidney Foundation Singapore (NKFS) dialysis program has evolved, with a gradual incline in the number of patients with behavior-related issues. With these challenging profiles, social workers and counsellors are often expected to oversee behavior management through referrals from their partnering colleagues. Due to the segregation of tasks usually found in many hospital-based multi-disciplinary settings, social workers’ and counsellors’ interventions are often seen as an endpoint, limiting the involvement of other stakeholders who could otherwise be crucial in managing such patients. While patients’ contact with local hospitals often leads to eventual discharge, NKFS patients are mostly long-term. It is interesting to note that these patients are regularly seen by a team of professionals at NKFS that includes doctors, nurses, dietitians, and exercise specialists. The dynamism of these relationships presents an opportunity for any of these professionals to take ownership of their potential in leading interventions that can be helpful to patients. As such, it is important to have a framework that incorporates the strengths of these professionals and also channels empowerment across the multi-disciplinary team in working towards holistic patient care. This paper suggests a new framework for NKFS’s multi-disciplinary team, in which group synergy and dynamics are used to encourage ownership and promote empowerment. The social worker and counsellor use group work skills and their knowledge of the team members’ strengths to generate constructive solutions centered on the patient’s growth. Using key ideas from Karl Tomm’s interpersonal communication work, the Communication Management of Meaning, and Motivational Interviewing, the social worker and counsellor, through a series of guided meetings with other colleagues, facilitate the transmission of understanding, responsibility sharing, and the tapping of team resources for patient care. As a result, the patient experiences a personal and concerted approach and begins to move in a direction that is helpful for him or her. Using seven case studies of identified patients with behavioral issues, the social worker and counsellor apply this framework for a period of six months. Patients’ overall improvement through interventions resulting from this framework is recorded using the AB single-case design, with baselines measured three months before referral. Interviews with patients and their families, as well as with colleagues who are not part of the multi-disciplinary team, are conducted at mid- and end-points to gather their experiences of the patients’ progress as a by-product of this framework. Expert interviews will be conducted with each member of the multi-disciplinary team to study their observations and experience in using this new framework. Hence, this exploratory study hopes to identify the framework’s usefulness in managing patients with behavior-related issues. Moreover, it would provide indicators for improving aspects of the framework when applied to a larger population.

Keywords: behavior management, end-stage renal failure, satellite dialysis, multi-disciplinary team

Procedia PDF Downloads 120
153 Benefits of High Power Impulse Magnetron Sputtering (HiPIMS) Method for Preparation of Transparent Indium Gallium Zinc Oxide (IGZO) Thin Films

Authors: Pavel Baroch, Jiri Rezek, Michal Prochazka, Tomas Kozak, Jiri Houska

Abstract:

Transparent semiconducting amorphous IGZO films have attracted great attention due to their excellent electrical properties and possible utilization in thin-film transistors or in photovoltaic applications, as they show 20-50 times higher mobility than amorphous silicon. It is also known that the properties of IGZO films are highly sensitive to process parameters, especially to oxygen partial pressure. In this study, we focused on comparing the properties of transparent semiconducting amorphous indium gallium zinc oxide (IGZO) thin films prepared by conventional sputtering methods with those prepared by the high power impulse magnetron sputtering (HiPIMS) method. Furthermore, we tried to optimize the electrical and optical properties of the IGZO thin films and to investigate the possibility of applying these coatings to thermally sensitive flexible substrates. We employed DC, pulsed DC, mid-frequency sine wave, and HiPIMS power supplies for magnetron deposition. The magnetrons were equipped with sintered ceramic InGaZnO targets. As oxygen vacancies are considered to be the main source of carriers in IGZO films, it is expected that, as the oxygen partial pressure increases, the number of oxygen vacancies decreases, which results in an increase of film resistivity. Therefore, in all experiments we focused on the effect of oxygen partial pressure, discharge power, and pulsed power mode on the electrical, optical, and mechanical properties of IGZO thin films, and also on the thermal load deposited to the substrate. As expected, we observed a very fast transition between low- and high-resistivity films depending on oxygen partial pressure when conventional sputtering methods/power supplies were utilized. Therefore, we established and utilized a HiPIMS sputtering system to enlarge the operating window for better control of IGZO thin-film properties. It is shown that with this system we are able to effectively eliminate the steep transition between low- and high-resistivity films exhibited by the DC mode of sputtering, and the electrical resistivity can be effectively controlled over the wide range of 10⁻² to 10⁵ Ω·cm. The highest charge carrier mobility (up to 50 cm²/V·s) was obtained at very low oxygen partial pressures. Utilization of HiPIMS also led to a significant decrease in the thermal load deposited to the substrate, which is beneficial for deposition on thermally sensitive and flexible polymer substrates. The deposition rate as a function of discharge power and oxygen partial pressure was also systematically investigated, and the results from optical, electrical, and structural analysis will be discussed in detail. The most important result we obtained demonstrates almost linear control of IGZO thin-film resistivity with increasing oxygen partial pressure in the HiPIMS mode of sputtering, and highly transparent films with low resistivity were prepared already at low pO₂. It was also found that utilization of the HiPIMS technique resulted in a significant improvement of surface smoothness in the reactive mode of sputtering (with increasing oxygen partial pressure).

Keywords: charge carrier mobility, HiPIMS, IGZO, resistivity

Procedia PDF Downloads 271
152 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes of the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. The results of this investigation are still preliminary. However, there are areas of the car body parts that behave more sensitively than the part as a whole, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
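As a rough sketch of the modelling step described above, and not the authors' actual pipeline, the following Python example trains a regressor to predict a local tolerance range from hypothetical measurement-point features; the feature set, the synthetic data, and the choice of a random forest are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical feature table: one row per measurement point on a car body part.
# Columns (illustrative): x, y, z position, local curvature, historical deviation spread.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
# Target (illustrative): local tolerance range in mm, derived here from the synthetic
# "historical deviation spread" column plus noise.
y = 0.2 + 0.05 * np.abs(X[:, 4]) + rng.normal(scale=0.01, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE of predicted tolerance range [mm]:",
      mean_absolute_error(y_test, model.predict(X_test)))
```

A model of this kind could then be queried at equally distributed points on an unknown part geometry to propose measurement locations and local tolerance ranges.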

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 87
151 Creation of a Test Machine for the Scientific Investigation of Chain Shot

Authors: Mark McGuire, Eric Shannon, John Parmigiani

Abstract:

Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass and is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The machine is capable of testing all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and of accurately measuring the corresponding key technical parameters. The test machine was constructed inside a standard shipping container. This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated following ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. The measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and consequently improved products and greater harvester operator safety.

Keywords: chain shot, safety, testing, timber harvesters

Procedia PDF Downloads 124
150 PARP1 Links Transcription of a Subset of RBL2-Dependent Genes with Cell Cycle Progression

Authors: Ewelina Wisnik, Zsolt Regdon, Kinga Chmielewska, Laszlo Virag, Agnieszka Robaszkiewicz

Abstract:

Apart from protecting the genome, PARP1 has been documented to regulate many intracellular processes, inter alia gene transcription, by physically interacting with chromatin-bound proteins and by their ADP-ribosylation. Our recent findings indicate that expression of PARP1 decreases during the differentiation of human CD34+ hematopoietic stem cells to monocytes as a consequence of differentiation-associated cell growth arrest and formation of the E2F4-RBL2-HDAC1-SWI/SNF repressive complex at the promoter of this gene. Since RBL2 complexes repress genes in an E2F-dependent manner and are widespread in the genome of G0-arrested cells, we asked (a) if RBL2 directly contributes to defining monocyte phenotype and function by targeting gene promoters and (b) if RBL2 controls gene transcription indirectly by repressing PARP1. For the identification of genes controlled by RBL2 and/or PARP1, we used primer libraries for surface receptors and TLR signaling mediators; genes were silenced by siRNA or shRNA; analysis of gene promoter occupation by selected proteins was carried out by ChIP-qPCR; statistical analysis was performed in GraphPad Prism 5 and STATISTICA; and ChIP-Seq data were analysed in Galaxy 2.5.0.0. On the list of 28 genes regulated by RBL2, we identified only four solely repressed by the RBL2-E2F4-HDAC1-BRM complex. Surprisingly, 24 of the 28 genes controlled by RBL2 were co-regulated by PARP1 in six different manners. In one mode of RBL2/PARP1 co-operation, represented by MAP2K6 and MAPK3, PARP1 was found to associate with gene promoters upon RBL2 silencing, which was previously shown to restore PARP1 expression in monocytes. The PARP1 effect on gene transcription was observed only in the presence of active EP300, which acetylated gene promoters and activated transcription. Further analysis revealed that PARP1 binding to the MAP2K6 and MAPK3 promoters enabled recruitment of EP300 in monocytes, while in proliferating cancer cell lines, which actively transcribe PARP1, this protein maintained EP300 at the promoters of MAP2K6 and MAPK3. Genome-wide analysis revealed a similar distribution of PARP1 and EP300 around transcription start sites and the co-occupancy of some gene promoters by PARP1 and EP300 in cancer cells. Here, we describe a new RBL2/PARP1/EP300 axis which controls gene transcription regardless of the cell type. In this model, cell cycle-dependent transcription of PARP1 regulates the expression of some genes repressed by RBL2 upon cell cycle limitation. Thus, RBL2 may indirectly regulate the transcription of some genes by controlling the expression of EP300-recruiting PARP1. Acknowledgement: This work was financed by Polish National Science Centre grants no. DEC-2013/11/D/NZ2/00033 and DEC-2015/19/N/NZ2/01735. L.V. is funded by the National Research, Development and Innovation Office grants GINOP-2.3.2-15-2016-00020 TUMORDNS, GINOP-2.3.2-15-2016-00048-STAYALIVE and OTKA K112336. AR is supported by Polish Ministry of Science and Higher Education 776/STYP/11/2016.

Keywords: retinoblastoma transcriptional co-repressor like 2 (RBL2), poly(ADP-ribose) polymerase 1 (PARP1), E1A binding protein p300 (EP300), monocytes

Procedia PDF Downloads 178
149 Ternary Organic Blend for Semitransparent Solar Cells with Enhanced Short Circuit Current Density

Authors: Mohammed Makha, Jakob Heier, Frank Nüesch, Roland Hany

Abstract:

Organic solar cells (OSCs) have made rapid progress and currently achieve power conversion efficiencies (PCE) of over 10%. OSCs have several merits over other direct light-to-electricity generating cells and can be processed at low cost from solution on flexible substrates over large areas. Moreover, combining organic semiconductors with transparent and conductive electrodes allows for the fabrication of semitransparent OSCs (SM-OSCs). For SM-OSCs, the challenge is to achieve a high average visible transmission (AVT) while maintaining a high short-circuit current (Jsc). Typically, the Jsc of SM-OSCs is smaller than when an opaque metal top electrode is used. This is because the light not absorbed during the first transit through the active layer and the transparent electrode is forward-transmitted out of the device. Recently, OSCs using a ternary blend of organic materials have received attention. This strategy was pursued to extend light harvesting over the visible range. However, it is a general challenge to manipulate the performance of ternary OSCs in a predictable way, because many key factors affect charge generation and extraction in ternary solar cells. Consequently, the device performance is affected by the compatibility between the blend components and the resulting film morphology, the energy levels and bandgaps, the concentration of the guest material, and its location in the active layer. In this work, we report on a solvent-free lamination process for the fabrication of efficient and semitransparent ternary-blend OSCs. The ternary blend was composed of PC70BM and the electron donors PBDTTT-C and a NIR-absorbing cyanine dye (Cy7T). Using an opaque metal top electrode, a PCE of 6% was achieved for the optimized binary polymer:fullerene blend (AVT = 56%). However, the PCE dropped to ~2% when the active film thickness was decreased (to 30 nm) to increase the AVT value (to 75%). Therefore, we resorted to the ternary blend and measured, for non-transparent cells, a PCE of 5.5% when using an active polymer:dye:fullerene (0.7:0.3:1.5 wt:wt:wt) film of 95 nm thickness (AVT = 65% when omitting the top electrode). In a second step, the optimized ternary blend was used for the fabrication of SM-OSCs. We used a plastic/metal substrate with a light transmission of over 90% as a transparent electrode, applied via a lamination process. The interfacial layer between the active layer and the top electrode was optimized in order to improve the charge collection and the contact with the laminated top electrode. We demonstrated a PCE of 3% with an AVT of 51%. The parameter space for ternary OSCs is large, and it is difficult to find the best concentration ratios by trial and error. A rational approach to device optimization is the construction of a ternary-blend phase diagram. We discuss our attempts to construct such a phase diagram for the PBDTTT-C:Cy7T:PC70BM system via a combination of Cy7T-selective solvents and atomic force microscopy. From the ternary diagram, suitable morphologies for efficient light-to-current conversion can be identified. We compare experimental OSC data with these predictions.

Keywords: organic photovoltaics, ternary phase diagram, ternary organic solar cells, transparent solar cell, lamination

Procedia PDF Downloads 241
148 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. A total of 237 samples were used: 108 were authentic honey and 129 corresponded to honey adulterated with 1 to 10% HFCS. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface was used, with MATLAB version 5.3, to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) and Partial Least Squares Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples. With the model calibrated on the training samples, the validation samples were then studied. The calibrated model combining the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. By use of PF and PLS-DA classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast, and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
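The sketch below illustrates, under stated assumptions, the two standard chemometric steps named above: SNV pretreatment and a PLS-DA classification obtained by thresholding a PLS regression response. The spectra are synthetic placeholders, the number of latent variables and the 0.5 threshold are arbitrary, and the potential-functions step of the combined method is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Synthetic stand-in data: rows are spectra; labels 0 = authentic, 1 = adulterated.
rng = np.random.default_rng(1)
X = rng.normal(size=(237, 600))
y = np.r_[np.zeros(108), np.ones(129)]

Xs = snv(X)
X_train, X_test, y_train, y_test = train_test_split(
    Xs, y, test_size=0.3, random_state=0, stratify=y)

pls = PLSRegression(n_components=8)      # the number of latent variables is illustrative
pls.fit(X_train, y_train)
y_pred = (pls.predict(X_test).ravel() > 0.5).astype(int)   # PLS-DA by thresholding
print("Correct classification rate:", (y_pred == y_test).mean())
```

With real spectra, the calibration/validation split would follow the Kennard-Stone algorithm and the number of latent variables would be chosen from the validation error, as described above.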

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 98
147 Rhizobium leguminosarum: Selecting Strain and Exploring Delivery Systems for White Clover

Authors: Laura Villamizar, David Wright, Claudia Baena, Marie Foxwell, Maureen O'Callaghan

Abstract:

Leguminous crops can be self-sufficient for their nitrogen requirements when their roots are nodulated with an effective Rhizobium strain, and for this reason, seed or soil inoculation is practiced worldwide to ensure nodulation and nitrogen fixation in grain and forage legumes. The most widely used method of applying commercially available inoculants is coating seeds with peat cultures prior to sowing. In general, rhizobia survive well in peat, but some species die rapidly after inoculation onto seeds. The development of improved formulation methodology is essential to achieve extended persistence of rhizobia on seeds and improved efficacy. Formulations can be solid or liquid. The most popular solid formulations or delivery systems are wettable powders (WP), water-dispersible granules (WG), and granules (DG); liquid formulations are generally suspension concentrates (SC) or emulsifiable concentrates (EC). In New Zealand, R. leguminosarum bv. trifolii strain TA1 has been used as a commercial inoculant for white clover over wide areas for many years. Seed inoculation is carried out by mixing the seeds with inoculated peat, adherents, and lime, but rhizobial populations on stored seeds decline over several weeks due to a number of factors, including desiccation and antibacterial compounds produced by the seeds. In order to develop a more stable and suitable delivery system to incorporate rhizobia into pastures, two strains of R. leguminosarum (TA1 and CC275e) and several formulations and processes were explored (peat granules, self-sticky peat for seed coating, emulsions, and a powder containing spray-dried microcapsules). Emulsions prepared with fresh broth of strain TA1 were very unstable under storage and after seed inoculation. Formulations in which inoculated peat was used as the active ingredient were significantly more stable than those prepared with fresh broth. The strain CC275e was more tolerant of the stress conditions generated during formulation and seed storage. Peat granules and peat-inoculated seeds using strain CC275e maintained an acceptable loading of 10⁸ CFU/g of granules or 10⁵ CFU/g of seeds, respectively, during six months of storage at room temperature. Strain CC275e inoculated on peat was also microencapsulated with a natural biopolymer by spray drying, and after optimizing the operational conditions, microparticles containing 10⁷ CFU/g with a mean particle size between 10 and 30 micrometers were obtained. Survival of rhizobia during storage of the microcapsules is being assessed. The development of a stable product depends on selecting an active ingredient (microorganism) robust enough to tolerate the adverse conditions generated during formulation, storage, and commercialization, and after its use in the field. However, the design and development of an adequate formulation using compatible ingredients, optimization of the formulation process, and selection of the appropriate delivery system are possibly the best tools to overcome the poor survival of rhizobia and provide farmers with better-quality inoculants.

Keywords: formulation, Rhizobium leguminosarum, storage stability, white clover

Procedia PDF Downloads 129
146 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are used more and more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been applied less. This work presents a hardware implementation of a model-driven approach reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Based on the analysis of a software implementation of this model, the implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and of a two-dimensional optimization function used to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from the information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation reduces the estimation time to roughly 1/180 of that of software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and on the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed; these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be further reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
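As a minimal software sketch of the per-taxel estimation step, not of the FPGA implementation or of the specific contact model used by the authors, the example below fits the two tangential force components of a single taxel by least squares against a placeholder linear forward model; the matrix, patch size, and force values are purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def reconstruct_taxel_force(observed_stress, forward_model, x0=(0.0, 0.0)):
    """Fit the two tangential force components of one taxel by minimising the
    mismatch between a forward model and the measured normal-stress patch."""
    residual = lambda p: forward_model(p[0], p[1]) - observed_stress
    return least_squares(residual, x0).x

# Placeholder forward model: a linear map from (tx, ty) to a 4-sample stress patch.
A = np.array([[1.0, 0.2], [0.2, 1.0], [-0.5, 0.3], [0.3, -0.5]])
true_force = np.array([0.8, -0.4])
measured = A @ true_force
print(reconstruct_taxel_force(measured, lambda tx, ty: A @ np.array([tx, ty])))
```

In the hardware design, an optimization of this kind is evaluated for every taxel in parallel, which is where the FPGA's concurrency pays off.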

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 155
145 Low-carbon Footprint Diluents in Solvent Extraction for Lithium-ion Battery Recycling

Authors: Abdoulaye Maihatchi Ahamed, Zubin Arora, Benjamin Swobada, Jean-yves Lansot, Alexandre Chagnes

Abstract:

Lithium-ion batteries (LiBs) are the technology of choice in the development of electric vehicles. However, there are still many challenges, including the development of positive electrode materials exhibiting high cyclability, high energy density, and low environmental impact. For the latter, LiBs must be manufactured in a circular approach by developing appropriate strategies to reuse and recycle them. Presently, the recycling of LiBs is carried out by the pyrometallurgical route, but more and more processes implement, or will implement, the hydrometallurgical route or a combination of pyrometallurgical and hydrometallurgical operations. After the black mass is produced by mineral processing, the hydrometallurgical process consists in leaching the black mass in order to recover the metals contained in the cathode material. These metals are then extracted selectively by liquid-liquid extraction, solid-liquid extraction, and/or precipitation stages. Liquid-liquid extraction combined with precipitation/crystallization steps is the most widely implemented operation in LiB recycling processes to selectively extract copper, aluminum, cobalt, nickel, manganese, and lithium from the leaching solution and precipitate these metals as high-grade sulfate or carbonate salts. Liquid-liquid extraction consists in contacting an organic solvent with an aqueous feed solution containing several metals, including the targeted metal(s) to be extracted. The organic phase is immiscible with the aqueous phase. It is composed of an extractant to extract the target metals and a diluent, which is usually an aliphatic kerosene produced by the petroleum industry. Sometimes, a phase modifier is added to the formulation of the extraction solvent to avoid third-phase formation. The extraction properties of the solvent do not depend only on the chemical structure of the extractant; they may also depend on the nature of the diluent. Indeed, diluent-diluent interactions can influence, to a greater or lesser extent, the interactions between extractant molecules, in addition to the extractant-diluent interactions. Only a few studies in the literature have addressed the influence of the diluent on the extraction properties, while many studies have focused on the effect of the extractants. Recently, new low-carbon-footprint aliphatic diluents were produced by catalytic dearomatisation and distillation of bio-based oil. This study aims at investigating the influence of the nature of the diluent on the extraction properties of three extractants towards cobalt, nickel, manganese, copper, aluminum, and lithium: Cyanex®272 for nickel-cobalt separation, DEHPA for manganese extraction, and Acorga M5640 for copper extraction. The diluents used in the formulation of the extraction solvents are (i) low-odor aliphatic kerosenes produced by the petroleum industry (ELIXORE 180, ELIXORE 230, ELIXORE 205, and ISANE IP 175) and (ii) bio-sourced aliphatic diluents (DEV 2138, DEV 2139, DEV 1763, DEV 2160, DEV 2161 and DEV 2063). After discussing the effect of the diluents on the extraction properties, this contribution will address the development of a low-carbon-footprint process based on the best bio-sourced diluent for the production of high-grade cobalt sulfate, nickel sulfate, manganese sulfate, and lithium carbonate, as well as copper metal.

Keywords: diluent, hydrometallurgy, lithium-ion battery, recycling

Procedia PDF Downloads 61
144 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment so that effective countermeasures can be taken. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, they often struggle to detect stealthy and camouflaged intruders. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most similar systems have only two types of intrusion detection capability, viz. human or vehicle. In our work, we could categorize further and identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging, and vehicular movement. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a spacing of 15 meters. The signals received from these geophones are processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, Logistic Regression, Recursive Feature Elimination, Chi-squared, and Pearson ratio, were used to identify the best features for training the machine learning models. The trained models were developed using algorithms such as a supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events, and the results were evaluated with 831 test events. It was observed that using weighted ensemble voting increased the accuracy of the predictions. In this study, we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy, and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low cost, hidden deployment, and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors, together with multiple machine learning models combined by weighted ensemble voting, for creating better perimeter guarding and protection systems. In this study, the virtual fence achieved an intruder detection efficiency of over 97%.
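A minimal sketch of the weighted ensemble voting idea is given below, using the classifier families named above but with synthetic data; the feature dimensions, the per-model weights, and the use of soft (probability-based) voting are assumptions made for the example.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for seismic feature vectors; the six classes mimic the event
# types named above (walking, running, group walking, fence jumping, digging, vehicle).
X, y = make_classification(n_samples=2771, n_features=20, n_informative=12,
                           n_classes=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0, stratify=y)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("ann", MLPClassifier(max_iter=500, random_state=0)),
    ],
    voting="soft",               # combine predicted class probabilities
    weights=[2, 1, 1, 1, 1, 2],  # illustrative per-model weights
)
ensemble.fit(X_train, y_train)
print("Test accuracy:", ensemble.score(X_test, y_test))
```

In practice, the weights would be tuned on validation data so that the better-performing models dominate the combined vote.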

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 42
143 Technology Optimization of Compressed Natural Gas Home Fast Refueling Units

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Robert Strods, Adam Szurlej

Abstract:

Despite all global economic shifts and the fact that natural gas is recognized worldwide as the main and leading alternative to oil products in the transportation sector, there is a huge barrier to switching the passenger vehicle segment to natural gas: the lack of refueling infrastructure for Natural Gas Vehicles (NGVs). While investments in public gas stations require an established NGV market in order to be cost effective, the market is not there due to the lack of refueling stations. The key to solving that problem and providing a barrier-breaking refueling infrastructure solution for NGVs is Home Fast Refueling Units. Such a unit operates using natural gas (methane), supplied through gas pipelines at the client's home, and an electricity connection point. It enables environmentally friendly NGV home refueling in just minutes. The underlying technology is a patented one-stage hydraulic compressor (instead of the multistage mechanical compressor technology currently available on the market), which makes it possible to compress low-pressure gas from the residential gas grid to 200 bar for further use as a fuel for NGVs in the most economically efficient and convenient way for the customer. Description of the working algorithm: two high-pressure cylinders with upper necks connected to the low-pressure gas source are placed vertically. Initially, one of them is filled with liquid and the other with low-pressure gas. During the working process, liquid is transferred by means of a hydraulic pump from one cylinder to the other and back. The working liquid plays the role of a piston inside the cylinders. Movement of the working liquid inside the cylinders provides simultaneous suction of a portion of low-pressure gas into one cylinder (where the liquid moves down) and forcing of higher-pressure gas out of the other cylinder (where the liquid moves up) into the fuel tank of the vehicle or a storage tank. Each cycle of forcing the gas out of a cylinder raises the pressure of the gas in the fuel tank of the vehicle; the unit operates with two cylinders. The process is repeated until the pressure of the gas in the fuel tank reaches 200 bar. Mobility has become a necessity in people's everyday life, which has led to oil dependence. CNG Home Fast Refueling Units can become part of the existing natural gas pipeline infrastructure and form the largest vehicle refueling infrastructure. Home Fast Refueling Unit owners will enjoy day-to-day time savings and convenience (home car refueling in minutes), month-to-month fuel cost economy, and year-to-year incentives and tax deductions on NG refueling systems depending on the country, while reducing local CO2 emissions and saving costs and money.
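To make the cycle count concrete, the following back-of-the-envelope sketch simulates the pressure build-up in the vehicle tank under an isothermal ideal-gas assumption, with each cycle delivering one cylinder charge taken in at grid pressure; the volumes, grid pressure, and temperature are illustrative assumptions, not specifications of the unit.

```python
R = 8.314            # J/(mol*K), universal gas constant
T = 293.15           # K, isothermal assumption
V_cyl = 0.002        # m^3, illustrative gas volume displaced per cylinder stroke
V_tank = 0.060       # m^3, illustrative vehicle tank volume
p_grid = 1.05e5      # Pa, illustrative low-pressure grid supply (absolute)
p_target = 200e5     # Pa, target fill pressure (200 bar)

n_per_cycle = p_grid * V_cyl / (R * T)   # moles drawn in at grid pressure each cycle
n_tank = 1.0e5 * V_tank / (R * T)        # tank assumed to start near atmospheric pressure
cycles = 0
while n_tank * R * T / V_tank < p_target:
    # the liquid piston pushes the whole cylinder charge into the tank
    n_tank += n_per_cycle
    cycles += 1
print(f"{cycles} cycles needed to reach {n_tank * R * T / V_tank / 1e5:.0f} bar")
```

With two cylinders working in counter-phase, one cylinder is refilled with grid gas while the other discharges, so the refueling time is set mainly by the hydraulic pump's flow rate.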

Keywords: CNG (compressed natural gas), CNG stations, NGVs (natural gas vehicles), natural gas

Procedia PDF Downloads 180
142 Conceptual and Preliminary Design of Landmine Searching UAS at Extreme Environmental Condition

Authors: Gopalasingam Daisan

Abstract:

Landmines and unexploded ordnance pose a significant threat to people and animals; after a war, the landmines remain in the ground and continue to endanger civilian security. Children are at the highest risk because they are curious: an unexploded device can look like a tempting toy to an inquisitive child. The initial step in designing a UAS (Unmanned Aircraft System) for landmine detection is to choose an appropriate and effective sensor to locate the landmines and other unexploded ammunition. The sensor weight and the weight of the components supporting the sensor are taken as the payload weight. The mission requirement is to find the landmines in a particular area by planning a path that covers the entire vicinity of the desired area. The take-off weight of the UAV (Unmanned Aerial Vehicle) can be estimated in the first phase of the design with good accuracy using various previously established techniques. The next crucial part of the design is to calculate the power requirement and the wing loading. The matching plot technique is used to determine the thrust-to-weight ratio; this technique makes the process not only simple but also precise. The wing loading can be calculated directly from the stall equation. After these calculations, the wing area is determined from the wing loading, and the required power is calculated from the thrust-to-weight ratio. According to the power requirement, an appropriate engine can be selected from those available on the market, and the wing geometric parameters are chosen based on the conceptual sketch. An important step in the wing design is to choose a proper aerofoil that ensures a sufficient lift coefficient to satisfy the requirements. The next component is the tail; the tail area and other related parameters can be estimated or calculated to counteract the effect of the wing pitching moment. As the vertical tail design depends on many parameters, only the initial sizing can be done in this phase. The fuselage is another major component; its slenderness ratio is selected, and its shape is determined by the sensor size so that the sensor fits under the fuselage. The landing gear is an important component selected based on controllability and stability requirements: the minimum and maximum wheel track and wheelbase can be determined from the crosswind and overturn-angle requirements. The minor components of the landing gear design and estimation are not the focus of this project. Another important task is to estimate the weight of the major components using empirical relations and to assign a mass to each such component. The CG and moment of inertia are also determined for each component separately. The sensitivity of the weight calculation is taken into consideration to avoid extra material requirements and to reduce the cost of the design. Finally, the aircraft performance is calculated, in particular the V-n (velocity versus load factor) diagram for different flight conditions, such as undisturbed flight and flight with gust velocity.
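A minimal numerical sketch of the sizing chain described above (wing loading from the stall condition, wing area from the estimated weight, and required thrust and power from an assumed thrust-to-weight ratio) is given below; all input values are illustrative assumptions rather than the mission's actual requirements.

```python
g = 9.81               # m/s^2
rho = 1.225            # kg/m^3, sea-level air density
W_total = 12.0 * g     # N, assumed take-off weight (12 kg including sensor payload)
V_stall = 12.0         # m/s, assumed stall speed
V_cruise = 22.0        # m/s, assumed cruise/search speed
CL_max = 1.4           # assumed maximum lift coefficient of the chosen aerofoil
TW_ratio = 0.45        # thrust-to-weight ratio, as if read off a matching plot
eta_prop = 0.7         # assumed propeller efficiency

wing_loading = 0.5 * rho * V_stall ** 2 * CL_max   # N/m^2, from the stall condition
S = W_total / wing_loading                         # required wing area, m^2
T_req = TW_ratio * W_total                         # required thrust, N
P_req = T_req * V_cruise / eta_prop                # required shaft power, W

print(f"Wing loading: {wing_loading:.1f} N/m^2, wing area: {S:.2f} m^2")
print(f"Required thrust: {T_req:.1f} N, required shaft power: {P_req:.0f} W")
```

The engine would then be chosen to deliver at least this shaft power, and the wing geometry (span, chord, aerofoil) would be refined around the computed area.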

Keywords: landmine, UAS, matching plot, optimization

Procedia PDF Downloads 145