Search results for: interpenetrating polymer network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5967


957 The Role of Information Technology in Supply Chain Management

Authors: V. Jagadeesh, K. Venkata Subbaiah, P. Govinda Rao

Abstract:

This paper examines the significance of information technology tools and software packages in supply chain management (SCM). Managing material, financial, and information flows effectively and efficiently with the aid of such tools makes it possible to deliver the right quantity and quality of goods at the right time, using the right methods and technology. Information technology plays a vital role in streamlining sales forecasting, demand planning, inventory control, and transportation in supply networks, and ultimately supports production planning and scheduling. It achieves these objectives by streamlining business processes and integrating them within the enterprise and its extended enterprise. SCM starts with the customer and involves a sequence of activities linking customer, retailer, distributor, manufacturer, and supplier within the supply chain framework. It is the process of integrating demand planning, supply network planning, and production planning and control. Forecasting indicates the direction for planning raw materials to meet production requirements. Inventory control and transportation planning allocate the optimal or economic order quantity and use the shortest possible routes to deliver goods to the customer. Production planning and control apply the optimal resource mix to meet capacity requirements. These operations can be achieved by using appropriate information technology tools and software packages for supply chain management.
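The "economic order quantity" the abstract refers to has a standard closed form. A minimal sketch, assuming the classic Wilson EOQ formula with hypothetical demand and cost figures (not taken from the paper):

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Wilson EOQ: the order size minimizing ordering plus holding cost."""
    return math.sqrt(2.0 * annual_demand * order_cost / holding_cost_per_unit)

# Hypothetical figures: 12,000 units/year, $50 per order, $2/unit/year holding
q = eoq(12000, 50.0, 2.0)   # about 775 units per order
```

Transportation and production planning would add routing and capacity constraints on top of this, which is where the software packages the paper discusses come in.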

Keywords: supply chain management, information technology, business process, extended enterprise

Procedia PDF Downloads 342
956 Preparation of Wireless Networks and Security; Challenges in Efficient Accession of Encrypted Data in Healthcare

Authors: M. Zayoud, S. Oueida, S. Ionescu, P. AbiChar

Abstract:

Background: A wireless sensor network encompasses diverse information technology tools and is widely applied in a range of domains, including military surveillance, weather forecasting, and earthquake forecasting. Although the foundations of wireless sensor networks are continuously strengthened, security issues usually emerge during professional application. Thus, essential technological tools must be assessed for secure aggregation of data, and such practices should be incorporated into healthcare, where they serve the mutual interest of all parties. Objective: Aggregation of encrypted data was assessed through a homomorphic stream cipher to assure its effectiveness and to provide optimum solutions for healthcare. Methods: An experimental design was adopted, utilizing a newly developed cipher on CPU-constrained devices. Modular additions were employed to evaluate the nature of the aggregated data, and the processes of the homomorphic stream cipher were examined across different sensors. Results: The homomorphic stream cipher was found to be a simple and secure process that allows efficient aggregation of encrypted data, and its application can improve healthcare practices. Statistical values such as the mean, variance, and standard deviation of the sensed data can be computed directly from aggregates produced under the selected cipher. Conclusion: The homomorphic stream cipher can be an ideal tool for appropriate aggregation of data and can provide useful solutions to the healthcare sector.
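The modular-addition aggregation the abstract describes can be sketched as a toy additively homomorphic stream cipher (in the spirit of Castelluccia-style schemes, not necessarily the authors' exact construction): each sensor adds a secret keystream value to its reading modulo M, ciphertexts can be summed by untrusted nodes without decryption, and the sink subtracts the summed keys.

```python
import random

M = 2 ** 32                      # modulus; must exceed the largest possible sum

def encrypt(reading, key):
    return (reading + key) % M   # additive stream cipher: one-time key per report

def aggregate(ciphertexts):
    return sum(ciphertexts) % M  # done by untrusted nodes, no decryption needed

def decrypt_sum(agg, keys):
    return (agg - sum(keys)) % M # sink knows all keys, recovers the plaintext sum

readings = [98, 120, 75]                         # hypothetical sensor values
keys = [random.randrange(M) for _ in readings]   # shared with the sink only
cts = [encrypt(m, k) for m, k in zip(readings, keys)]
total = decrypt_sum(aggregate(cts), keys)        # equals sum(readings) = 293
```

Because sums survive encryption, the sink can derive mean and variance from aggregated sums of readings and squared readings, as the abstract notes.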

Keywords: aggregation, cipher, homomorphic stream, encryption

Procedia PDF Downloads 225
955 A Survey of WhatsApp as a Tool for Instructor-Learner Dialogue, Learner-Content Dialogue, and Learner-Learner Dialogue

Authors: Ebrahim Panah, Muhammad Yasir Babar

Abstract:

Thanks to the development of online technology and social networks, people are able to communicate as well as learn. WhatsApp is a popular social network that is steadily gaining popularity. It can be used for communication as well as education, supporting instructor-learner, learner-learner, and learner-content interactions; however, very little is known about these potentials of WhatsApp. The current study investigated university students' perceptions of WhatsApp as a tool for instructor-learner dialogue, learner-content dialogue, and learner-learner dialogue. The study adopted a survey approach and distributed a questionnaire built with Google Forms to 54 university students (11 males and 43 females). The data were analyzed using SPSS version 20. The results indicate that students hold positive attitudes towards WhatsApp in all three roles. Instructor-learner dialogue: it is easy to reach the lecturer (4.07), the instructor gives valuable feedback on assignments (4.02), and the instructor is supportive during course discussion and offers continuous support to the class (4.00). Learner-content dialogue: WhatsApp allows students to engage academically with lecturers anytime, anywhere (4.00), helps send graphics such as pictures or charts directly to students (3.98), and provides out-of-class, extra learning materials and homework (3.96). Learner-learner dialogue: WhatsApp is a good tool for sharing knowledge with others (4.09), allows students to engage academically with peers anytime, anywhere (4.07), and supports interaction through group discussion (4.02). Significant positive correlations were also found between students' perceptions of instructor-learner dialogue (ILD), learner-content dialogue (LCD), learner-learner dialogue (LLD), and WhatsApp use in the classroom. The findings of the study have implications for lecturers, policy makers, and curriculum developers.

Keywords: instructor-learner dialogue, learner-content dialogue, learner-learner dialogue, WhatsApp application

Procedia PDF Downloads 127
943 Empowering Women of Fishermen's Families through Functional Skills Training in Gorontalo City, Indonesia

Authors: Abdul Rahmat

Abstract:

Community-based education for the economic empowerment of families is an attempt to accelerate the human development index (HDI), specifically the economic (purchasing power) component, in Dumbo Raya District of Gorontalo. The functional skills developed in this programme are the manufacture of shredded fish, fish balls, fish nuggets, anchovy chips, and fish corn sticks. The target audience is mothers from fishing families across Dumbo Raya Subdistrict, covering Talumolo, Botu, Kampung Bugis, North Leato, and South Leato villages; each village was represented by 20 participants, for a total of 100 participants. The activities ran from October to November 2014 and were held once a week, every Saturday from 9:00 to 13:00 or 14:00. From the learning process and the testing of the functional skills of making shredded fish, fish balls, fish nuggets, anchovy chips, and fish corn sticks, residents gained additional knowledge and experience: 1) in concepts, including nutrient content, processing food with fish raw materials, variations in taste, packaging, pricing, and marketing; and 2) in products made in accordance with the wishes of the learners, which were judged fit for sale, including the creation of product packaging logos, the preparation and establishment of a Business Study Group (KBU), and the pioneering of a marketing network with restaurants and staple-food shops and vendors around the community learning centre (CLC).

Keywords: community development, functional skills, gender, HDI

Procedia PDF Downloads 283
953 Trauma System in England: An Overview and Future Directions

Authors: Raheel Shakoor Siddiqui, Sanjay Narayana Murthy, Manikandar Srinivas Cheruvu, Kash Akhtar

Abstract:

Major trauma is a dynamic public health epidemic that is continuously evolving. Major trauma care services rely on multi-disciplinary team input involving highly trained pre- and in-hospital critical care teams. Pre-hospital critical care teams (PHCCTs), major trauma centres (MTCs), trauma units, and rehabilitation facilities together form an efficient and organised trauma system. England has 27 MTCs funded by the National Health Service (NHS). Major trauma care entails enhanced resuscitation protocols coupled with the expertise of dedicated trauma teams and rapid radiological imaging to improve trauma outcomes. The literature reports a change in the demographic of major trauma, as elderly patients (silver trauma) with injuries sustained from falls of 2 metres or less now commonly present to services. Evidence of an ageing population with multiple comorbidities necessitates treatment within the first hour of injury (the golden hour) to improve trauma survival outcomes. Staffing and funding pressures within the NHS have led to a shortfall of available physician-led PHCCTs. Thus, there is a strong emphasis on targeted research and funding to deploy resources appropriately to deprived areas. This review article discusses the current English trauma system whilst critically appraising present challenges, identifying insufficiencies, and recommending aims for an improved future trauma system in England.

Keywords: trauma, orthopaedics, major trauma, trauma system, trauma network

Procedia PDF Downloads 150
952 The Mediating Role of Psychological Factors in the Relationships Between Youth Problematic Internet and Subjective Well-Being

Authors: Dorit Olenik-Shemesh, Tali Heiman

Abstract:

The rapid increase in the massive use of the internet in recent years has led to an increase in the prevalence of a phenomenon called 'problematic internet use' (PIU), an emerging and growing health problem, especially during adolescence, that poses a challenge for mental health research and practitioners. PIU is defined as excessive overuse of the internet, including an inability to control time spent online, cognitive preoccupation with the internet, and continued use in spite of adverse consequences, which may lead to psychological, social, and academic difficulties in one's life and daily functioning. However, little is known about the nature of the nexus between PIU and subjective well-being among adolescents. The main purpose of the current study was to explore in depth the network of connections between PIU, sense of well-being, and four personal-emotional factors (resilience, self-control, depressive mood, and loneliness) that may mediate these relationships. A total sample of 433 adolescents, 214 (49.4%) girls and 219 (50.6%) boys aged 12-17 (mean = 14.9, SD = 2.16), completed self-report questionnaires on the study variables. In line with the hypothesis, structural equation modeling (SEM) revealed the following main results: high levels of PIU predicted low levels of well-being among adolescents. In addition, three combinations mediated the relationship between PIU and well-being: low resilience together with high depressive mood, low self-control together with high depressive mood, and low resilience together with high loneliness. In general, girls were found to be higher than boys in both PIU and resilience. The study results have specific implications for developing intervention programs for adolescents in the context of PIU, aiming at a more balanced, adjusted use of the internet and at preventing declines in well-being.

Keywords: problematic internet use, well-being, adolescents, SEM model

Procedia PDF Downloads 140
951 Islamisation and Actor Networking in Halal Tourism: A Case Study in Central Java, Indonesia

Authors: Hariyadi, Rili Windiasih

Abstract:

Halal tourism is a recent global phenomenon that emerged out of the needs of Muslim tourists. However, works on halal tourism rarely discuss its connection to rising religiosity in Indonesia, since halal tourism has been studied mostly within tourism, business, and management studies. A few works on the increase of Islamic expressions in Indonesia do mention the recent boom in non-mandatory pilgrimage to Mecca and the emergence of sharia-compliant accommodation, yet they do not go into detail on the issue. To the best of our knowledge, there is a lack of more critical, cultural-political studies on halal tourism, a gap this paper attempts to fill. The paper is a result of fieldwork research in Central Java, Indonesia. The study focuses on sacred sites for pilgrimage and sharia-compliant hotels, combining in-depth interviews and participatory observation to gather the data. It is important to examine the network of halal tourism actors (businesspersons, local government, clerics, etc.) in Central Java, how they conceive halal tourism, and how their networking shapes halal tourism discourses, policies, and practices. Despite having numerous Islamic pilgrimage places and being designated by the Ministry of Tourism as one of 12 Muslim-friendly tourist destinations, the province is not yet widely recognised as a main destination for halal tourism, as it is known as a place for more secular, nationalist groups rather than for more Islamic-oriented ones. However, in some of its municipalities, there is increasingly more attention to developing halal tourism. In this study, we found that the development of halal tourism in Central Java is connected to dynamics of Islamisation and ideological competition, as well as to the influence of more pragmatist businesspersons, in a 'nationalist province' of Indonesia.

Keywords: actor networking, halal tourism, Islamisation, Indonesia

Procedia PDF Downloads 139
950 InSAR Times-Series Phase Unwrapping for Urban Areas

Authors: Hui Luo, Zhenhong Li, Zhen Dong

Abstract:

The analysis of multi-temporal InSAR (MTInSAR) techniques such as persistent scatterer (PS) and small baseline subset (SBAS) usually relies on temporal/spatial phase unwrapping (PU). Unfortunately, PU often fails for two reasons: 1) spatial phase jumps between adjacent pixels larger than π, as in layover and highly discontinuous terrain; and 2) temporal phase discontinuities, such as time-varying atmospheric delay. To overcome these limitations, a least-squares-based PU method is introduced in this paper, which incorporates baseline-combination interferograms and a network of adjacent phase gradients. Firstly, persistent scatterers (PS) are selected for study. Starting with the linear baseline-combination method, we obtain equivalent 'small baseline interferograms' to limit the spatial phase difference. Then, phase differencing is performed between connected PSs (linked by a specific networking rule) to suppress spatially correlated phase errors such as atmospheric artefacts. After that, the phase differences along arcs are computed by the least-squares method, followed by an outlier detector that removes arcs with phase ambiguities. The unwrapped phase is then obtained by spatial integration. The proposed method is tested on real TerraSAR-X data, and the results are compared with those obtained by StaMPS (a software package with 3D PU capabilities). The comparison shows that the proposed method successfully unwraps the interferograms in urban areas even where high discontinuities exist, while StaMPS fails. Finally, precise DEM errors can be obtained from the unwrapped interferograms.
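The integration step at the heart of gradient-based phase unwrapping can be illustrated in one dimension. A minimal, noise-free sketch (illustrative only, not the paper's arc-network implementation): wrap the phase gradients into (-π, π] and integrate them, which recovers the absolute phase whenever the true gradients stay below π in magnitude.

```python
import numpy as np

def unwrap_gradients(wrapped):
    """Wrap phase gradients into (-pi, pi], then integrate them: the 1-D,
    noise-free analogue of gradient-based (least-squares) phase unwrapping."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi        # wrapped phase gradients
    return wrapped[0] + np.concatenate(([0.0], np.cumsum(d)))

true_phase = np.linspace(0.0, 6.0 * np.pi, 50)   # smooth ramp over three cycles
wrapped = np.angle(np.exp(1j * true_phase))      # observed, wrapped into (-pi, pi]
recovered = unwrap_gradients(wrapped)            # matches true_phase
```

In 2-D, with noisy arcs, the integration becomes an over-determined least-squares problem over the PS network, which is where the outlier rejection described above is needed.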

Keywords: phase unwrapping, time series, InSAR, urban areas

Procedia PDF Downloads 121
949 Design and Radio Frequency Characterization of Radial Reentrant Narrow Gap Cavity for the Inductive Output Tube

Authors: Meenu Kaushik, Ayon K. Bandhoyadhayay, Lalit M. Joshi

Abstract:

Inductive output tubes (IOTs) are widely used as microwave power amplifiers for broadcast and scientific applications. An IOT is capable of amplifying radio frequency (RF) power with very good efficiency; its compactness, reliability, high efficiency, high linearity and low operating cost make the device suitable for various applications. The device consists of an integrated structure of electron gun and RF cavity, collector and focusing structure. The working principle of the IOT combines those of the triode and the klystron. The cathode in the electron gun produces a stream of electrons, and a control grid is placed in close proximity to the cathode. The input part of the IOT is the integrated structure of the gridded electron gun, which acts as an input cavity, providing the interaction gap where the input RF signal is applied to interact with the produced electron beam and support the amplification process. The paper presents the design, fabrication and testing of a radial re-entrant cavity for implementation in the input structure of an IOT at 350 MHz operating frequency. The model's suitability is discussed, and a generalized mathematical relation is introduced for obtaining the proper transverse magnetic (TM) resonating mode in radial narrow-gap RF cavities. The structural modeling has been carried out in the CST and SUPERFISH codes. The cavity is fabricated from aluminium, the RF characterization is done using a vector network analyzer (VNA), and the results are presented for the resonant frequency peaks obtained in the VNA.

Keywords: inductive output tubes, IOT, radial cavity, coaxial cavity, particle accelerators

Procedia PDF Downloads 89
948 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System

Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya

Abstract:

The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrains. Earthquakes have caused enormous damage in these regions in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate the damage caused by earthquakes. It consists of sensor nodes, distributed over the region, that perform majority voting on the output of the seismic sensors in the vicinity and relay a message to a base station to alert residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events from the incoming three-axis accelerometer signal at the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector, which detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized to minimize detection time and maximize detection accuracy. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS. In all test cases, the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time reduces linearly with the sampling rate: on the test data, the detection time was around 2 seconds for data sampled at 10 Hz, reducing to 0.3 seconds for data sampled at 100 Hz.
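A first-stage event tracker of the kind named in the keywords can be sketched with the classic STA/LTA ratio; the window lengths, threshold, and synthetic trace below are illustrative assumptions, not the authors' tuned parameters.

```python
import numpy as np

def sta_lta_trigger(signal, sta_len, lta_len, threshold):
    """Return the first sample index where the short-term/long-term average
    energy ratio exceeds the threshold, or -1 if it never triggers."""
    energy = np.asarray(signal, dtype=float) ** 2
    for i in range(lta_len, len(energy)):
        sta = energy[i - sta_len:i].mean()   # short window: recent energy
        lta = energy[i - lta_len:i].mean()   # long window: background level
        if lta > 0 and sta / lta > threshold:
            return i
    return -1

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 500)            # background noise
trace[300:] += rng.normal(0.0, 10.0, 200)    # hypothetical P-wave onset at sample 300
onset = sta_lta_trigger(trace, sta_len=10, lta_len=100, threshold=5.0)
```

The trade-off in the abstract is visible here: at a higher sampling rate, the same number of post-onset samples arrives sooner, so the trigger latency in seconds shrinks.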

Keywords: earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector

Procedia PDF Downloads 155
947 Covalently Conjugated Gold–Porphyrin Nanostructures

Authors: L. Spitaleri, C. M. A. Gangemi, R. Purrello, G. Nicotra, G. Trusso Sfrazzetto, G. Casella, M. Casarin, A. Gulino

Abstract:

Hybrid molecular–nanoparticle materials, obtained with a bottom-up approach, are suitable for the fabrication of functional nanostructures showing structural control and well-defined optical, electronic, or catalytic properties, with applications in different fields of nanotechnology. Gold nanoparticles (Au NPs) exhibit important chemical, electronic and optical properties due to their size, shape and electronic structures. In fact, Au NPs containing no more than 30-40 atoms are only luminescent, because they can be considered large molecules with discrete energy levels, while larger nano-sized Au NPs show only the surface plasmon resonance. Hence, gold nanoparticles can be either luminescent or plasmonic, and this represents a severe constraint for their use as an optical material. The aim of this work was the fabrication of a nanoscale assembly of Au NPs covalently anchored to each other by means of novel bi-functional porphyrin molecules that act as bridges between different gold nanoparticles. This functional architecture shows a strong surface plasmon due to the Au nanoparticles and a strong luminescence signal coming from the porphyrin molecules, thus behaving like an artificial organized plasmonic and fluorescent network. The self-assembly geometry of this porphyrin on the Au NPs was studied by investigating the conformational properties of the porphyrin derivative at the DFT level. The morphology, electronic structure and optical properties of the conjugated Au NPs – porphyrin system were investigated by TEM, XPS, UV–vis and luminescence. The present nanostructures can be used for plasmon-enhanced fluorescence, photocatalysis, nonlinear optics, etc., under atmospheric conditions, since our system is not reactive to air or water and does not need to be stored in a vacuum or inert gas.

Keywords: gold nanoparticle, porphyrin, surface plasmon resonance, luminescence, nanostructures

Procedia PDF Downloads 126
946 Formulation of Lipid-Based Tableted Spray-Congealed Microparticles for Zero Order Release of Vildagliptin

Authors: Hend Ben Tkhayat, Khaled Al Zahabi, Husam Younes

Abstract:

Introduction: Vildagliptin (VG), a dipeptidyl peptidase-4 (DPP-4) inhibitor, is an established agent for the treatment of type 2 diabetes. VG works by enhancing and prolonging the activity of incretins, which improves insulin secretion and decreases glucagon release, thereby lowering the blood glucose level. It is usually used with other classes, such as insulin sensitizers or metformin. VG is currently marketed only as an immediate-release tablet administered twice daily. In this project, we aimed to formulate extended-release tableted lipid microparticles of VG with a zero-order profile that could be administered once daily, ensuring the patient's convenience. Method: The spray-congealing technique was used to prepare VG microparticles. Compritol® was heated to 10 °C above its melting point, and VG was dispersed in the molten carrier using a homogenizer (IKA T25, USA) set at 13000 rpm. VG dispersed in the molten Compritol® was added dropwise to molten Gelucire® 50/13 and PEG® (400, 6000, and 35000) in different ratios under manual stirring. The molten mixture was homogenized and the Carbomer® amount was added. The melt was pumped through the two-fluid nozzle of a Buchi® spray congealer (Buchi B-290, Switzerland) using a pump drive (Masterflex, USA) connected to silicone tubing wrapped with silicone heating tape held at the temperature of the pumped mix. The physicochemical properties of the produced VG-loaded microparticles were characterized using a Mastersizer, scanning electron microscopy (SEM), differential scanning calorimetry (DSC) and X-ray diffraction (XRD). The VG microparticles were then pressed into tablets using a single-punch tablet machine (YDP-12, Minhua Pharmaceutical Co., China), and an in vitro dissolution study was performed using an Agilent dissolution tester (Agilent, USA). The dissolution test was carried out at 37±0.5 °C for 24 hours in three different dissolution media and time phases. The quantitative analysis of VG in samples was performed using a validated high-pressure liquid chromatography (HPLC-UV) method. Results: The microparticles were spherical in shape with a narrow size distribution and smooth surface. DSC and XRD analyses confirmed the crystallinity of VG, which was lost after incorporation into the amorphous polymers. The total yields of the different formulas were between 70% and 80%. The VG content in the microparticles was between 99% and 106%. The in vitro dissolution study showed that VG was released from the tableted particles in a controlled fashion. Adjusting the hydrophilic/hydrophobic ratio of excipients, their concentrations, and the molecular weight of the carriers produced tablets with zero-order kinetics. Gelucire® 50/13, a hydrophilic polymer, was characterized by a time-dependent profile with a substantial burst effect, which was reduced by adding Compritol® as a lipophilic carrier to retard the release of VG, which is highly soluble in water. PEG® (400, 6000, and 35000) were used for their gelling effect, which led to a constant delivery rate and a zero-order profile. Conclusion: Tableted spray-congealed lipid microparticles for extended release of VG were successfully prepared, and a zero-order profile was achieved.
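A zero-order release profile means the cumulative amount released is linear in time. A small sketch of how such a profile might be checked, using hypothetical dissolution data (not the study's measurements):

```python
import numpy as np

# Hypothetical cumulative-release data (% of dose vs. hours). A zero-order
# profile means release is linear in time, i.e. a constant rate k0.
t = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 18.0, 24.0])
release = np.array([0.0, 8.0, 17.0, 33.0, 49.0, 75.0, 99.0])

k0, intercept = np.polyfit(t, release, 1)         # slope = zero-order rate (%/h)
pred = k0 * t + intercept
ss_res = ((release - pred) ** 2).sum()
ss_tot = ((release - release.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot                        # near 1 for a zero-order profile
```

An R² close to 1 for the linear fit, with no early burst deviation, is the signature of the constant-rate delivery the formulation aims for.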

Keywords: vildagliptin, spray congealing, microparticles, controlled release

Procedia PDF Downloads 98
945 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic load dispatch (ELD) is one of the vital optimization problems in power system planning. Solving an ELD problem means finding the best mixture of power unit outputs across the power system network such that the total fuel cost is minimized while operating-requirement limits remain satisfied across all dispatch phases. Many optimization techniques have been proposed to solve this problem. A famous one is quadratic programming (QP). QP is a very simple and fast method, but, like other gradient methods, it can become trapped in local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have also been used, such as genetic algorithms (GAs) and particle swarm optimization (PSO). In this paper, another metaheuristic search algorithm, differential evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified on three- and six-generator test cases. The results are compared to existing results based on QP, GAs, and PSO, and show that differential evolution is superior in obtaining a combination of power loads that fulfils the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal power generation loads and to handle the nonlinearity of the ELD problem. The proposed DE solution minimizes the cost of generated power, minimizes the total power loss in transmission, and maximizes the reliability of the power provided to customers.
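A minimal DE sketch for a toy ELD instance: three generators with hypothetical quadratic fuel-cost coefficients and a penalty term enforcing the power balance. The coefficients, demand, and DE settings are illustrative assumptions, not the paper's test systems.

```python
import random

# Hypothetical 3-generator data: fuel cost_i(P) = c*P^2 + b*P + a, P in [Pmin, Pmax]
GENS = [(0.008, 7.0, 200.0, 100.0, 500.0),
        (0.009, 6.3, 180.0, 100.0, 500.0),
        (0.007, 6.8, 140.0, 100.0, 500.0)]
DEMAND = 850.0

def cost(p):
    fuel = sum(c * x * x + b * x + a for (c, b, a, _, _), x in zip(GENS, p))
    return fuel + 1e4 * abs(sum(p) - DEMAND)   # penalty enforcing power balance

def de(pop_size=30, iters=300, F=0.6, CR=0.9, seed=1):
    rng = random.Random(seed)
    lo = [g[3] for g in GENS]
    hi = [g[4] for g in GENS]
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) if rng.random() < CR
                     else pop[i][d] for d in range(len(GENS))]
            trial = [min(h, max(l, x)) for x, l, h in zip(trial, lo, hi)]
            if cost(trial) <= cost(pop[i]):    # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

best = de()   # generator outputs summing to (approximately) the demand
```

The mutation/crossover/greedy-selection loop is the standard DE/rand/1/bin scheme; real ELD formulations add transmission-loss terms and ramp limits on top of this skeleton.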

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 259
944 Load Balancing Technique for Energy Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. It is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction; the cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: the dynamic workload must be distributed across multiple nodes so that no single node is overloaded. It helps in the optimal utilization of resources and enhances the performance of the system. The goal of load balancing here is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing. This motivates new metrics, energy consumption and carbon emission, for energy-efficient load balancing techniques. Existing load balancing techniques mainly focus on reducing overhead, reducing service response time, and improving performance, but few have considered energy consumption and carbon emission. Therefore, in this paper we introduce a load balancing technique oriented towards energy efficiency. This technique can improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent that will help to achieve green computing.
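The workload-distribution idea can be sketched with a standard greedy baseline: assign each task to the currently least-loaded node, taking larger tasks first (the LPT rule). This is a generic heuristic for illustration, not the specific technique proposed in the paper.

```python
import heapq

def balance(tasks, n_nodes):
    """Assign each task to the currently least-loaded node (LPT heuristic)."""
    heap = [(0.0, i) for i in range(n_nodes)]   # (current load, node id)
    assignment = [[] for _ in range(n_nodes)]
    for t in sorted(tasks, reverse=True):       # largest tasks first
        load, i = heapq.heappop(heap)
        assignment[i].append(t)
        heapq.heappush(heap, (load + t, i))
    return assignment

tasks = [7, 3, 5, 2, 8, 4]                      # hypothetical task sizes
nodes = balance(tasks, 3)
loads = sorted(sum(n) for n in nodes)           # balanced: [9, 10, 10]
```

An energy-aware variant would replace the load key with an estimated energy cost per node, which is the direction the abstract argues for.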

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 419
943 A Modular and Reusable Bond Graph Model of Epithelial Transport in the Proximal Convoluted Tubule

Authors: Leyla Noroozbabaee, David Nickerson

Abstract:

We introduce a modular, consistent, reusable bond graph model of the renal nephron's proximal convoluted tubule (PCT) that can reproduce biological behaviour. In this work, we focus on ion and volume transport in the PCT. Modelling complex systems requires complex modelling problems to be broken down into manageable pieces; this can be enabled by developing models of subsystems that are subsequently coupled hierarchically, which bond graphs support naturally because they are based on a graph structure. In the current work, we define two modular subsystems: a resistive module representing the membrane and a capacitive module representing solution compartments. Each module is analyzed in terms of thermodynamic processes, and all the subsystems are reintegrated using circuit theory in network thermodynamics. The epithelial transport system we introduce consists of five transport membranes and four solution compartments. Coupled dissipations occur in the membrane subsystems, and coupled free-energy-increasing or -decreasing processes appear in the solution compartment subsystems. These structural subsystems comprise elementary thermodynamic processes: dissipation, free-energy change, and power conversion. We provide free and open access to the Python implementation to ensure our model is accessible, enabling readers to explore the model by setting up their own simulations and reproducibility tests.
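The resistive/capacitive decomposition can be illustrated with the smallest possible network: one membrane (resistive module) between two compartments (capacitive modules), with flow driven by the concentration difference. The values and the simple linear flux law are illustrative assumptions, not the paper's fitted parameters or full thermodynamic formulation.

```python
def simulate(c1=10.0, c2=1.0, perm=0.2, vol=1.0, dt=0.01, steps=2000):
    """One resistive module (membrane) between two capacitive modules
    (compartments): flux is proportional to the concentration difference,
    integrated with forward Euler."""
    for _ in range(steps):
        flux = perm * (c1 - c2)       # resistive module: flow ~ driving force
        c1 -= flux * dt / vol         # capacitive modules integrate the flow
        c2 += flux * dt / vol
    return c1, c2

c1, c2 = simulate()                   # both relax toward the mean, 5.5
```

Chaining five such membranes and four compartments, with coupled rather than independent fluxes, yields the kind of network the abstract describes.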

Keywords: bond graph, epithelial transport, water transport, mathematical modeling

Procedia PDF Downloads 56
942 Model-Based Fault Diagnosis in Carbon Fiber Reinforced Composites Using Particle Filtering

Authors: Hong Yu, Ion Matei

Abstract:

Carbon fiber reinforced polymer (CFRP) composites used as aircraft structures are subject to lightning strikes, putting structural integrity at risk. Indirect damage may occur after a lightning strike, where the internal structure is damaged by the excessive heat induced by the lightning current while the surface of the structure remains intact. Three damage modes may be observed after a lightning strike: fiber breakage, inter-ply delamination, and intra-ply cracks. The assessment of internal damage states in composites is challenging due to the complicated microstructure, inherent uncertainties, and the existence of multiple damage modes. In this work, a model-based approach is adopted to diagnose faults in carbon composites after lightning strikes. A resistor network model relates the overall electrical and thermal conduction behavior under a simulated lightning current waveform to the intrinsic temperature-dependent material properties, microstructure, and degradation of the material. A fault detection and identification (FDI) module utilizes the physics-based model and a particle filtering algorithm to identify the damage mode as well as to calculate the probability of structural failure. Extensive simulation results substantiate the proposed fault diagnosis methodology in both single-fault and multiple-fault cases. The approach is also demonstrated on transient resistance data collected from an IM7/epoxy laminate under a simulated lightning strike.
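The particle filtering step can be sketched with a generic bootstrap (SIR) filter tracking a resistance-like state from noisy measurements. The random-walk process model and Gaussian measurement likelihood below are illustrative stand-ins, not the paper's resistor-network damage model.

```python
import numpy as np

def bootstrap_filter(observations, n_particles=2000, obs_noise=0.5, seed=0):
    """Generic SIR particle filter: predict with a random-walk process model,
    weight by a Gaussian measurement likelihood, then resample."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.0, 20.0, n_particles)      # prior over the state
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, 0.1, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_noise) ** 2)      # weight
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        estimates.append(particles.mean())
    return estimates

rng = np.random.default_rng(1)
obs = 10.0 + rng.normal(0.0, 0.5, 30)   # synthetic noisy resistance-like readings
estimate = bootstrap_filter(obs)[-1]    # posterior mean after all measurements
```

In an FDI setting, the state would encode damage-mode parameters of the physics model, and the particle population would also yield the failure probability as a weighted fraction of particles past a threshold.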

Keywords: carbon composite, fault detection, fault identification, particle filter

Procedia PDF Downloads 172
941 Enhanced Solar-Driven Evaporation Process via f-MWCNTs/PVDF Photothermal Membrane for Forward Osmosis Draw Solution Recovery

Authors: Ayat N. El-Shazly, Dina Magdy Abdo, Hamdy Maamoun Abdel-Ghafar, Xiangju Song, Heqing Jiang

Abstract:

Product water recovery and draw solution (DS) reuse is the most energy-intensive stage in forward osmosis (FO) technology. Sucrose solution is the most suitable DS for FO applications in food and beverages. However, sucrose DS recovery by conventional pressure-driven or thermal-driven concentration techniques consumes high energy. Herein, we developed a spontaneous and sustainable solar-driven evaporation process based on a photothermal membrane for the concentration and recovery of sucrose solution. The photothermal membrane is composed of a functionalized multi-walled carbon nanotube (f-MWCNTs) photothermal layer on a hydrophilic polyvinylidene fluoride (PVDF) substrate. The f-MWCNTs photothermal layer, with a rough surface and interconnected network structures, not only improves the light-harvesting and light-to-heat conversion performance but also facilitates the transport of water molecules. The hydrophilic PVDF substrate can promote the rapid transport of water for an adequate water supply to the photothermal layer. As a result, the optimized f-MWCNTs/PVDF photothermal membrane exhibits an excellent light absorption of 95% and a high surface temperature of 74 °C at 1 kW m⁻². Besides, it realizes an evaporation rate of 1.17 kg m⁻² h⁻¹ for 5% (w/v) sucrose solution, which is about 5 times higher than that of natural evaporation. The designed photothermal evaporation process is capable of concentrating sucrose solution efficiently from 5% to 75% (w/v), which has great potential in the FO process and juice concentration.

Keywords: solar, photothermal, membrane, MWCNT

Procedia PDF Downloads 59
940 Homophily in Youth Athletics: Sociodemographics, Group Cohesion, and the Psychology of Performance in Sport

Authors: Brandon Ko

Abstract:

Whether it is a kitchen staff or a law firm, many groups tend to have homogeneous characteristics of race, gender, interests, and goals. Social groups are not typically random samples of the population and will usually have common identifiers. According to Blau, age, sex, and education all play salient roles in shaping relationships among members of society. So if there is some degree of homogeneity within groups, the question arises whether this is beneficial or harmful to a group’s effectiveness. There has been much disagreement in the scientific community as to whether the presence of homophily benefits or hinders an athletic team's cohesiveness. For this paper, soccer case studies that followed various youth players were compared against examinations of the effects that such a culture has on athletes. The case studies were used as evidence to determine what kind of homophily existed within the soccer camps. One case study followed several European developmental clubs such as Bayern Munich and Barcelona. Another study followed eight different players, four of each gender, implementing a similar method of interviewing, observing, and questioning. The individual and team goals of each athlete were reviewed to see which teams and players were ego-oriented and which were team-oriented. Additionally, there has been little research on the relationship between homophily and how it applies to the sport community, suggesting the need to develop this neglected problem in applied psychology. This paper argues that the benefits of an egalitarian culture and stronger relations with people of a similar socio-demographic background outweigh the liabilities produced by homophily in athletic competition, such as stereotyping and a lack of networks outside the group.

Keywords: group cohesion, homophily, sports psychology, youth athletics

Procedia PDF Downloads 263
939 Dielectric Properties of Mineral Oil Blended with Soyabean Oil for Power Transformers: A Laboratory Investigation

Authors: Deepa S. N., Srinivasan A. D., Veeramanju K. T.

Abstract:

The power transformer is a critical piece of equipment in the transmission and distribution network that must be managed to ensure uninterrupted power service. The liquid insulation is essential for the proper functioning of the transformer, as it serves as both coolant and insulating medium, which influences the transformer’s durability. Further, the insulating state of a power transformer has a significant impact on its reliability. Mineral oil derived from petroleum crude oil has been employed as a liquid dielectric for decades due to its superior functional characteristics; however, this resource has been getting depleted over the years. Research is undertaken across the globe to identify a viable substitute for mineral oil. Further, alternative insulating oils are being investigated for better environmental impact, biodegradability, and economics. Several combinations of vegetable-oil-derived natural esters are being inspected by researchers across the globe in these domains. In this work, mineral oil is blended with soyabean oil in various proportions, and dielectric properties such as dielectric breakdown voltage, relative permittivity, dissipation factor, viscosity, and flash and fire points have been investigated according to international standards. A quantitative comparison is made among the various samples, and it is observed that the blended oil sample with equal proportions of mineral oil and soyabean oil, MO50+SO50, exhibits superior dielectric properties, such as a breakdown voltage of 65 kV, a dissipation factor of 0.0044, and a relative permittivity of 3.1680, which are closer to the range of values recommended for power transformer applications. Also, the breakdown voltage values of all the investigated oil samples obeyed the Weibull and normal probability distributions.
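The Weibull check mentioned at the end of the abstract is commonly done with a probability plot: least-squares fitting of the linearized Weibull CDF. A hedged sketch follows; the voltage values below are illustrative, not the paper's measurements.

```python
import math

# Fit a two-parameter Weibull distribution to breakdown-voltage data via
# the classical probability-plot method: linearize the CDF,
# F(v) = 1 - exp(-(v/eta)^beta)  =>  ln(-ln(1-F)) = beta*ln(v) - beta*ln(eta),
# and least-squares fit the resulting line. Data below are invented.

def weibull_fit(data):
    xs = sorted(data)
    n = len(xs)
    # Median-rank estimate of the empirical CDF at each sorted point
    pts = [(math.log(v), math.log(-math.log(1.0 - (i + 0.5) / n)))
           for i, v in enumerate(xs)]
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    beta = (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))        # shape
    eta = math.exp(mx - my / beta)                        # scale
    return beta, eta

voltages = [58, 61, 63, 64, 65, 66, 67, 69, 71, 74]      # kV, illustrative
shape, scale = weibull_fit(voltages)
# 'scale' is the 63.2nd-percentile breakdown voltage; 'shape' its spread.
```

A large shape parameter (low scatter) indicates consistent breakdown behavior across samples.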

Keywords: blended oil, dielectric breakdown, liquid insulation, power transformer

Procedia PDF Downloads 56
938 Supplier Selection Using Sustainable Criteria in Sustainable Supply Chain Management

Authors: Richa Grover, Rahul Grover, V. Balaji Rao, Kavish Kejriwal

Abstract:

Selection of suppliers is a crucial problem in supply chain management. On top of that, sustainable supplier selection is the biggest challenge for organizations. Environmental protection and social problems have been of concern to society in recent years, and traditional supplier selection does not consider these factors; therefore, this research work focuses on introducing sustainable criteria into the structure of supplier selection criteria. Sustainable Supply Chain Management (SSCM) is the management and administration of material, information, and money flows, as well as coordination among businesses along the supply chain. All three dimensions of sustainable development - economic, environmental, and social - need to be taken care of. The purpose of this research is to maximize supply chain profitability, maximize the social wellbeing of the supply chain, and minimize environmental impacts. The problem addressed is the selection of suppliers in a sustainable supply chain network by ranking the suppliers against the sustainable criteria identified. The aim of this research is twofold: to find out what sustainable parameters can be applied to the supply chain, and to determine how these parameters can effectively be used in supplier selection. Multi-criteria decision making (MCDM) tools will be used to rank both criteria and suppliers. AHP analysis, a technique for efficient decision making, will be used to find ratings for the criteria identified. TOPSIS will be used to rate the suppliers and then rank them. TOPSIS is an MCDM problem-solving method based on the principle that the chosen option should have the maximum distance from the negative ideal solution (NIS) and the minimum distance from the ideal solution.
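The TOPSIS ranking step described above can be sketched compactly. The decision matrix, weights, and criterion directions below are illustrative assumptions (in the paper, the weights would come from the AHP step).

```python
import math

# Minimal TOPSIS: vector-normalize, weight, find ideal / negative-ideal
# solutions, then score each supplier by relative closeness. The supplier
# data and weights are invented for illustration.

def topsis(matrix, weights, benefit):
    ncols = len(weights)
    # 1. Vector-normalize each column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    V = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    # 2. Ideal (best) and negative-ideal (worst) value per criterion
    best  = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    # 3. Closeness coefficient: far from worst, near best
    scores = []
    for row in V:
        d_best  = math.sqrt(sum((v - b) ** 2 for v, b in zip(row, best)))
        d_worst = math.sqrt(sum((v - w) ** 2 for v, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Columns: cost (lower better), CO2 (lower better), social score (higher better)
suppliers = [[350, 40, 7],
             [420, 25, 9],
             [300, 55, 5]]
weights = [0.4, 0.35, 0.25]            # e.g. taken from AHP pairwise comparisons
benefit = [False, False, True]
scores = topsis(suppliers, weights, benefit)
ranking = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
```

Here the second supplier wins: its low emissions and high social score outweigh its higher cost under these weights.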

Keywords: sustainable supply chain management, sustainable criteria, MCDM tools, AHP analysis, TOPSIS method

Procedia PDF Downloads 297
937 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
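The objective being optimized, the cumulative probability of target detection along a discrete path, can be illustrated on a toy instance. The paper solves a MIP with CPLEX; here, for a single agent on a tiny grid, we simply enumerate all move sequences. The grid, target priors, and glimpse detection probability are invented for the sketch.

```python
import itertools

# Toy single-agent search on a 4x4 grid: maximize cumulative probability
# of detecting a target over a 5-step path. Values are illustrative only;
# the paper's formulation handles multiple agents via MIP instead.

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]   # incl. "stay"
prior = {(2, 2): 0.5, (0, 3): 0.3, (3, 0): 0.2}      # target location beliefs
P_DETECT = 0.8                                        # glimpse detection prob.

def path_value(start, moves):
    """Cumulative probability of detection along the visited cells."""
    pos, miss = start, dict(prior)                    # residual undetected mass
    value = 0.0
    for dx, dy in moves:
        pos = (min(3, max(0, pos[0] + dx)), min(3, max(0, pos[1] + dy)))
        if pos in miss:
            value += miss[pos] * P_DETECT
            miss[pos] *= (1.0 - P_DETECT)             # update residual belief
    return value

# Exhaustive enumeration of all 5^5 move sequences (tractable at toy scale).
best_value, best_path = max(
    (path_value((0, 0), m), m) for m in itertools.product(MOVES, repeat=5)
)
```

The best plan heads for the highest-probability cell and re-searches it, exactly the feedback effect (residual belief after a failed glimpse) that the MIP formulation captures at scale.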

Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization

Procedia PDF Downloads 342
936 DWDM Network Implementation in the Honduran Telecommunications Company "Hondutel"

Authors: Tannia Vindel, Carlos Mejia, Damaris Araujo, Carlos Velasquez, Darlin Trejo

Abstract:

DWDM (Dense Wavelength Division Multiplexing) is in constant growth around the world, driven by consumer demand. From its inception arises the need for a system that enables the communication of an entire nation to expand and improves the computing trends of its society according to its customs and geographical location. The Honduran Company of Telecommunications (HONDUTEL) provides internet services and data transport with PDH and SDH technology, which represents, in the Republic of Honduras, C.A., a viable option for the consumer in terms of purchase value and ease of acquisition; however, it lacks efficiency in terms of technological advance and represents an obstacle that limits long-term socio-economic development in comparison with other countries in the region, and hinders competition among the telecommunications companies engaged in this field. For that reason, we propose to establish in our country a technological trend already implemented in Europe, DWDM, which provides broadband data transfer; in this way, we will have a stable, quality service that will allow us to compete in this globalized world, replacing the older technology with one that provides a better service and is at the forefront. Once implemented, the DWDM network builds upon existing resources, such as the equipment in use, and gives life to a new stage, providing a business image to the Republic of Honduras, C.A., as a nation, and ensuring data transport and broadband internet in a meaningful way. This benefits, in the first instance, existing customers, as well as all the public and private institutions in need of such services.

Keywords: demultiplexers, light detectors, multiplexers, optical amplifiers, optical fibers, PDH, SDH

Procedia PDF Downloads 227
935 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment

Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Boubigot

Abstract:

Stringent fire safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might imply. This is so true that the fire issue is considered right from the design of the aircraft structure. Due to the incorporation of an increasing amount of polymer matrix composites in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of materials used is consequently of prime importance, as well as the evaluation of their resistance to fire. Fire testing is mostly done using so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in a fire zone (ability to withstand fire for 5 min). The test consists of exposing a sample of at least 300 × 300 mm² to an 1100 °C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming, expensive, and gives access to limited information on the fire behavior of the materials (pass or fail). Consequently, it can barely be used for material development purposes. In this context, the laboratory UMET, in collaboration with industrial partners, has developed horizontal and vertical small-scale instrumented fire benches for the characterization of the fire behavior of composites. The benches, using smaller samples (no more than 150 × 150 mm²), enable costs to be cut down and hence increase sampling throughput. However, the main added value of our benches is the instrumentation used to collect useful information to understand the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical set-up, a complete characterization of the degradation process can be achieved via mass loss measurements and quantification of the gases released during the tests.
These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal set-up has been used in particular to study the performance and durability of protective intumescent coatings on 2 mm thick 2D laminates. The efficiency of this approach has been validated, and the optimized coating thickness has been determined, as well as the performance after aging. Reductions of the performance after aging were attributed to the migration of some of the coating additives. The vertical set-up has enabled investigation of the degradation process of composites under fire. An isotropic and a unidirectional 4 mm thick laminate have been characterized using the bench and post-fire analyses. The mass loss measurements and the gas phase analyses of the two composites do not present significant differences, unlike the temperature profiles in the thickness of the samples. The differences have been attributed to differences in thermal conductivity, as well as to delamination, which is much more pronounced for the isotropic composite (observed on the IR images). This has been confirmed by X-ray microtomography. The developed benches have proven to be valuable tools to develop fire-safe composites.

Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography

Procedia PDF Downloads 246
934 Machine Learning Facing the Behavioral Noise Problem in Imbalanced Data Using One Side Behavioral Noise Reduction: Application to Fraud Detection

Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada

Abstract:

With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance. It refers to an imbalanced distribution of instances belonging to each class. This problem is present in many real-world applications such as fraud detection, network intrusion detection, medical diagnostics, etc. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty when tackling this problem, since it is initially designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies these imbalanced data, is the overlapping of instances between the two classes. It is commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR). This approach presents a way to deal with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. Firstly, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Secondly, we select and eliminate the instances of the majority class, considered as behavioral noise, which overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is efficient for the treatment of class imbalance in the presence of noise.
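The two-step idea (cluster the minority class, then drop overlapping majority instances) can be sketched as follows. This is a simplified, hypothetical illustration, not the paper's OSBNR algorithm: a tiny k-means stands in for the unspecified clustering step, and the data, cluster count, and overlap radius are invented.

```python
import math
import random

# Sketch: (1) cluster the minority class into behavior clusters,
# (2) remove majority instances that overlap those clusters ("one side":
# only majority instances are ever removed). All values are illustrative.

random.seed(1)
minority = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(20)] \
         + [(random.gauss(3, 0.3), random.gauss(3, 0.3)) for _ in range(20)]
majority = [(random.gauss(1.5, 1.5), random.gauss(1.5, 1.5)) for _ in range(400)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, k=2, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                   if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

centers = kmeans(minority, k=2)          # minority "behavior clusters"
RADIUS = 1.0                             # overlap threshold (assumed)
cleaned_majority = [p for p in majority
                    if all(dist(p, c) > RADIUS for c in centers)]
# Majority instances inside a minority cluster's radius are treated as
# behavioral noise and discarded; minority instances are left untouched.
```

Removing only overlapping majority points both rebalances the classes and clears the decision boundary around the minority clusters.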

Keywords: machine learning, imbalanced data, data mining, big data

Procedia PDF Downloads 103
933 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facilities location, consumers’ allocation, and facilities configuration to minimize the total cost (CT) of the entire network. These facilities can be manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to them. To address this problem, three major tasks should be undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system’s behavior under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimal solution, integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG. GASG is a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. Also, the effect of genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.
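The facility-location flavour of the problem, and the genetic-algorithm machinery mentioned above, can be illustrated with a toy. This is not the paper's GASG approach: the candidate sites, costs, and GA settings below are invented, and the genome simply encodes which distribution centres to open, with each retailer served by its nearest open DC.

```python
import random

# Toy GA for a facility-location cost minimization: genome = open/closed
# bit per candidate DC; fitness = opening costs + Manhattan transport cost.
# Sites, costs, and GA parameters are illustrative assumptions.

random.seed(4)
sites = [(0, 0), (5, 0), (0, 5), (5, 5), (2, 3)]        # candidate DCs
open_cost = [40, 30, 30, 40, 55]
retailers = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(30)]

def total_cost(genome):
    opened = [s for s, g in zip(sites, genome) if g]
    if not opened:
        return float("inf")                              # must open something
    transport = sum(min(abs(r[0] - s[0]) + abs(r[1] - s[1]) for s in opened)
                    for r in retailers)
    return sum(c for c, g in zip(open_cost, genome) if g) + transport

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in sites] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        survivors = pop[:pop_size // 2]                  # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(sites))        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                    # bit-flip mutation
                i = random.randrange(len(sites))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=total_cost)

best = evolve()
best_cost = total_cost(best)
```

In GASG, the fitness evaluation would instead come from a granular discrete-event simulation of the full network rather than this closed-form cost.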

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 276
932 An Intelligent Transportation System for Safety and Integrated Management of Railway Crossings

Authors: M. Magrini, D. Moroni, G. Palazzese, G. Pieri, D. Azzarelli, A. Spada, L. Fanucci, O. Salvetti

Abstract:

Railway crossings are complex entities whose optimal management cannot be achieved without the help of an intelligent transportation system integrating information on both train and vehicular flows. In this paper, we propose an integrated system named SIMPLE (Railway Safety and Infrastructure for Mobility applied at level crossings) that, while providing unparalleled safety in railway level crossings, collects data on rail and road traffic and provides value-added services to citizens and commuters. Such services include, for example, alerts to drivers via variable message signs and suggestions for alternative routes, towards a more sustainable, eco-friendly, and efficient urban mobility. To achieve these goals, SIMPLE is organized as a System of Systems (SoS), with a modular architecture whose components range from specially designed radar sensors for obstacle detection to smart ETSI M2M-compliant camera networks for urban traffic monitoring. Computational units for performing forecasts according to adaptive models of train and vehicular traffic are also included. The proposed system has been tested and validated during an extensive trial held in the mid-sized Italian town of Montecatini, a paradigmatic case where the rail network is inextricably linked with the fabric of the city. Results of the tests are reported and discussed.

Keywords: Intelligent Transportation Systems (ITS), railway, railroad crossing, smart camera networks, radar obstacle detection, real-time traffic optimization, IoT, ETSI M2M, transport safety

Procedia PDF Downloads 474
931 Extraction of Cellulose Nanofibrils from Pulp Using Enzymatic Pretreatment and Evaluation of Their Papermaking Potential

Authors: Ajay Kumar Singh, Arvind Kumar, S. P. Singh

Abstract:

Cellulose nanofibrils (CNF) have shown potential for extensive use in various fields, including papermaking, due to their unique characteristics. In this study, CNFs were prepared by fibrillating pulp obtained from raw materials, e.g., bagasse, hardwood, and softwood, using enzymatic pretreatment followed by mechanical refining. These nanofibrils, when examined under FE-SEM, show that partial fibrillation on the fiber surface has resulted in the production of nanofibers. Mixing these nanofibers with unrefined and normally refined fibers shows their reinforcing effect. This effect is manifested in the improvement of physical and mechanical properties, e.g., the tensile index and burst index of paper. Tear index, however, was observed to decrease on blending with nanofibers. The optical properties of paper sheets made from blended fibers showed no significant change in comparison to those made from only mechanically refined pulp. Mixing of normal pulp fibers with nanofibers shows an increase in °SR and a consequent decrease in drainage rate. An attempt has been made to explain these changes observed in the mechanical, optical, and other physical properties of the paper sheets made from nanofibril-blended pulp by considering the distribution of the nanofibrils alongside microfibrils in the fibrous network. Since papers/boards with higher strength are usually observed to have diminished optical properties, which is a drawback in their quality, the present work has the potential for developing papers/boards having improved strength along with undiminished optical properties, utilising the concepts of nanoscience and nanotechnology.

Keywords: enzymatic pretreatment, mechanical refining, nanofibrils, paper properties

Procedia PDF Downloads 325
930 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density

Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita

Abstract:

Herein, the synthesis and characterization of a chitosan-gelatin highly porous scaffold reinforced with graphene oxide and hydroxyapatite (HAp), crosslinked with genipin, was targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability of sustaining cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class cross-linker, while graphene oxide (GO) is viewed as one of the most performant and versatile fillers. The composites, with the natural bone HAp/biopolymer ratio, were obtained by cascading sonochemical treatments, followed by uncomplicated casting methods and by freeze-drying. Their structure was characterized by Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD), while the overall morphology was investigated by scanning electron microscopy (SEM) and micro-computed tomography (µ-CT). Following that, in vitro enzyme degradation was performed to detect the most promising compositions for the development of in vivo assays. Suitable GO dispersion was ascertained within the biopolymer mix, as the specific signals of the nanolayers are absent in both the FTIR and XRD spectra, while the specific spectral features of the polymers persisted with GO load enhancement. Overall, the GO-induced material structuration, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution follows an auspicious density gradient tuned for hybrid osseous/cartilage matter architectures, which was mirrored in the mice model tests.
Hence, the synthesis route of the natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705 and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from European Regional Development Fund 'Investments for your future' under the project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to European Regional Development Fund through Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.

Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite

Procedia PDF Downloads 85
929 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) microscopy are that large areas of a few square centimeters can be measured in minutes and that the light-intensive QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the laser direct infrared imaging (LDIR) 8700 system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable as, by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct “on-filter” analysis.
This study explores the potential to combine the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations in these techniques that can increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation, and excellent S/N, the LDIR could also be implemented for various other samples, such as food adulteration, coatings, laminates, fabrics, textiles, and tissues. This presentation will highlight a few of them and focus on the benefits of the LDIR versus classical techniques.

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 33
928 Time Series Analysis: The Case of China-USA Trade during COVID-19, Examining the Enormity of Abnormal Pricing with the Exchange Rate

Authors: Md. Mahadi Hasan Sany, Mumenunnessa Keya, Sharun Khushbu, Sheikh Abujar

Abstract:

Since the beginning of China's economic reform, trade between the U.S. and China has grown rapidly, and it has increased further since China's accession to the World Trade Organization in 2001. The U.S. imports more from China than it exports to it; the China-U.S. trade war reduced the trade deficit in 2019, but in 2020 the opposite happened. In international and U.S. trade, Washington launched a full-scale trade war against China in March 2016, after which a catastrophic epidemic occurred. The main goal of our study is to measure and predict trade relations between China and the U.S. before and after the arrival of the COVID epidemic. A conventional ML model uses different data as input but lacks the time dimension present in time series models, and is only able to predict the future from previously observed data. The LSTM (a well-known recurrent neural network) model is applied as the best time series model for trade forecasting. We have been able to create a sustainable forecasting system for trade between China and the U.S. by closely monitoring a dataset published by the state website NZ Tatauranga Aotearoa, covering January 1, 2015, to April 30, 2021. Throughout the survey, we provided a 180-day forecast that outlined what would happen to trade between China and the U.S. during COVID-19. In addition, we have illustrated that the LSTM model provides outstanding outcomes in time series data analysis compared with RFR and SVR (both ML models). The study looks at how the current COVID outbreak affects China-U.S. trade. As a comparative study, the RMSE is calculated for LSTM, RFR, and SVR. From our time series analysis, it can be said that the LSTM model has given very favorable results regarding the future export situation in China-U.S. trade.
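A step implicit in comparing LSTM, RFR, and SVR on a trade series is recasting the series as supervised (lag-window, next-value) pairs and scoring forecasts by RMSE. The short sketch below shows that framing with an invented toy series and a naive "persistence" baseline; it is not the paper's dataset or models.

```python
import math

# Recast a time series as supervised learning samples: each window of
# `lag` past values predicts the next value. RMSE then compares any
# forecaster (LSTM, RFR, SVR, ...) on the same footing. Toy data only.

def make_windows(series, lag):
    """Return (X, y): windows of `lag` values and the value that follows."""
    X = [series[i:i + lag] for i in range(len(series) - lag)]
    y = [series[i + lag] for i in range(len(series) - lag)]
    return X, y

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual))
                     / len(actual))

series = [100, 103, 101, 106, 110, 108, 115, 117, 116, 121]  # toy export values
X, y = make_windows(series, lag=3)

# Naive persistence baseline: predict the last value of each window.
naive_pred = [w[-1] for w in X]
baseline = rmse(naive_pred, y)
```

Any trained model is only worth reporting if its RMSE beats such a persistence baseline on the held-out windows.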

Keywords: RFR, China-U.S. trade war, SVR, LSTM, deep learning, Covid-19, export value, forecasting, time series analysis

Procedia PDF Downloads 159