Search results for: product platform

1066 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the key roles in causing the most recent financial crisis in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and thus may cause a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. As an emerging economy, China's national and economic strength has grown rapidly, and its gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity. Among these risks, the credit system is the most significant one. Due to the long terms and large outstanding balances of mortgages, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis, the data are cleaned up, and through bivariate analysis, the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts, 60% for model development and 40% for in-time model validation. The KS of model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data, and the KS is 33. This indicates the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, consumer price index, unemployment rate, inflation rate, etc. The data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), the KS is increased from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
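The separation power above is reported as a KS statistic on a 60/40 development/validation split. A minimal sketch of how such a KS value might be computed is shown below; the synthetic features, the logistic scorecard, and the scaling of KS to a 0-100 range are illustrative assumptions, not the study's actual data or model.

```python
# Hedged sketch: computing a KS separation statistic for a delinquency model
# on a 60/40 development/validation split. Features and labels are synthetic.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))                            # stand-in account features
y = (X[:, 0] + rng.normal(size=10_000) > 1.5).astype(int)   # 1 = delinquent (synthetic)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.4, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)

def ks_statistic(model, X, y):
    """KS = max distance between the score distributions of goods and bads."""
    scores = model.predict_proba(X)[:, 1]
    return 100 * ks_2samp(scores[y == 1], scores[y == 0]).statistic

print(f"KS development: {ks_statistic(model, X_dev, y_dev):.0f}")
print(f"KS validation:  {ks_statistic(model, X_val, y_val):.0f}")
```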

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 219
1065 The Rapid Industrialization Model

Authors: Fredrick Etyang

Abstract:

This paper presents a Rapid Industrialization Model (RIM) designed to support existing industrialization policies, strategies and industrial development plans at national, regional and constituent level in Africa. The model will reinforce efforts by state and non-state actors toward the attainment of inclusive and sustainable industrialization of Africa. The overall objective of this model is to serve as a framework for rapid industrialization in developing economies, and the specific objectives range from supporting rapid industrial development to promoting structural change in the economy, balanced regional industrial growth, and the achievement of local, regional and international competitiveness in areas of clear comparative advantage in industrial exports; ultimately, the RIM will serve as a step-by-step guideline for the industrialization of African economies. This model is a product of a scientific research process underpinned by desk research through the review of African countries' development plans, strategies, datasets and industrialization efforts, and consultation with key informants. The rigorous research process unearthed multi-directional and renewed efforts towards the industrialization of Africa premised on the collective commitment of individual states, regional economic communities and the African Union Commission, among other strategic stakeholders. It was further established that the inputs into the industrialization of Africa outstrip the levels of industrial development on the continent. The RIM comes in handy to serve as a step-by-step framework for African countries to follow in their industrial development efforts of transforming inputs into tangible outputs and outcomes in the short, intermediate and long run. The model postulates three stages of industrialization and three phases toward rapid industrialization of African economies; it is simple to understand, easily implementable and contextualizable, with a high return on investment for each unit invested into industrialization supported by the model. Therefore, effective implementation of the model will result in inclusive and sustainable rapid industrialization of Africa.

Keywords: economic development, industrialization, economic efficiency, exports and imports

Procedia PDF Downloads 74
1064 Partnering with Stakeholders to Secure Digitization of Water

Authors: Sindhu Govardhan, Kenneth G. Crowther

Abstract:

Modernisation of the water sector is increasing connectivity and the integration of emerging technologies with traditional ones, leading to new security risks. The convergence of Information Technology (IT) with Operational Technology (OT) results in solutions that are spread across larger geographic areas, increasingly consist of interconnected Industrial Internet of Things (IIoT) devices and software, rely on the integration of legacy with modern technologies, and use complex supply chain components, leading to complex architectures and communication paths. The result is that multiple parties collectively own and operate these emergent technologies, threat actors find new paths to exploit, and traditional cybersecurity controls are inadequate. Our approach is to explicitly identify and draw data flows that cross trust boundaries between owners and operators of various aspects of these emerging and interconnected technologies. On these data flows, we layer potential attack vectors to create a frame of reference for evaluating possible risks against connected technologies. Finally, we identify where existing controls, mitigations, and other remediations exist across industry partners (e.g., suppliers, product vendors, integrators, water utilities, and regulators). From these, we are able to understand potential gaps in security, the roles in the supply chain that are most likely to effectively remediate those security gaps, and test cases to evaluate and strengthen security across these partners. This informs a “shared responsibility” solution that recognises that security is multi-layered and requires collaboration to be successful. This shared responsibility security framework improves visibility, understanding, and control across the entire supply chain, and particularly for those water utilities that are accountable for safe and continuous operations.
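To make the data-flow step concrete, the sketch below shows one way such flows and their trust-boundary crossings could be enumerated so that attack vectors and existing controls can be layered on each one; the parties, flows, vectors, and controls listed are hypothetical examples, not findings from the study.

```python
# Hedged sketch: enumerate data flows, flag those that cross a trust boundary
# between different owners, and list the attack vectors and controls layered
# on each. All parties, flows, vectors and controls here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DataFlow:
    source: str
    destination: str
    source_owner: str
    destination_owner: str
    attack_vectors: list = field(default_factory=list)
    controls: list = field(default_factory=list)

    def crosses_trust_boundary(self) -> bool:
        return self.source_owner != self.destination_owner

flows = [
    DataFlow("chlorine sensor (IIoT)", "SCADA historian", "device vendor", "water utility",
             attack_vectors=["spoofed telemetry"], controls=["mutual TLS"]),
    DataFlow("SCADA historian", "cloud analytics", "water utility", "integrator",
             attack_vectors=["credential theft"], controls=[]),
]

for flow in flows:
    if flow.crosses_trust_boundary():
        status = ", ".join(flow.controls) if flow.controls else "NO CONTROL (gap)"
        print(f"{flow.source} -> {flow.destination}: "
              f"vectors={flow.attack_vectors}; controls={status}")
```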

Keywords: cyber security, shared responsibility, IIOT, threat modelling

Procedia PDF Downloads 69
1063 The Application of Line Balancing Technique and Simulation Program to Increase Productivity in Hard Disk Drive Components

Authors: Alonggot Limcharoen, Jintana Wannarat, Vorawat Panich

Abstract:

This study aims to balance the number of operators (line balancing technique) in the production line of hard disk drive components in order to increase efficiency. At present, demand for hard disk drives has continuously declined, limiting the company's revenue potential. It is therefore important to improve and develop the production process in order to maintain market share and to compete with competitors on value and quality, and an effective tool is needed to support this. In this research, the Arena simulation program was applied to analyze the results both before and after the improvement, and the improved scenario was verified in simulation before being applied to the real process. There were 14 work stations with 35 operators altogether in the RA production process where this study was conducted. In the actual process, the average production time was 84.03 seconds per piece (obtained by timing each work station 30 times), together with a rating assessment based on the Westinghouse principles. The rating was 123%, and with an assumed 5% allowance time, the standard time was 108.53 seconds per piece. The takt time, calculated as the working duration in one day divided by customer demand, was 3.66 seconds per piece. From these figures, the proper number of operators was 30 people, meaning that five operators could be removed to improve the efficiency of the production process. After that, a production model was created from the actual process in the Arena program; to confirm model reliability, the simulation outputs were compared with the actual process, and their agreement indicated that the model was reliable. Then, worker numbers and their job responsibilities were remodeled in the Arena program. Lastly, the efficiency of the production process was enhanced from 70.82% to 82.63%, in line with the target.
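The work-measurement arithmetic behind these figures is reproduced in the short sketch below, using the observed time, rating, allowance, and takt time quoted in the abstract; the formulas are the standard ones (normal time = observed time × rating, standard time = normal time × (1 + allowance), minimum operators = standard time / takt time).

```python
# Worked sketch of the line-balancing arithmetic reported above. The observed
# time, Westinghouse rating, allowance and takt time come from the abstract;
# the formulas are the standard work-measurement ones.
import math

observed_time = 84.03            # s/piece, average of 30 timings per work station
rating = 1.23                    # Westinghouse performance rating (123%)
allowance = 0.05                 # 5% allowance time

normal_time = observed_time * rating            # ~103.36 s/piece
standard_time = normal_time * (1 + allowance)   # ~108.53 s/piece

takt_time = 3.66                 # s/piece = daily working time / daily demand

min_operators = standard_time / takt_time       # ~29.65
print(f"standard time   : {standard_time:.2f} s/piece")
print(f"operators needed: {math.ceil(min_operators)}  (vs 35 currently -> 5 fewer)")
```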

Keywords: hard disk drive, line balancing, ECRS, simulation, arena program

Procedia PDF Downloads 220
1062 Blended Cloud Based Learning Approach in Information Technology Skills Training and Paperless Assessment: Case Study of University of Cape Coast

Authors: David Ofosu-Hamilton, John K. E. Edumadze

Abstract:

Universities have come to recognize the role Information and Communication Technology (ICT) skills play in the daily activities of tertiary students. The ability to use ICT – essentially, computers and their diverse applications – is an important resource that influences an individual's economic and social participation and human capital development. Our society now increasingly relies on the Internet and the Cloud as a means to communicate and disseminate information. The educated individual should, therefore, be able to use ICT to create and share knowledge that will improve society. It is, therefore, important that universities require incoming students to demonstrate a level of computer proficiency or be trained to do so at minimal cost by deploying advanced educational technologies. The training and standardized assessment of all incoming first-year students of the University of Cape Coast in Information Technology Skills (ITS) has become a necessity, as students more often than not highly overestimate their digital skills, and digital ignorance is costly to any economy. The one-semester course is targeted at fresh students and aimed at enhancing their productivity and software skills. In this respect, emphasis is placed on skills that will enable students to be proficient in using Microsoft Office and Google Apps for Education for their academic work and future professional work while using emerging digital multimedia technologies in a safe, ethical, responsible, and legal manner. The course is delivered in blended mode – online and self-paced (student centered) – using Alison's free cloud-based tutorial (Moodle) of Microsoft Office videos. Online support is provided via discussion forums on the University's Moodle platform, with tutor-directed and tutor-assisted sessions at the ICT Centre and the Google E-learning laboratory. All students are required to register for the ITS course during either the first or second semester of the first year and must participate in and complete it within a semester. Assessment comprises an Alison online assessment on Microsoft Office, an Alison online assessment on ALISON ABC IT, peer assessment of an e-portfolio created using Google Apps/Office 365, and an end-of-semester online assessment at the ICT Centre taken whenever the student is ready in the course of the semester. This paper, therefore, focuses on this digital culture approach of hybrid teaching, learning and paperless examinations and its possible adoption by other courses or programs at the University of Cape Coast.

Keywords: assessment, blended, cloud, paperless

Procedia PDF Downloads 245
1061 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Egypt: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), CO2 emissions and gross domestic product (GDP) for Egypt, using time series analysis for the period 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, the Johansen maximum likelihood method for co-integration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables in the sample. The long-run equilibrium in the VECM suggests some negative impacts of CO2 emissions and of coal and natural gas use on GDP. Conversely, a positive long-run causality from electricity consumption to GDP is found to be significant in Egypt during the period. In the short run, some positive unidirectional causalities exist, running from coal consumption to GDP, CO2 emissions and natural gas use. Further, GDP and electricity use are positively influenced by the consumption of petroleum products and the direct combustion of crude oil. Overall, the results support arguments that there are relationships among environmental quality, energy use, and economic output in both the short term and the long term; however, the effects may differ depending on the source of energy, as in the case of Egypt for the period 1980-2010.
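A minimal sketch of this ADF / Johansen / VECM pipeline in Python's statsmodels is shown below; the synthetic random-walk series, variable names, lag order, and cointegration rank are placeholders, since the study's actual Egyptian data and specification are not given here.

```python
# Hedged sketch of the econometric pipeline described above: ADF unit-root
# tests, the Johansen cointegration test, and a VECM, using statsmodels.
# The random-walk data, lag order and cointegration rank are placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(1)
years = pd.RangeIndex(1980, 2011)
df = pd.DataFrame(rng.normal(size=(len(years), 4)).cumsum(axis=0),
                  index=years, columns=["gdp", "co2", "coal", "electricity"])

# 1) ADF unit-root test on each series in levels
for col in df.columns:
    stat, pvalue, *_ = adfuller(df[col])
    print(f"ADF {col}: statistic={stat:.2f}, p-value={pvalue:.3f}")

# 2) Johansen test for the cointegration rank
johansen = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:", np.round(johansen.lr1, 2))   # compare with johansen.cvt

# 3) VECM: beta holds the long-run (cointegrating) relations, gamma the short-run terms
vecm = VECM(df, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("long-run beta:\n", np.round(vecm.beta, 3))
print("short-run gamma:\n", np.round(vecm.gamma, 3))
```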

Keywords: CO2 emissions, Egypt, energy consumption, GDP, time series analysis

Procedia PDF Downloads 611
1060 MicroRNA-1246 Expression Associated with Resistance to Oncogenic BRAF Inhibitors in Mutant BRAF Melanoma Cells

Authors: Jae-Hyeon Kim, Michael Lee

Abstract:

Intrinsic and acquired resistance limits the therapeutic benefits of oncogenic BRAF inhibitors in melanoma. MicroRNAs (miRNAs) regulate the expression of target mRNAs by repressing their translation. Thus, we investigated miRNA expression patterns in melanoma cell lines to identify candidate biomarkers for acquired resistance to BRAF inhibitors. Here, we used the Affymetrix miRNA V3.0 microarray profiling platform to compare miRNA expression levels in three cell lines: BRAF inhibitor-sensitive A375P BRAF V600E cells, their BRAF inhibitor-resistant counterparts (A375P/Mdr), and SK-MEL-2 BRAF-WT cells with intrinsic resistance to BRAF inhibitors. The miRNAs with at least a two-fold change in expression between BRAF inhibitor-sensitive and -resistant cell lines were identified as differentially expressed. Averaged intensity measurements identified 138 and 217 miRNAs that were differentially expressed by two-fold or more between: 1) A375P and A375P/Mdr; 2) A375P and SK-MEL-2, respectively. Hierarchical clustering revealed differences in miRNA expression profiles between BRAF inhibitor-sensitive and -resistant cell lines for miRNAs involved in intrinsic and acquired resistance to BRAF inhibition. In particular, 43 miRNAs were identified whose expression was consistently altered in the two BRAF inhibitor-resistant cell lines, regardless of whether resistance was intrinsic or acquired. Twenty-five miRNAs were consistently upregulated and 18 downregulated more than two-fold. Although some discrepancies were detected when the miRNA microarray data were compared with qPCR-measured expression levels, qRT-PCR results for five miRNAs (miR-3617, miR-92a1, miR-1246, miR-1936-3p, and miR-17-3p) showed excellent agreement with the microarray experiments. To further investigate the cellular functions of these miRNAs, we examined their effects on cell proliferation. Synthetic oligonucleotide miRNA mimics were transfected into the three cell lines, and proliferation was quantified using a colorimetric assay. Of the five miRNAs tested, only miR-1246 altered the proliferation of A375P/Mdr cells. Transfection of the miR-1246 mimic strongly conferred PLX-4720 resistance on A375P/Mdr cells, implying that miR-1246 upregulation confers acquired resistance to BRAF inhibition. We also found that PLX-4720 caused much greater G2/M arrest in A375P/Mdr cells transfected with the miR-1246 mimic than in scrambled RNA-transfected cells. Additionally, the miR-1246 mimic partially caused resistance to autophagy induction by PLX-4720. These results indicate that autophagy plays an essential death-promoting role in PLX-4720-induced cell death. Taken together, these results suggest that miRNA expression profiling in melanoma cells can provide valuable information for a network of BRAF inhibitor resistance-associated miRNAs.
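The two-fold-change filter described above is simple enough to show as a short sketch; the expression table, cell-line column names, and number of miRNAs below are synthetic placeholders rather than the study's microarray data.

```python
# Hedged sketch of the two-fold-change filter described above: flag miRNAs
# whose averaged intensity differs by >= 2x between a sensitive and a
# resistant cell line. The expression table below is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
mirnas = [f"miR-{i}" for i in range(1, 501)]
expr = pd.DataFrame({
    "A375P": rng.lognormal(mean=5, sigma=1, size=500),       # sensitive line
    "A375P_Mdr": rng.lognormal(mean=5, sigma=1, size=500),   # resistant line
}, index=mirnas)

log2_fc = np.log2(expr["A375P_Mdr"] / expr["A375P"])
differential = expr[log2_fc.abs() >= 1]          # |fold change| >= 2
up = (log2_fc >= 1).sum()
down = (log2_fc <= -1).sum()
print(f"{len(differential)} differentially expressed ({up} up, {down} down)")
```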

Keywords: microRNA, BRAF inhibitor, drug resistance, autophagy

Procedia PDF Downloads 322
1059 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization

Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder

Abstract:

In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Therefore, microfluidic devices make an important scientific contribution, e.g. in the field of synthetic chemistry. Herein, the rapid generation of analytical data can be applied to the optimization of chemical reactions. These microfluidic devices enable a fast change of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems. Here, every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. The development of a droplet-based microfluidic device, which enables the modification of different parameters while simultaneously monitoring their effect on the reaction within a single run, is shown. Using common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed. As an interface for MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions; this is shown exemplarily for a Domino Knoevenagel Hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations and solvent compositions were investigated. Due to the high repetition rate of the droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent and the most effective catalyst concentration. The developed system can help to determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in the field of combining droplet-based microfluidics with organic reaction screening.

Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening

Procedia PDF Downloads 294
1058 Formulation and Invivo Evaluation of Salmeterol Xinafoate Loaded MDI for Asthma Using Response Surface Methodology

Authors: Paresh Patel, Priya Patel, Vaidehi Sorathiya, Navin Sheth

Abstract:

The aim of the present work was to fabricate a Salmeterol Xinafoate (SX) metered dose inhaler (MDI) for asthma and to evaluate SX-loaded solid lipid nanoparticles (SLNs) for pulmonary delivery. Solid lipid nanoparticles can be used to deliver particles to the lungs via an MDI. A modified solvent emulsification diffusion technique was used to prepare the Salmeterol Xinafoate-loaded solid lipid nanoparticles, using Compritol 888 ATO as lipid, Tween 80 as surfactant, D-mannitol as cryoprotecting agent, and L-leucine to improve aerosolization behaviour. A Box-Behnken design with 17 runs was applied. 3-D response surface plots and contour plots were drawn, and the optimized formulation was selected on the basis of minimum particle size and maximum % EE. The formulations were also characterized by % yield, in vitro diffusion study, scanning electron microscopy, X-ray diffraction, DSC and FTIR. Particle size and zeta potential were analyzed with a Zetatrac particle size analyzer, and the aerodynamic properties were determined with a cascade impactor. Preconvulsion time was examined for the control and treatment groups and compared with the marketed product group. The MDI was evaluated by leakage test, flammability test, spray test and content per puff. In the experimental design, particle size and % EE were found to be in the ranges of 119-337 nm and 62.04-76.77%, respectively, for the solvent emulsification diffusion technique. Morphologically, the particles had a spherical shape and uniform distribution. The DSC and FTIR studies showed no interaction between drug and excipients. The zeta potential indicated good stability of the SLNs. The respirable fraction was found to be 52.78%, indicating delivery to the deep parts of the lung such as the alveoli. The animal study showed that the fabricated MDI protected the lungs against histamine-induced bronchospasm in guinea pigs. The MDI showed spherical particles in the spray pattern, 96.34% content per puff, and was non-flammable. SLNs prepared by the solvent emulsification diffusion technique provide a desirable size for deposition into the alveoli. This delivery platform opens up a wide range of treatment applications for pulmonary diseases like asthma via solid lipid nanoparticles.
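For readers unfamiliar with the Box-Behnken layout mentioned above, the sketch below constructs a three-factor, 17-run design (12 edge-midpoint runs plus 5 centre points) by hand; the factor names and ranges are hypothetical stand-ins, not the formulation variables actually optimized in the study.

```python
# Hedged sketch: constructing a three-factor Box-Behnken design like the
# 17-run design mentioned above (12 edge-midpoint runs + 5 centre points).
# Factor names and ranges are illustrative assumptions, not the study's values.
from itertools import combinations, product
import pandas as pd

factors = ["lipid_mg", "surfactant_pct", "sonication_min"]   # hypothetical factors
low_high = {"lipid_mg": (50, 150), "surfactant_pct": (0.5, 1.5), "sonication_min": (2, 6)}

coded_runs = []
for i, j in combinations(range(3), 2):            # vary two factors, hold the third at 0
    for a, b in product((-1, 1), repeat=2):
        run = [0, 0, 0]
        run[i], run[j] = a, b
        coded_runs.append(run)
coded_runs += [[0, 0, 0]] * 5                      # centre points -> 17 runs total

def decode(code, lo, hi):
    """Convert a coded level (-1, 0, +1) to the real factor value."""
    return (lo + hi) / 2 + code * (hi - lo) / 2

design = pd.DataFrame(
    [[decode(c, *low_high[f]) for c, f in zip(run, factors)] for run in coded_runs],
    columns=factors)
print(design)          # 17 runs ready for particle size / %EE responses
```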

Keywords: salmeterol xinafoate, solid lipid nanoparticles, box-behnken design, solvent emulsification diffusion technique, pulmonary delivery

Procedia PDF Downloads 448
1057 Environmental and Toxicological Impacts of Glyphosate with Its Formulating Adjuvant

Authors: I. Székács, Á. Fejes, S. Klátyik, E. Takács, D. Patkó, J. Pomóthy, M. Mörtl, R. Horváth, E. Madarász, B. Darvas, A. Székács

Abstract:

Environmental and toxicological characteristics of formulated pesticides may substantially differ from those of their active ingredients or other components alone. This phenomenon is demonstrated here for the herbicide active ingredient glyphosate. Due to its extensive application, this active ingredient was found in surface and ground water samples collected in Békés County, Hungary, in the concentration range of 0.54–0.98 ng/ml. The occurrence of glyphosate appeared to be somewhat higher in areas under intensive agriculture, industrial activities and public road services, but the compound was also detected in areas under organic (ecological) farming or natural grasslands, indicating environmental mobility. Increased toxicity of the formulated herbicide product Roundup, compared to that of glyphosate, was observed on the indicator aquatic organism Daphnia magna Straus. Acute LC50 values of Roundup and its formulating adjuvant polyethoxylated tallowamine (POEA) exceeded 20 and 3.1 mg/ml, respectively, while that of glyphosate (as the isopropyl salt) was substantially higher (690-900 mg/ml), i.e., glyphosate itself was substantially less toxic, showing good agreement with literature data. The cytotoxicity of Roundup, POEA and glyphosate was determined on the neuroectodermal cell line NE-4C, measured both by cell viability test and by holographic microscopy. The acute toxicity (LC50) of Roundup, POEA and glyphosate on NE-4C cells was found to be 0.013±0.002%, 0.017±0.009% and 6.46±2.25%, respectively (in equivalents of diluted Roundup solution), corresponding to 0.022±0.003 and 53.1±18.5 mg/ml for POEA and glyphosate, respectively, indicating no statistical difference between Roundup and POEA and a 2.5 orders of magnitude difference between these and glyphosate. The same order of cellular toxicity was indicated in average cell area under quantitative cell visualization. The results indicate that the toxicity of the formulated herbicide is caused by the formulating agent, but for some parameters toxicological synergy occurs between POEA and glyphosate.
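Acute LC50 values like those quoted above are typically estimated by fitting a dose-response curve to observed mortality; the sketch below shows such a fit with a two-parameter logistic (Hill) model, using made-up concentrations and mortalities purely for illustration, not the study's measurements.

```python
# Hedged sketch of how an acute LC50 might be estimated from dose-response
# data with a two-parameter logistic (Hill) fit. Concentrations and
# mortalities below are made up for illustration only.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.5, 1, 2, 5, 10, 20, 50])            # mg/ml, hypothetical doses
mortality = np.array([0.02, 0.05, 0.15, 0.40, 0.70, 0.92, 0.99])

def logistic(c, lc50, slope):
    """Fraction affected at concentration c for a given LC50 and Hill slope."""
    return 1.0 / (1.0 + (lc50 / c) ** slope)

(lc50, slope), _ = curve_fit(logistic, conc, mortality, p0=[5.0, 1.0])
print(f"estimated LC50 ≈ {lc50:.2f} mg/ml (Hill slope {slope:.2f})")
```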

Keywords: glyphosate, polyethoxylated tallowamine, Roundup, combined aquatic and cellular toxicity, synergy

Procedia PDF Downloads 311
1056 Recovery of Selenium from Scrubber Sludge in Copper Process

Authors: Lakshmikanth Reddy, Bhavin Desai, Chandrakala Kari, Sanjay Sarkar, Pradeep Binu

Abstract:

The sulphur dioxide gases generated as a by-product of the smelting and converting operations of copper concentrate contain selenium apart from zinc, lead, copper, cadmium, bismuth, antimony, and arsenic. The gaseous stream is treated in a waste heat boiler, electrostatic precipitator and scrubbers to remove coarse particulate matter in order to produce commercial grade sulfuric acid. The gas cleaning section of the acid plant uses water to scrub the smelting gases. After scrubbing, the sludge that settled at the bottom of the scrubber was analyzed in the present investigation. It was found to contain 30 to 40 wt% copper and up to 40 wt% selenium. The sludge collected during blow-down is directly recycled to the smelter for copper recovery. However, the selenium is expected to vaporize again due to the high oxidation potential during smelting and converting, causing accumulation of selenium in the sludge. In the present investigation, a roasting process has been developed to recover the selenium before copper recovery from the sludge at the smelter. Selenium is associated with copper in the sludge as copper selenide, as determined by X-ray diffraction and electron microscopy. The thermodynamic and thermogravimetric study revealed that the copper selenide phase present in the sludge was amenable to oxidation at 600°C, forming oxides of copper and selenium (Cu-Se-O). However, the dissociation of selenium from the copper oxide was made possible by sulfatation using sulfur dioxide between 450 and 600°C, resulting in the formation of CuSO₄ (s) and SeO₂ (g). Lab scale trials were carried out in a vertical tubular furnace to determine the optimum roasting conditions with respect to roasting time, temperature and molar ratio of O₂:SO₂. Using these optimum conditions, up to 90 wt% of the selenium, in the form of SeO₂ vapors, could be recovered from the sludge in a large-scale commercial roaster. The roasted sludge, free of selenium and containing oxides and sulfates of copper, could then be recycled in the smelter for copper recovery.

Keywords: copper, selenium, copper selenide, sludge, roasting, SeO₂

Procedia PDF Downloads 200
1055 The Effect of Restaurant Residuals on Performance of Japanese Quail

Authors: A. A. Saki, Y. Karimi, H. J. Najafabadi, P. Zamani, Z. Mostafaie

Abstract:

Restaurant residuals are important for reasons such as the competition between human and animal consumption of cereals, increasing environmental pollution, and the high cost of producing livestock products. Restaurant residuals have a high nutritional value (high protein and energy), so it is possible to replace part of poultry diets with them, especially for Japanese quail. Today, modern industry confronts challenges in the processing and consumption of these residuals. Increasing costs, pressures, and problems associated with waste disposal reinforce the need for re-evaluation and utilization of waste as livestock and poultry feed. This study aimed to investigate the effects of different levels of restaurant residuals on the performance of 300 laying Japanese quails. The experiment included 5 treatments and 4 replicates, with 15 quails in each, from 10 to 18 weeks of age, in a completely randomized design (CRD). The treatments consisted of a basal diet of corn and soybean meal (without restaurant residuals) and, for treatments 2, 3, 4 and 5, a basal diet containing 5, 10, 15 and 20% restaurant residuals, respectively. There was no significant effect of restaurant residual level on body weight (BW), feed conversion ratio (FCR), egg production percentage (EP) or egg mass (EM) between treatments (P > 0.05). However, feed intake (FI) in the 5% restaurant residual treatment was significantly higher than in the 20% treatment (P < 0.05). Egg weight (EW) was also higher with 20% restaurant residuals compared with 10% (P < 0.05). Yolk weight (YW) in the treatments containing 10 and 20% restaurant residuals was significantly higher than in the control (P < 0.05). Egg white weight (EWW) in the 20% and 5% restaurant residual treatments was significantly higher than in the 10% treatment (P < 0.05). Furthermore, EW, egg weight to shell surface area, and egg surface area in the 20% treatment were significantly higher than in the control and the 10% treatment (P < 0.05). The overall results of this study show that restaurant residuals at levels of 10 and 15 percent could replace part of the laying quail ration without any adverse effect.

Keywords: by-product, laying quail, performance, restaurant residuals

Procedia PDF Downloads 161
1054 Properties Soft Cheese as Diversification of Dangke: A Natural Cheese of South Sulawesi Indonesia

Authors: Ratmawati Malaka, Effendi Abustam, Kusumandari Indah Prahesti, Sudirman Baco

Abstract:

Dangke is a natural cheese from Enrekang, South Sulawesi, Indonesia, produced through the coagulation of buffalo, cow, goat or sheep milk using the sap of papaya (Carica papaya). Dangke is widely known in South Sulawesi, but diversification of this soft cheese product by using passion fruit juice as the milk-clotting agent has not been explored. Passion fruit juice has a high acidity, with a pH of around 4 - 4.5, and contains a proteolytic enzyme, so it can be used to coagulate milk. The purpose of this study was to investigate the properties of dangke made using passion fruit juice to coagulate the milk. Dangke was made from 10 L of raw milk heated at 73°C, with passion fruit juice as coagulant (7.5% and 10%) and 1% added salt. The curd was formed using a coconut shell and then pressed until the cheese was compact. The cheese was then observed over 28 days of ripening at a temperature of about 5°C. The dangke was studied for hardness, pH, fat content and microstructure. Hardness was determined using CD-shear force, pH was measured using a Hanna pH meter, and fat concentrations were analyzed by proximate methods. The microstructure was viewed using a light microscope at 1000x magnification. The results showed that the level of clotting material had a highly significant influence on hardness, pH and fat level. Maturation increased the hardness but lowered the pH; the fat content of the soft cheese averaged 21.4% and 30.5% for the addition of 7.5% and 10% passion fruit juice, respectively. Hardness increased with increasing maturation time (1.38 to 3.73 kg/cm), but the pH decreased with increasing storage maturation (5.34 to 4.1). The microstructure of the cheese coagulated with 10% passion fruit juice was firmer and more compact, with fuller fat globules, than that coagulated with 7.5%. However, the sensory properties of the soft cheese were similar in both treatments. The manufacturing process with passion fruit juice as the added coagulant thus affected the hardness, pH, fat content and microstructure of dangke during storage at 5°C for 1 d - 28 d.

Keywords: dangke, passion fruits, microstructure, cheese

Procedia PDF Downloads 405
1053 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings

Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier

Abstract:

Anvil staking is a cold-forming process that is used in the assembly of plain spherical bearings into a rod-end housing. This process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the push-out load requirement of the assembly. Finite Element (FE) analysis is being used extensively to predict the behaviour of metal flow in cold forming processes to support industrial manufacturing and product development. Ongoing research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients and strain rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming process of experimental trials in the introduction of new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold forming operations; however, the understanding of this phenomenon varies significantly, and it is rarely implemented in FE models. In this paper, a new approach to evaluating the normal contact pressure versus friction coefficient relationship is outlined, using friction calibration charts generated via iterative FE models and ring compression tests. When compared to previous research, this new approach greatly improves the prediction of the forming geometry and the forming load during the staking operation. This paper also aims to standardise the FE approach to modelling ring compression tests and determining the friction calibration charts.

Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests

Procedia PDF Downloads 202
1052 Cold Formed Steel Sections: Analysis, Design and Applications

Authors: A. Saha Chaudhuri, D. Sarkar

Abstract:

In steel construction, there are two families of structural members: one is hot rolled steel and the other is cold formed steel. Cold formed steel sections are made from steel sheet, strip, plate or flat bar. Cold formed steel sections are manufactured in roll forming machines or by press brake or bending operations. Cold formed steel (CFS) is also known as light gauge steel (LGS). As cold formed steel is a sustainable material, it is widely used in green building. Cold formed steel can be recycled and reused with no degradation in structural properties. Cold formed steel structures can earn credits for green building ratings such as LEED and similar programs. Cold formed steel construction satisfies the international demand for better, more efficient and affordable buildings. Cold formed steel sections are used in buildings, car bodies, railway coaches, various types of equipment, storage racks, grain bins, highway products, transmission towers, transmission poles, drainage facilities, bridge construction, etc. Various shapes of cold formed steel sections are available, such as C sections, Z sections, I sections, T sections, angle sections, hat sections, box sections, square hollow sections (SHS), rectangular hollow sections (RHS), circular hollow sections (CHS), etc. In building construction, cold formed steel is used as eave struts, purlins, girts, studs, headers, floor joists, braces, diaphragms and covering for roofs, walls and floors. Cold formed steel has a high strength-to-weight ratio and high stiffness. Cold formed steel is non-shrinking and non-creeping at ambient temperature, and it is termite proof and rot proof. CFS is a durable, dimensionally stable and non-combustible material. CFS is economical in transportation and handling. At present, cold formed steel has become a competitive building material. In this paper, present research work related to all these applications is reviewed, and how CFS can be used in a blast resistant structural system is examined.

Keywords: cold form steel sections, applications, present research review, blast resistant design

Procedia PDF Downloads 140
1051 Health Communication and the Diabetes Narratives of Key Social Media Influencers in the UK

Authors: Z. Sun

Abstract:

Health communication is essential in promoting healthy lifestyles, managing disease conditions, and eventually reducing health disparities. The key elements of successful health communication always include the development of communication strategies to engage people in thinking about their health, inform them about healthy choices, persuade them to adopt safe and healthy behaviours, and eventually achieve public health objectives. The use of 'narrative' is recognised as a health communication strategy to enhance personal and public health due to its potential persuasive effect in motivating and supporting individuals to change their beliefs and behaviours, by inviting them into a narrative world, breaking down their cognitive and emotional resistance, and enhancing their acceptance of the ideas portrayed in narratives. Meanwhile, the popularity of social media has provided a novel means of communication for healthcare stakeholders, and a special group of active social media users (influencers) has started playing a pivotal role in providing health 'solutions'. Such individuals are often referred to as 'influencers' because of their central position in the online communication system and the persuasive effect their actions may have on audiences. They may have established a positive rapport with their audience and earned trust and credibility in a specific area, and thus their audience considers the information they deliver to be authentic and influential. To the best of our knowledge, to date there is no published research that examines the effect of diabetes narratives presented by social media influencers and their impact on health-related outcomes. The primary aim of this study is to investigate the diabetes narratives presented by social media influencers in the UK, because of the new dimension they bring to health communication and the potential impact they may have on audiences' health outcomes. This study is situated within the interpretivist and narrative paradigms. A mixed methodology combining both quantitative and qualitative approaches has been adopted. Qualitative data have been gathered to provide a better understanding of influencers' personal experiences and how they construct meanings and make sense of their world, while quantitative data have been accumulated to identify key social media influencers in the UK and measure the impact of diabetes narratives on audiences. Twitter has been chosen as the social media platform for initially identifying key influencers. The two groups of participants are the top 10 key social media influencers in the UK and 100 audience members of each influencer, meaning a total of 1,000 audience members have been invited. This paper discusses, first of all, the background of the research in the context of health communication; secondly, the necessity and contribution of this research; then, the major research questions being explored; and finally, the methods to be used.

Keywords: diabetes, health communication, narratives, social media influencers

Procedia PDF Downloads 99
1050 Changes in Textural Properties of Zucchini Slices Under Effects of Partial Predrying and Deep-Fat-Frying

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

Changes in the textural properties of any food material during processing are significant for consumers' subsequent evaluation and directly affect their decisions. Thus, any food material should be assessed in terms of textural properties after any process. In the present study, zucchini slices were partially predried to control and reduce the product's final oil content. A conventional oven was used for the partial dehydration of the zucchini slices. The subsequent frying was carried out in an industrial fryer with a temperature controller. This study focused on the effect of this predrying process on the textural properties of fried zucchini slices. Texture profile analysis was performed. Hardness, elasticity, chewiness and cohesiveness were the studied texture parameters of the fried zucchini slices. Temperature and weight loss were the monitored parameters of the predrying process, whereas in frying, oil temperature and process time were controlled. Optimization of the two successive processes was performed by response surface methodology, one of the commonly used statistical process optimization tools. The models developed for each texture parameter predicted their values as a function of the studied process conditions with high success. Process optimization was performed according to target values for each property determined for directly fried zucchini slices, which took the highest score in sensory evaluation. The results indicated that the textural properties of predried and then fried zucchini slices could be controlled by well-established equations. This is thought to be significant for the fried-food industry, where control of sensory properties is crucial in guiding consumer perception, and texture-related properties are among the most important. This project (113R015) has been supported by TUBITAK.
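The response-surface step can be illustrated with a short sketch that fits a second-order (quadratic) model of one texture response to two process factors by least squares; the factor names, ranges, and synthetic data below are assumptions for illustration, not the study's measurements.

```python
# Hedged sketch of a response-surface fit: a second-order model of a texture
# response (e.g. hardness) in terms of two process factors. Factor ranges,
# coefficients and data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 30
predry_temp = rng.uniform(60, 100, n)     # °C, assumed predrying factor range
fry_time = rng.uniform(120, 300, n)       # s, assumed frying factor range
hardness = (5 + 0.02 * predry_temp + 0.01 * fry_time
            - 1e-4 * predry_temp * fry_time + rng.normal(0, 0.2, n))

# second-order (quadratic) design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones(n), predry_temp, fry_time,
                     predry_temp**2, fry_time**2, predry_temp * fry_time])
coeffs, *_ = np.linalg.lstsq(X, hardness, rcond=None)
pred = X @ coeffs
r2 = 1 - ((hardness - pred) ** 2).sum() / ((hardness - hardness.mean()) ** 2).sum()
print("coefficients:", np.round(coeffs, 4), " R² =", round(r2, 3))
```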

Keywords: optimization, response surface methodology, texture profile analysis, conventional oven, modelling

Procedia PDF Downloads 430
1049 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) showed limited efficacy in interpreting chemical footprints due to large non-linear relationships between predictor and predictand in a large sample set, likely caused by honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing the hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) compared to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey versus multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
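A minimal sketch of a 1D-CNN regressor of the kind compared above is given below in Keras; the number of spectral bands, layer sizes, training settings, and the random stand-in spectra are assumptions, since the study's architecture and data are not specified here.

```python
# Hedged sketch of a 1D-CNN regressor for hyperspectral honey spectra.
# Band count, layer sizes, training settings and the random data are assumed.
import numpy as np
import tensorflow as tf

n_bands = 224                                            # assumed number of spectral bands
X = np.random.rand(512, n_bands, 1).astype("float32")    # spectra: (samples, bands, 1)
y = np.random.rand(512).astype("float32")                # e.g. an MGO-related quality value

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_bands, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                            # regression head
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=32, verbose=0)

rmse = float(np.sqrt(model.evaluate(X, y, verbose=0)[0]))
print(f"training RMSE ≈ {rmse:.3f}")
```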

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 131
1048 The Diversity of Contexts within Which Adolescents Engage with Digital Media: Contributing to More Challenging Tasks for Parents and a Need for Third Party Mediation

Authors: Ifeanyi Adigwe, Thomas Van der Walt

Abstract:

Digital media has been integrated into the social and entertainment life of young children, its impact appears to affect young people of all ages, and it is believed that it will continue to shape the world of young children. Since the technological advancement of digital media presents adolescents with diverse contexts, platforms and avenues for engaging with digital media outside the home environment and away from parents' supervision, a wide range of new challenges has further complicated the already difficult tasks of parents and altered the landscape of parenting. Although adolescents now have access to a wide range of digital media technologies both at home and in the learning environment, parenting practices such as active, restrictive, co-use, participatory and technical mediation are important in mitigating the online risks adolescents may encounter as a result of digital media use. However, these mediation practices focus only on the home environment, including the digital media present in the home, and may not necessarily extend outside the home to other learning environments where adolescents use digital media for school work and other activities. This poses the question of who mediates adolescents' digital media use outside the home environment. The learning environment could be a 'loose platform' where an adolescent can maximise digital media use, considering that there is no restriction on content or on the time allotted to using digital media during school hours. That is to say, an adolescent can play the 'bad boy' online in school, where there is little or no restriction of digital media use, and be exposed to online risks, yet play the 'good boy' at home because of 'heavy' parental mediation. This is one reason why parental mediation practices have been ineffective: a parent may not be able to track an adolescent's digital media use, considering the diversity of contexts, platforms and avenues in which adolescents use digital media. This study argues that, due to the diverse nature of digital media technology, parents may not be able to monitor the 'whereabouts' of their children in the digital space. This is because adolescent digital media usage may not be confined to the home environment but extends to other learning environments like schools. This calls for urgent attention on the part of teachers to understand the intricacies of how digital media continue to shape the world in which young children are developing and learning. It is, therefore, imperative for parents to liaise with their children's schools to mediate digital media use during school hours. The implications of parent-teacher mediation practices are discussed. The article concludes by suggesting that third-party mediation by teachers in schools and other learning environments should be encouraged, and that future research needs to consider the emergent strategy of a teacher-child mediation approach and its implications for policy in both the home and learning environments.

Keywords: digital media, digital age, parent mediation, third party mediation

Procedia PDF Downloads 149
1047 Optimization of Sintering Process with Deteriorating Quality of Iron Ore Fines

Authors: Chandra Shekhar Verma, Umesh Chandra Mishra

Abstract:

Blast furnace performance mainly depends on the quality of sinter, as it accounts for a major portion of the iron-bearing burden; hence its quality with respect to Tumbler Index (TI), Reducibility Index (RI) and Reduction Degradation Index (RDI) constitutes the key performance indicators of a sinter plant. It has become very difficult to maintain the desired quality with the increasing alumina (Al₂O₃) content of iron ore fines, and this study is focused on that problem. Alumina is a refractory material and requires more heat input to fuse, thereby affecting the desired sintering temperature, i.e. 1300°C. It goes in between the grain boundaries of the bond and makes it weaker. Sinter strength decreases with increasing alumina content, and weak sinter generates more fines, thereby reducing net sinter production as well as plant productivity. The presence of impurities beyond the acceptable norm, such as LOI, Al₂O₃, MnO, TiO₂, K₂O, Na₂O, hydrates (goethite and limonite), SiO₂, phosphorus and zinc, has led to greater challenges in the thrust areas of productivity, quality and cost. The ultimate aim of this study is to maintain the sinter strength even with high Al₂O₃ without hampering plant productivity. The study includes mineralogy tests of the iron ore fines to find the fractions of the different phases present in the ore, and phase analysis of the product sinter to know the distribution of the different phases. Corrections were made focusing mainly on varying the Al₂O₃/SiO₂ ratio and the basicity indices B2 (CaO/SiO₂), B3 ((CaO+MgO)/SiO₂) and B4 ((CaO+MgO)/(SiO₂+Al₂O₃)). The concepts of the alumina/silica ratio, B3 and B4 were found to be useful. We varied MgO, Al₂O₃/SiO₂, B2, B3 and B4 to obtain the desired sinter strength even at high alumina (4.2 - 4.5%) in the sinter. The study concludes with the establishment of B4 and the Al₂O₃/SiO₂ ratio in the ranges 1.53-1.60 and 0.63-0.70, respectively, and a tumbler index (drum index) of 76 plus was achieved with a plant productivity of 1.58-1.6 t/m²/hr at JSPL, Raigarh. The study shows that despite high alumina in the sinter, its physical quality can be controlled by maintaining the above-mentioned parameters.
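The basicity indices referred to above are simple ratios of the sinter chemistry, as the short sketch below shows; the example composition is hypothetical, chosen only so that B4 and Al₂O₃/SiO₂ fall inside the 1.53-1.60 and 0.63-0.70 windows quoted in the conclusion.

```python
# Worked sketch of the basicity indices used above, computed from a sinter
# chemistry given in wt%. B2 = CaO/SiO2, B3 = (CaO+MgO)/SiO2,
# B4 = (CaO+MgO)/(SiO2+Al2O3), plus the alumina/silica ratio.
def sinter_indices(cao, mgo, sio2, al2o3):
    return {
        "B2 (CaO/SiO2)": cao / sio2,
        "B3 ((CaO+MgO)/SiO2)": (cao + mgo) / sio2,
        "B4 ((CaO+MgO)/(SiO2+Al2O3))": (cao + mgo) / (sio2 + al2o3),
        "Al2O3/SiO2": al2o3 / sio2,
    }

# Hypothetical sinter chemistry (wt%) chosen only to land inside the target windows.
for name, value in sinter_indices(cao=11.3, mgo=1.6, sio2=5.0, al2o3=3.3).items():
    print(f"{name:30s} = {value:.2f}")
```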

Keywords: basicity-2, basicity-3, basicity-4, sinter

Procedia PDF Downloads 168
1046 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, the control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, which ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method that allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which an attempt is made to model each of the system's hardware components accurately. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal for a position control system is considered particularly important because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected, ensuring reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, due to the nature of state space and the advantage it provides for modelling MIMO systems, it is expected that such controllers show ease of tuning for disturbance rejection, assuming that the designer of such controllers is experienced. An in-depth multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method exhibits easier implementation in most control applications.
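As an illustration of the discrete PID structure discussed above, the sketch below implements a position-form PID update and runs it against a crude DC motor model integrated with Euler steps; the gains, sample time, and plant constants are illustrative assumptions, not the values used in the paper's Simulink models.

```python
# Hedged sketch: a discrete position-form PID (derivative on the measurement
# to avoid setpoint kick) driving a crude DC-motor position model integrated
# with Euler steps. Gains, sample time and plant constants are illustrative.
class DiscretePID:
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_meas = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts
        derivative = (measurement - self.prev_meas) / self.ts
        self.prev_meas = measurement
        return self.kp * error + self.ki * self.integral - self.kd * derivative

ts = 0.001                               # 1 kHz sample rate (assumed)
pid = DiscretePID(kp=10.0, ki=1.0, kd=0.5, ts=ts)

position, velocity = 0.0, 0.0            # toy motor state (rad, rad/s)
for _ in range(2000):                    # simulate 2 s
    u = pid.update(setpoint=1.0, measurement=position)
    velocity += ts * (5.0 * u - 2.0 * velocity)   # rough voltage-to-acceleration model
    position += ts * velocity

print(f"position after 2 s: {position:.3f} (setpoint 1.0)")
```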

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 262
1045 Effect of Oxidative Stress on Glutathione Reductase Activity of Escherichia coli Clinical Isolates from Patients with Urinary Tract Infection

Authors: Fariha Akhter Chowdhury, Sabrina Mahboob, Anamika Saha, Afrin Jahan, Mohammad Nurul Islam

Abstract:

Urinary tract infection (UTI) is frequently experienced by the female population, and its prevalence increases with aging. Escherichia coli, one of the most common UTI-causing organisms, retains a glutathione defense mechanism that helps the organism withstand the harsh physiological environment of the urinary tract and the host oxidative immune response, and even affects antibiotic-mediated cell death and the emergence of resistance. In this study, we aimed to investigate the glutathione reductase activity of uropathogenic E. coli (UPEC) by observing the alteration of reduced glutathione (GSH) levels under stressful conditions. Urine samples from 58 patients with UTI were collected. Upon isolation and identification, 88% of the samples presented E. coli as the UTI-causing organism, among which randomly selected isolates (n=9), obtained from urine samples of female patients, were considered for this study. The E. coli isolates were grown under normal and stressful conditions, where H₂O₂ was used as the stress-inducing agent. GSH level estimation of the isolates in both conditions was carried out based on the colorimetric measurement of the 5,5'-dithio-bis (2-nitrobenzoic acid) (DTNB) and GSH reaction product using a microplate reader assay. The GSH level of isolated E. coli sampled from adult patients decreased under stress compared to the normal condition (p = 0.011). On the other hand, GSH production increased markedly in samples collected from elderly subjects (p = 0.024). A significant partial correlation between age and change of GSH level was found as well (p = 0.007). This study may help to reveal ways toward a better understanding of E. coli pathogenesis and UTI prevalence in elderly patients.

Keywords: Escherichia coli, glutathione reductase activity, oxidative stress, reduced glutathione (GSH), urinary tract infection (UTI)

Procedia PDF Downloads 318
1044 Scalable UI Test Automation for Large-scale Web Applications

Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani

Abstract:

This research mainly concerns optimizing UI test automation for large-scale web applications. The test target application is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionalities. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases in the development process to confirm that there are no regression bugs. The team automated 346 test cases; the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, and the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language to write test code leads to an inefficient flow when introducing scalability into a traditional test automation environment; in order to introduce scalability into test automation efficiently, a scripting language was adopted. The scalability mechanism is mainly implemented with AWS serverless technology, the Elastic Container Service. The definition of scalability here is the ability to automatically set up computers for test automation and to increase or decrease the number of computers running those tests. This means the scalable mechanism can help test cases run in parallel, so the test execution time is dramatically decreased. Introducing scalable test automation is also about more than just reducing test execution time: there is a possibility that some challenging bugs, such as race conditions, are detected, since test cases can be executed at the same time. If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing, since they do not reach the front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system, and it confirmed the optimization of the test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, then describes the actual reduction in execution time and an example of challenging issue detection.
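To make the setup concrete, below is a minimal sketch of one Selenium/Python UI check of the kind that can be packaged into a container image and fanned out across many parallel containers; the URL, element locators, and credentials are hypothetical placeholders rather than HHAexchange's actual test code.

```python
# Hedged sketch of a single Selenium/Python UI check suitable for running
# headless inside a container. URL, locators and credentials are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_login_shows_dashboard():
    options = Options()
    options.add_argument("--headless=new")          # no display inside a container
    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://example.test/login")    # hypothetical environment URL
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("qa_password")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        header = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.CSS_SELECTOR, "h1.dashboard-title")))
        assert "Dashboard" in header.text
    finally:
        driver.quit()
```

Each container bundles the browser, the driver, and a slice of the suite, so scaling out is simply a matter of launching more such containers in parallel, which is the design choice the scalable mechanism above relies on.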

Keywords: aws, elastic container service, scalability, serverless, ui automation test

Procedia PDF Downloads 97
1043 Structural and Binding Studies of Peptidyl-tRNA Hydrolase from Pseudomonas aeruginosa Provide a Platform for the Structure Based Inhibitor Design against Peptidyl-tRNA Hydrolase

Authors: Sujata Sharma, Avinash Singh, Lovely Gautam, Pradeep Sharma, Mau Sinha, Asha Bhushan, Punit Kaur, Tej P. Singh

Abstract:

Peptidyl-tRNA hydrolase (Pth) is an essential bacterial enzyme that catalyzes the release of free tRNA and peptide moieties from peptidyl-tRNAs during stalling of protein synthesis. In order to design inhibitors of Pth from Pseudomonas aeruginosa (PaPth), we have determined the structures of PaPth in its native state and in its bound states with two compounds, an aminoacylate-tRNA analogue (AAtA) and 5-azacytidine (AZAC). The peptidyl-tRNA hydrolase gene from Pseudomonas aeruginosa was amplified by Phusion High-Fidelity DNA Polymerase using forward and reverse primers. The E. coli BL21 (λDE3) strain was used for expression of the recombinant peptidyl-tRNA hydrolase from Pseudomonas aeruginosa, and the protein was purified using a Ni-NTA Superflow column. Crystallization experiments were carried out using the hanging drop vapour diffusion method. The crystals diffracted to 1.50 Å resolution, and the data were processed using HKL-2000. The polypeptide chain of PaPth consists of 194 amino acid residues, from Met1 to Ala194. The centrally located β-structure is surrounded by α-helices on all sides except the side containing the entrance to the substrate binding site. The structures of the complexes of PaPth with AAtA and AZAC showed the ligands bound in the substrate binding cleft, interacting extensively with protein atoms. The residues that formed intermolecular hydrogen bonds with atoms of AAtA included Asn12, His22, Asn70, Gly113, Asn116, Ser148, and Glu161 of the symmetry-related molecule. The amino acids involved in hydrogen-bonded interactions in the case of AZAC included His22, Gly113, Asn116, and Ser148. As indicated by the fitting of the two ligands and the number of interactions they make with protein atoms, AAtA appears to be more compatible with the structure of the substrate binding cleft. However, there is further scope to improve the stacking of the O-tyrosyl moiety, which is not yet ideally stacked. These observations have provided information about the mode of binding of the ligands and the nature and number of their interactions, which may be useful for the design of tight inhibitors of Pth enzymes.
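As an illustration of the kind of ligand-protein contact survey described above (not the authors' workflow), the sketch below lists protein residues within a 3.5 Å contact distance of a bound ligand using Biopython; the coordinate file name and ligand residue name are hypothetical, and a distance cutoff alone identifies close contacts rather than confirmed hydrogen bonds.

```python
# A minimal sketch, assuming a deposited PaPth-ligand complex in "papth_aata.pdb"
# with the ligand modelled as hetero residue "AAT" (both names are placeholders).
from Bio.PDB import PDBParser, NeighborSearch

parser = PDBParser(QUIET=True)
structure = parser.get_structure("PaPth", "papth_aata.pdb")

protein_atoms = [a for a in structure.get_atoms()
                 if a.get_parent().get_id()[0] == " "]        # standard residues only
ligand_atoms = [a for a in structure.get_atoms()
                if a.get_parent().get_resname() == "AAT"]     # hypothetical ligand name

ns = NeighborSearch(protein_atoms)
contacts = set()
for atom in ligand_atoms:
    for near in ns.search(atom.coord, 3.5):                   # 3.5 Å close-contact cutoff
        res = near.get_parent()
        contacts.add((res.get_resname(), res.get_id()[1]))

# Residues contacting the ligand, ordered by sequence number
print(sorted(contacts, key=lambda r: r[1]))
```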

Keywords: peptidyl tRNA hydrolase, Acinetobacter baumannii, Pth enzymes, O-tyrosyl

Procedia PDF Downloads 426
1042 Supply Chain Collaboration Comparison Practices between Developed and Developing Countries

Authors: Maria Jose Granero Paris, Ana Isabel Jimenez Zarco, Agustin Pablo Alvarez Herranz

Abstract:

In the industrial sector, collaboration along the supply chain is key, especially for developing product, production method, or process innovations. Access to resources and knowledge not available inside the company, the achievement of cost-competitive solutions, and the reduction of the time required to innovate are some of the benefits linked to collaboration with suppliers. Large industrial manufacturers in developed countries have a long tradition of collaborating with their suppliers to develop new products. Since these manufacturers have expanded their global supply chains and global sourcing activities, the objective of this research is to analyse whether the same best practices, ways of working, experiences, information technology tools, and governance methodologies are applied when collaborating with suppliers in the developed world and in developing countries. Most current research analyses supply chain collaboration in developed countries, and in recent years the number of publications on supply chain collaboration in developing countries has increased, but there is still a lack of research comparing the two and analysing the similarities, differences, and key success factors among supply chain collaboration practices in developed and developing countries. With this gap in mind, the research under preparation will focus on the following goals: identify the most important elements required for successful supply chain collaboration in developed and developing countries; set up an optimal governance framework to manage supply chain collaboration in developed and developing countries; and define recommendations for improving current supply chain collaboration business relationship practices. Following the case study methodology, we will analyse how manufacturers and suppliers collaborate in the development of new products, production methods, or process innovations and in the setup of new global supply chains in two industries with different levels of technology intensity and collaboration history: the automotive and aerospace industries.

Keywords: global supply chain networks, Supply Chain Collaboration, supply chain governance, supply chain performance

Procedia PDF Downloads 592
1041 Engineered Bio-Coal from Pressed Seed Cake for Removal of 2, 4, 6-Trichlorophenol with Parametric Optimization Using Box–Behnken Method

Authors: Harsha Nagar, Vineet Aniya, Alka Kumari, Satyavathi B.

Abstract:

In the present study, engineered bio-coal was produced from pressed seed cake, which is otherwise non-edible in origin. The production process involves slow pyrolysis, in which, based on the optimization of process parameters, a substantial reduction of 77% in the H/C and O/C ratios was achieved with respect to the original ratios of 1.67 and 0.8, respectively. The bio-coal product was found to have a higher heating value of 29,899 kJ/kg, with a surface area of 17 m²/g and a pore volume of 0.002 cc/g. Functional characterization of the bio-coal and its subsequent modification were carried out to enhance its active sites, and the material was then used as an adsorbent for removal of the herbicide 2,4,6-trichlorophenol (2,4,6-TCP) from an aqueous stream. The point of zero charge of the bio-coal was found at pH < 3, where its surface is positively charged and attracts anions, resulting in maximum 2,4,6-TCP adsorption at pH 2.0. Parametric optimization of the adsorption process was studied based on the Box-Behnken design with the desirability approach. The results showed optimum values of adsorption efficiency of 74.04% and uptake capacity of 118.336 mg/g for an initial concentration of 250 mg/L and a particle size of 0.12 mm at pH 2.0 and 1 g/L of bio-coal loading. Negative Gibbs free energy change values indicated the feasibility of 2,4,6-TCP adsorption on the bio-coal, and the decrease in ΔG values with rising temperature indicated that adsorption is more favourable at low temperatures. The equilibrium modeling results showed that both the Langmuir and Freundlich isotherms accurately predicted the equilibrium data, which may be attributed to the different affinities of the functional groups of the bio-coal for 2,4,6-TCP. Based on the kinetic data modeling, the likely mechanisms of 2,4,6-TCP adsorption are physisorption (pore diffusion, π-π electron donor-acceptor interaction, H-bonding, and van der Waals dispersion forces) and chemisorption (chemical bonding with phenolic and amine groups).
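For readers unfamiliar with the equilibrium models named above, the sketch below fits the Langmuir and Freundlich isotherms to purely hypothetical equilibrium data; it is an illustration of the standard fitting procedure, not the study's actual data or parameter values.

```python
# A minimal sketch, assuming hypothetical Ce/qe pairs for 2,4,6-TCP on bio-coal.
# Uptake capacity is conventionally computed as q = (C0 - Ce) * V / m.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # qe = KF * Ce**(1/n)
    return KF * Ce ** (1.0 / n)

# Hypothetical equilibrium concentrations (mg/L) and uptakes (mg/g), for illustration only
Ce = np.array([10.0, 25.0, 50.0, 100.0, 150.0, 200.0])
qe = np.array([30.0, 55.0, 80.0, 100.0, 110.0, 116.0])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[120.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[10.0, 2.0])
print(f"Langmuir:   qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")
```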

Keywords: engineered bio-coal, 2,4,6-trichlorophenol, Box-Behnken design, biosorption

Procedia PDF Downloads 112
1040 Orbit Determination from Two Position Vectors Using Finite Difference Method

Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.

Abstract:

An unconventional approach is developed to determine the orbits of satellites/space objects. Orbit determination is treated as a boundary value problem and solved using the finite difference method (FDM). Only the positions of the satellite/space object at two end times are known, and these are taken as the boundary conditions; the finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is the satellite's equation of motion with perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized, and the resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology was tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. For this, twelve two-hour sets were considered, and only the positions at the end times of each of the twelve sets were used as boundary conditions. The algorithm was applied to all GPS satellites, and the FDM results were compared with the NGA precise orbits: the maximum RSS error is 0.48 m for position and 0.43 mm/s for velocity. The algorithm was also applied to the IRNSS satellites for DOY 220, 2023, where the maximum RSS error is 0.49 m for position and 0.28 mm/s for velocity. Next, a simulation was performed for a highly elliptical orbit for DOY 63, 2023, over a duration of 6 hours. The RSS of the difference in position is 0.92 m and in velocity 1.58 mm/s for orbital speeds above 5 km/s, whereas the RSS of the difference in position is 0.13 m and in velocity 0.12 mm/s for orbital speeds below 5 km/s. The results show that the newly created method is reliable and accurate. Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
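As a minimal sketch of the solver named above, the code below implements the Thomas algorithm (TDMA) and applies it to a toy 1D two-point boundary value problem with positions fixed at the two end times. In the actual method each unknown is a position vector and the right-hand side carries the perturbed two-body acceleration, with the nonlinear system iterated to convergence; the values here are illustrative assumptions only.

```python
# A minimal TDMA sketch for a tridiagonal system arising from a central-difference
# discretization of u'' = f with positions known at both end times.
import numpy as np

def tdma_solve(a, b, c, d):
    """Thomas algorithm. a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy demo: u'' = -g over [0, T] with u(0) and u(T) given, as a stand-in for one
# coordinate of the satellite equation of motion (hypothetical numbers).
g, T, N = 9.81, 10.0, 100
h = T / N
u0, uT = 0.0, 0.0
a = np.ones(N - 1); b = -2.0 * np.ones(N - 1); c = np.ones(N - 1)
d = np.full(N - 1, -g * h * h)
d[0] -= u0; d[-1] -= uT                 # move boundary positions to the RHS
u_interior = tdma_solve(a, b, c, d)     # positions between the two end times
```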

Keywords: finite difference method, grid generation, NavIC system, orbit perturbation

Procedia PDF Downloads 77
1039 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites host very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the website is being put to correct use. Web logs are usually only examined after a major attack or malfunction occurs, yet they contain a great deal of interesting information about users of the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy because of the size and distribution of the logs and the importance of minor details in each entry. Web logs contain very important data about users and the site that are not being put to good use. Retrieving interesting information from logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks, malfunctions of the system, and anomalies. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is used only in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. The DBSCAN and EM algorithms are then used iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, the algorithms evaluate their own results in order to feed better parameter values into the next run. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, each cluster is annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Association Rule Learning Module; if it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
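A minimal sketch of the clustering-and-scoring loop described above is shown below, assuming hypothetical session feature vectors (for example, counts of indexed URLs per session); it is not the authors' implementation, and only the silhouette coefficient is used here for brevity, whereas the paper also uses homogeneity, completeness, V-measure, and cluster-distance measures.

```python
# A minimal sketch: cluster web-session feature vectors with DBSCAN and an
# EM-based Gaussian mixture, and score candidate parameter values with the
# silhouette coefficient to pick the better clustering.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
sessions = rng.random((500, 20))   # hypothetical per-session feature vectors

results = []
for eps in (0.3, 0.5, 0.8):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(sessions)
    if len(set(labels)) > 1:                      # silhouette needs at least 2 clusters
        results.append((silhouette_score(sessions, labels), f"DBSCAN eps={eps}"))

for k in (3, 5, 8):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(sessions)
    results.append((silhouette_score(sessions, labels), f"EM k={k}"))

print("best clustering:", max(results))           # highest silhouette wins
```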

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 280
1038 Production of Vermiwash from Medicinal Plants and Its Potential Use as Fungicide against the Alternaria Alternata (fr.) Keissl. Affecting Cucumber (Cucumis sativus L.) in Guyana

Authors: Abdullah Ansari, Sinika Rambaran, Sirpaul Jaikishun

Abstract:

Vermiwash could be used to enhance plant productivity and resistance to some harmful plant pathogens, as well as provide benefit through the disposal of waste matter. Alternaria rot, caused by the fungus Alternaria alternata (Fr.) Keissl., a common soil-borne pathogen, results in postharvest fruit rot of cucumbers, peppers, and other cash crops. The production and distribution of Cucumis sativus L. (cucumber) could be severely affected by Alternaria rot. Fungicides are the traditional treatment; however, they are not only expensive but can also cause environmental and health problems. Vermiwash was prepared from various medicinal plants (Ocimum tenuiflorum L. {Tulsi}, Azadirachta indica A. Juss. {neem}, Cymbopogon citratus (DC. ex Nees) Stapf. {lemon grass}, and Oryza sativa L. {paddy straw}) and applied, in vitro, to A. alternata to investigate their effectiveness as organic alternatives to traditional fungicides. All of the vermiwash samples inhibited the growth of A. alternata, and the inhibitive effect appeared strongest when A. indica and O. tenuiflorum were used in the production of the vermiwash. Using the serial dilution method, vermiwash from O. tenuiflorum showed the highest percent inhibition (93.2%), followed by C. citratus (74.7%), A. indica (68.7%), O. sativa, the combination, and the combination without worms. Using the sterile disc diffusion method, all of the samples produced zones of inhibition against A. alternata. Vermiwash from A. indica produced a zone of inhibition averaging 15.3 mm, followed by O. tenuiflorum (14.0 mm), the combination without worms, the combination, C. citratus, and O. sativa; nystatin produced a zone of inhibition of 10 mm. The results indicate that vermiwash is not simply an organic alternative to more traditional chemical fungicides, but may in fact be a better and more effective product for treating certain fungal plant infections, particularly A. alternata.
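For clarity on how the inhibition percentages above are typically derived, the sketch below applies the standard percent-inhibition formula for fungal growth, PI = (C - T) / C x 100, where C is growth in the untreated control and T is growth in the treated plate; the growth values shown are hypothetical and chosen only to illustrate the order of the reported figures.

```python
# A minimal sketch, assuming the conventional percent-inhibition formula;
# the diameters are illustrative, not the study's measurements.
def percent_inhibition(control_growth_mm, treated_growth_mm):
    # PI = (C - T) / C * 100
    return (control_growth_mm - treated_growth_mm) / control_growth_mm * 100.0

print(f"{percent_inhibition(75.0, 5.1):.1f}% inhibition")   # ~93%, the order reported for O. tenuiflorum
```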

Keywords: vermiwash, earthworms, soil, bacteria, alternaria alternata, antifungal, antibacterial

Procedia PDF Downloads 245
1037 Analysis of Human Toxicity Potential of Major Building Material Production Stage Using Life Cycle Assessment

Authors: Rakhyun Kim, Sungho Tae

Abstract:

Global environmental issues such as abnormal weather due to global warming, resource depletion, and ecosystem distortion have been escalating with rapid population growth and the expansion of industrial and economic development. Accordingly, many countries have implemented initiatives to protect the environment through indirect regulation methods such as Environmental Product Declarations (EPD), in addition to direct regulations such as various emission standards. Following this trend, life cycle assessment (LCA) techniques that provide quantitative environmental information for buildings, such as Human Toxicity Potential (HTP), are being developed in the construction industry. At present, however, studies on the environmental databases of building materials are not sufficient to support this adequately. The purpose of this study is to analyse the human toxicity potential of the production stage of major building materials using life cycle assessment. For this purpose, a theoretical review of life cycle assessment and environmental impact categories was performed, and the direction of the study was set. The major materials were selected from a global warming potential perspective for the building, and a life cycle inventory database was chosen. Classification was performed for the 17 kinds of substances and impact indices, such as human toxicity potential, specified in CML 2001, and the human toxicity potential of the building material production stage was calculated through characterization. The environmental impacts of building materials in the same category were also analysed based on the characterization results calculated in this study. In this study, environmental impact coefficients of major building materials were established in compliance with ISO 14040. This is believed to effectively support stakeholders' decisions to improve the environmental performance of buildings and to provide a basis for the voluntary participation of architects in environmentally conscious activities.
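To make the characterization step concrete, the sketch below computes an HTP score as the sum of inventory emissions multiplied by their characterization factors (in kg 1,4-DCB-eq per kg of emission), which is the CML-style calculation; all inventory amounts and factors shown are hypothetical placeholders, not values from this study or the CML 2001 tables.

```python
# A minimal sketch of characterization: HTP = sum(emission_i * CF_i).
inventory = {            # emissions per functional unit of a building material (kg) - assumed values
    "benzene_to_air": 0.002,
    "nickel_to_air": 0.0001,
    "arsenic_to_water": 0.00005,
}
htp_factors = {          # hypothetical characterization factors (kg 1,4-DCB-eq per kg emission)
    "benzene_to_air": 1900.0,
    "nickel_to_air": 35000.0,
    "arsenic_to_water": 140.0,
}

htp = sum(amount * htp_factors[substance] for substance, amount in inventory.items())
print(f"Human toxicity potential: {htp:.2f} kg 1,4-DCB-eq per functional unit")
```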

Keywords: human toxicity potential, major building material, life cycle assessment, production stage

Procedia PDF Downloads 129