Search results for: Sundar Industrial Estate
2584 A High Content Screening Platform for the Accurate Prediction of Nephrotoxicity
Authors: Sijing Xiong, Ran Su, Lit-Hsin Loo, Daniele Zink
Abstract:
The kidney is a major target for the toxic effects of drugs, industrial and environmental chemicals, and other compounds. Typically, nephrotoxicity is detected late during drug development, and regulatory animal models have not solved this problem. No validated or accepted in silico or in vitro methods for the prediction of nephrotoxicity are available. We have established the first and currently only pre-validated in vitro models for the accurate prediction of nephrotoxicity in humans, as well as the first predictive platforms based on renal cells derived from human pluripotent stem cells. To further improve the efficiency of our predictive models, we recently developed a high content screening (HCS) platform. This platform employs automated imaging in combination with automated quantitative phenotypic profiling and machine learning methods. A total of 129 image-based phenotypic features were analyzed with respect to their predictive performance for 44 compounds with different chemical structures, including drugs, environmental and industrial chemicals, and herbal and fungal compounds. The nephrotoxicity of these compounds in humans is well characterized. A combination of chromatin and cytoskeletal features resulted in high predictivity with respect to nephrotoxicity in humans. Test balanced accuracies of 82% or 89% were obtained with human primary or immortalized renal proximal tubular cells, respectively. Furthermore, our results revealed that a DNA damage response is commonly induced by different PTC-toxicants with diverse chemical structures and injury mechanisms. Together, the results show that the automated HCS platform allows efficient and accurate nephrotoxicity prediction for compounds with diverse chemical structures.
Keywords: high content screening, in vitro models, nephrotoxicity, toxicity prediction
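The test balanced accuracies quoted above (82% and 89%) are means of per-class recall, which keeps a majority class of non-toxic compounds from inflating the score. A minimal stdlib sketch of the metric on toy labels, not the study's 44-compound data:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall, the metric reported in the abstract."""
    classes = set(y_true)
    recalls = []
    for c in classes:
        true_c = [i for i, y in enumerate(y_true) if y == c]
        hits = sum(1 for i in true_c if y_pred[i] == c)
        recalls.append(hits / len(true_c))
    return sum(recalls) / len(recalls)

# Toy nephrotoxicity calls (1 = toxic, 0 = non-toxic); illustrative only
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
print(round(balanced_accuracy(y_true, y_pred), 2))  # 0.71
```

Plain accuracy on the same toy labels would be 0.70; with a heavily imbalanced compound set the two metrics can diverge far more, which is why the balanced variant is the safer report.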
Procedia PDF Downloads 312
2583 Predicting Foreign Direct Investment of IC Design Firms from Taiwan to East and South China Using Lotka-Volterra Model
Authors: Bi-Huei Tsai
Abstract:
This work explores the inter-region investment behaviors of the integrated circuit (IC) design industry from Taiwan to China using the amount of foreign direct investment (FDI). Given the mutual dependence among different IC design industrial locations, the Lotka-Volterra model is utilized to explore the FDI interactions between South and East China. Effects of inter-regional collaborations on FDI flows into China are considered. Evolutions of FDIs into South China for the IC design industry significantly inspire the subsequent FDIs into East China, while FDIs into East China for Taiwan's IC design industry significantly hinder the subsequent FDIs into South China. The supply chain of the IC industry includes IC design, manufacturing, packaging, and testing enterprises. The IC manufacturing, packaging, and testing industries depend on the IC design industry to gain advanced business benefits. The FDI amount from Taiwan's IC design industry into East China is the greatest among the four regions: North, East, Mid-West, and South China. The FDI amount from Taiwan's IC design industry into South China is the second largest. If IC design houses buy more equipment and bring more capital into South China, those in East China will be under pressure to undertake more FDIs into East China to maintain the leading position of the supply chain in East China. On the other hand, as the FDIs in East China rise, the FDIs in South China will successively decline since capital has concentrated in East China. Prediction of FDI trends with the Lotka-Volterra model is accurate because the industrial interactions between the two regions are included. Finally, this work confirms that the FDI flows cannot reach a stable equilibrium point, so the FDI inflows into East and South China will expand in the future.
Keywords: Lotka-Volterra model, foreign direct investment, competition, equilibrium analysis
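The Lotka-Volterra interaction described above (South China FDI inspiring East China FDI, while East China FDI hinders South China FDI) can be sketched as a discrete-time simulation. All coefficients below are illustrative placeholders, not values estimated from the paper's FDI data:

```python
def lv_step(south, east, dt=0.1):
    """One Euler step of a two-region Lotka-Volterra FDI model.

    Signs follow the abstract: East hinders South (c12 enters negatively),
    South inspires East (c21 enters positively). Coefficients are invented.
    """
    a1, a2 = 0.5, 0.4      # intrinsic FDI growth rates
    b1, b2 = 0.01, 0.008   # self-limiting (saturation) terms
    c12 = 0.02             # hindering effect of East on South
    c21 = 0.015            # inspiring effect of South on East
    ds = south * (a1 - b1 * south - c12 * east)
    de = east * (a2 - b2 * east + c21 * south)
    return south + dt * ds, east + dt * de

south, east = 10.0, 20.0
for _ in range(200):
    south, east = lv_step(south, east)
print(round(south, 1), round(east, 1))
```

With these made-up coefficients East's FDI grows while South's declines; the paper's fitted coefficients, by contrast, imply no stable equilibrium and continued expansion in both regions.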
Procedia PDF Downloads 363
2582 Performance Evaluation of Solid Lubricant Characteristics at Different Sliding Conditions
Authors: Suresh Kumar Reddy Narala, Rakesh Kumar Gunda
Abstract:
In modern industry, mechanical parts are subjected to friction and wear, leading to heat generation, which affects the reliability, life, and power consumption of machinery. To overcome tribological losses due to friction and wear, a lubricant with suitable viscous properties allows very smooth relative motion between two sliding surfaces. Advancement in modern tribology has facilitated the use of solid lubricants in various industrial applications. Solid lubricant additives that form a viscous thin film between the sliding surfaces can adequately wet and adhere to a work surface. In the present investigation, an attempt has been made to investigate and evaluate the tribological behavior of various solid lubricants, such as MoS2, graphite, and boric acid, at different sliding conditions. The base oil used in this study was SAE 40 oil with a viscosity of 220 cSt at 40°C. The tribological properties were measured on a pin-on-disc tribometer. An experimental set-up has been developed for effective supply of solid lubricants to the pin-disc interface zone. The results obtained from the experiments show that the friction coefficient increases with an increase in applied load for all the considered environments. The tribological properties with the MoS2 solid lubricant exhibit a larger load-carrying capacity than those with graphite and boric acid. The present research work also contributes to the understanding of the behavior of the film thickness distribution of solid lubricants using the potential contact technique under different sliding conditions. The results presented in this research work are expected to form a scientific basis for selecting the best solid lubricant in various industrial applications for possible minimization of friction and wear.
Keywords: friction, wear, temperature, solid lubricant
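The pin-on-disc measurement reduces to the ratio of measured tangential (friction) force to applied normal load. A trivial sketch with assumed readings, not the study's data, showing a coefficient that rises with load as the abstract reports:

```python
def friction_coefficient(tangential_force_n, normal_load_n):
    """mu = F_t / F_N, as read from a pin-on-disc tribometer."""
    return tangential_force_n / normal_load_n

# Illustrative readings: tangential force grows faster than load,
# so the friction coefficient rises with load (values are assumed)
loads = [20.0, 40.0, 60.0]   # applied normal loads, N
forces = [1.6, 3.6, 6.0]     # measured tangential forces, N
mus = [round(friction_coefficient(f, w), 3) for f, w in zip(forces, loads)]
print(mus)  # [0.08, 0.09, 0.1]
```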
Procedia PDF Downloads 348
2581 Characterization of Biocomposites Based on Mussel Shell Wastes
Authors: Suheyla Kocaman, Gulnare Ahmetli, Alaaddin Cerit, Alize Yucel, Merve Gozukucuk
Abstract:
Shell wastes represent a considerable quantity of byproducts in shellfish aquaculture. From the viewpoint of eco-friendly and economical disposal, it is highly desirable to convert these residues into high value-added products for industrial applications. So far, the utilization of shell wastes has been confined to relatively low-value applications, e.g., as a wastewater decontaminant, soil conditioner, fertilizer constituent, feed additive, and liming agent. Shell wastes consist of calcium carbonate and organic matrices, with the former accounting for 95-99% by weight. Being the richest source of biogenic CaCO3, shell wastes are suitable for preparing high-purity CaCO3 powders, which have been extensively applied in various industrial products, such as paper, rubber, paints, and pharmaceuticals. Furthermore, shell waste can be further processed into a filler for polymer composites. This paper presents a study on the potential use of mussel shell waste as a biofiller to produce composite materials with different epoxy matrices, such as bisphenol-A type, CTBN-modified, and polyurethane-modified epoxy resins. The morphology and mechanical properties of shell-particle-reinforced epoxy composites were evaluated to assess the possibility of using them as a new material. The effects of shell particle content on the mechanical properties of the composites were investigated. It was shown that in all composites, the tensile strength and Young's modulus values increase with the increase of mussel shell particle content from 10 wt% to 50 wt%, while the elongation at break decreases, compared to the pure epoxy resin. The highest Young's modulus values were determined for the bisphenol-A type epoxy composites.
Keywords: biocomposite, epoxy resin, mussel shell, mechanical properties
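The reported stiffening with filler content is consistent with a simple rule-of-mixtures estimate. The sketch below uses the Voigt upper bound with assumed moduli for CaCO3 and epoxy; treating weight fraction as volume fraction is a simplification for illustration only (a real estimate would convert via densities):

```python
def voigt_modulus(e_filler, e_matrix, filler_frac):
    """Voigt (isostrain) upper bound: E_c = V_f*E_f + (1 - V_f)*E_m."""
    return filler_frac * e_filler + (1 - filler_frac) * e_matrix

# Illustrative moduli in GPa (assumed, not measured in the paper)
E_CACO3, E_EPOXY = 35.0, 3.0
for wt in (0.1, 0.3, 0.5):
    # weight fraction used in place of volume fraction, for illustration
    print(wt, round(voigt_modulus(E_CACO3, E_EPOXY, wt), 1))
```

The estimate rises monotonically from 10 wt% to 50 wt% filler, matching the trend in Young's modulus the abstract describes.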
Procedia PDF Downloads 314
2580 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0
Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini
Abstract:
Nowadays, companies are facing many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which can create challenges around data management and governance. Furthermore, it is also challenging to integrate data from multiple systems and technologies. Despite these pains, companies still pursue digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating new business models and revenue streams. This paper focuses on the issue that data is stored in data silos with different schemas and structures. The conventional approaches to addressing this issue involve utilizing data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models as an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates the Asset Administration Shell technology to model and map the company's data and utilizes a knowledge graph for data storage and exploration.
Keywords: data interoperability in Industry 4.0, digital integration, industrial dictionary, semantic modeling
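Semantic lifting of silo data can be pictured as mapping source-specific field names onto shared dictionary concepts and emitting triples for a knowledge graph. The field names and IRIs below are invented stand-ins, not real ECLASS or AAS identifiers:

```python
# Hypothetical field-to-concept mapping; the IRIs are made-up stand-ins
# for ECLASS/AAS dictionary entries, not real identifiers.
MAPPING = {
    "temp_c":   "https://example.org/eclass#Temperature",
    "serial":   "https://example.org/eclass#SerialNumber",
    "pres_bar": "https://example.org/eclass#Pressure",
}

def lift(record, source):
    """Semantic lifting: turn one silo record into (subject, predicate,
    value) triples; unmapped fields are dropped rather than guessed."""
    subject = f"{source}/{record['serial']}"
    return [(subject, MAPPING[k], v) for k, v in record.items() if k in MAPPING]

triples = lift({"serial": "A-17", "temp_c": 81.5, "unused": 0}, "plant1")
print(len(triples))  # 2: only the fields known to the shared dictionary
```

Once lifted, records from differently-structured silos share predicates, so a single graph query spans all sources.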
Procedia PDF Downloads 94
2579 Solvent-Aided Dispersion of Tannic Acid to Enhance Flame Retardancy of Epoxy
Authors: Matthew Korey, Jeffrey Youngblood, John Howarter
Abstract:
Background and Significance: Tannic acid (TA) is a bio-based, high-molecular-weight, aromatic organic molecule that has been found to increase the thermal stability and flame retardancy of many polymer matrices when used as an additive. Although it is biologically sourced, TA is a pollutant in industrial wastewater streams, and there is a desire to find applications in which to downcycle this molecule after extraction from these streams. Additionally, epoxy thermosets have revolutionized many industries but are too flammable to be used in many applications without additives that augment their flame retardancy (FR). Many flame retardants used in epoxy thermosets are synthesized from petroleum-based monomers, leading to significant environmental impacts at the industrial scale. Many of these compounds also have significant impacts on human health. Various bio-based modifiers have been developed to improve the FR of epoxy resin; however, increasing the FR of the system without trade-offs in other properties has proven challenging, especially for TA. Methodologies: In this work, TA was incorporated into the thermoset by solvent exchange using methyl ethyl ketone, a co-solvent for TA and epoxy resin. Samples were then characterized optically (UV-vis spectroscopy and optical microscopy), thermally (thermogravimetric analysis and differential scanning calorimetry), and for their flame retardancy (mass loss calorimetry). Major Findings: Compared to control samples, all samples were found to have increased thermal stability. Further, the addition of tannic acid to the polymer matrix by the use of solvent greatly increased the compatibility of the additive in epoxy thermosets. By using solvent exchange, the highest loading level of TA reported in the literature (40 wt%) was achieved in this work. Conclusions: The use of solvent exchange shows promise for circumventing the limitations of TA in epoxy.
Keywords: sustainable, flame retardant, epoxy, tannic acid
Procedia PDF Downloads 130
2578 Fatigue Influence on the Residual Stress State in Shot Peened Duplex Stainless Steel
Authors: P. D. Pedrosa, J. M. A. Rebello, M. P. Cindra Fonseca
Abstract:
Duplex stainless steels (DSS) exhibit a biphasic microstructure consisting of austenite and delta ferrite. Their high resistance to oxidation and corrosion, even in H2S-containing environments, allied to low cost compared to conventional stainless steel, are some of the properties that make this material very attractive for several industrial applications. However, several of these industrial applications impose cyclic loading on the equipment, and in consequence fatigue damage needs to be a concern. A well-known way of improving the fatigue life of a component is by introducing compressive residual stress at its surface. Shot peening is an industrial working process which puts the material directly beneath the component surface into a state of high mechanical compression, thus inhibiting fatigue crack initiation. However, one must take into account the fact that the cyclic loading itself can reduce and even suppress these residual stresses, thus having undesirable consequences for the process of improving fatigue life by the introduction of compressive residual stresses. In the present work, shot peening was used to introduce residual stresses in several DSS samples. These were thereafter submitted to three different fatigue regimes: low-, medium-, and high-cycle fatigue. The evolution of the residual stress during loading was then examined on both the surface and subsurface of the samples. The DSS UNS S31803 was used, with a microstructure composed of 49% austenite and 51% ferrite. The shot peening treatment was accomplished by blasting at two Almen intensities, 0.25 and 0.39A. The residual stresses were measured by X-ray diffraction using the double exposure method and portable equipment with CrK radiation, using the (211) diffracting plane for the austenite phase and the (220) plane for the ferrite phase. It is known that residual stresses may arise when two regions of the same material experience different degrees of plastic deformation.
When these regions are separated from each other on a scale that is large compared to the material's microstructure, they are called macrostresses. In contrast, microstresses can vary greatly over distances that are small compared to the scale of the material's microstructure and must balance to zero between the phases present. In the present work, special attention is paid to the measurement of residual microstresses. Residual stress measurements were carried out in test pieces submitted to low-, medium-, and high-cycle fatigue, in both the longitudinal and transverse directions of the test pieces. It was found that after shot peening, the residual microstress is tensile in the austenite phase and compressive in the ferrite phase. It was hypothesized that the hardening behavior of the austenite after shot peening was probably due to its higher nitrogen content. Fatigue cycling can effectively change this stress state, but this effect was found to be dependent on the shot peening intensity as well as the fatigue range.
Keywords: residual stresses, fatigue, duplex steel, shot peening
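The macro/micro stress bookkeeping above can be made concrete: the macrostress is the phase-fraction-weighted mean of the phase stresses, and the microstresses (each phase's deviation from it) balance to zero when weighted by phase fraction. Only the 49%/51% phase fractions come from the abstract; the phase stresses below are assumed values, not the paper's measurements:

```python
def macro_stress(frac_a, sigma_a, frac_f, sigma_f):
    """Phase-fraction-weighted mean of the phase stresses (MPa)."""
    return frac_a * sigma_a + frac_f * sigma_f

def micro_stresses(frac_a, sigma_a, frac_f, sigma_f):
    """Deviation of each phase stress from the macrostress; their
    phase-weighted sum balances to zero, as the abstract notes."""
    macro = macro_stress(frac_a, sigma_a, frac_f, sigma_f)
    return sigma_a - macro, sigma_f - macro

# Assumed phase stresses after shot peening: tensile austenite,
# compressive ferrite, with the abstract's 49/51 phase fractions
macro = macro_stress(0.49, 120.0, 0.51, -310.0)
print(round(macro, 1))  # -99.3 MPa net macrostress
```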
Procedia PDF Downloads 228
2577 A Study of Problems and Needs of the Garment Industries in the Nonthaburi and Bangkok Areas
Authors: Thepnarintra Praphanphat
Abstract:
The purposes of this study were to investigate the garment industry's condition, problems, and needs for assistance. The population of the study was 504 managers or managing directors of finished-apparel garment establishments holding permission of the Department of Industrial Works 28, Ministry of Industry, as of January 1, 2012. The sample size, determined using the Taro Yamane formula at a 95% confidence level with ±5% deviation, was 224 managers. Questionnaires were used to collect the data. Percentage, frequency, arithmetic mean, standard deviation, t-test, ANOVA, and LSD were used to analyze the data. It was found that most establishments were large, had operated in the form of a limited company for more than 15 years, and mostly produced garments for working women. All investment was made by Thai people. The products were made to order and distributed domestically and internationally. Total sales for 2010, 2011, and 2012 were almost the same. With respect to the problems of operating the business, the study indicated that, as a whole, by aspects, and by items, they were at a high level. The comparison of the level of problems of operating a garment business, as classified by general condition, showed that problems occurring in businesses of different sizes were, as a whole, not different. Taking aspects into consideration, it was found that the level of problems in relation to production was different: medium establishments had more problems in production than small and large ones. According to the by-items analysis, five problems were found to differ, namely problems concerning employees, machine maintenance, the number of designers, and price competition. Such problems in the medium establishments were at a higher level than those in the small and large establishments. Regarding business age, the examination yielded no differences as a whole, by aspects, or by items.
The statistical significance level of this study was set at .05.
Keywords: garment industry, garment, fashion, competitive enhancement project
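The sample-size step can be checked with the Taro Yamane formula, n = N / (1 + N·e²). For N = 504 and e = 0.05 it yields about 223, close to the 224 managers reported (presumably rounded up). A one-function sketch:

```python
def yamane_sample_size(population, error=0.05):
    """Taro Yamane formula: n = N / (1 + N * e**2)."""
    return population / (1 + population * error ** 2)

# Population of 504 managers at a 95% confidence level (e = 0.05),
# as described in the abstract
print(round(yamane_sample_size(504)))  # 223
```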
Procedia PDF Downloads 187
2576 Fluctuations in Radical Approaches to State Ownership of the Means of Production Over the Twentieth Century
Authors: Tom Turner
Abstract:
The recent financial crisis of 2008 and the growing inequality in developed industrial societies would appear to present significant challenges to capitalism and the free market. Yet there have been few substantial mainstream political or economic challenges to the dominant capitalist and market paradigm to date. There is no dearth of critical and theoretical (academic) analyses of the prevailing system's failures. Yet despite the growing inequality in developed industrial societies and the financial crisis of 2008, few commentators have, to our knowledge, advocated the comprehensive socialization or state ownership of the means of production – a core principle of radical Marxism in the 19th and early 20th century. Undoubtedly, the experience in the Soviet Union and satellite countries in the 20th century has cast a dark shadow over the notion of centrally controlled economies and state ownership of the means of production. In this paper, we explore the history of a doctrine advocating the socialization or state ownership of the means of production that was central to Marxism and socialism generally. Indeed, this doctrine provoked an intense and often acrimonious debate, especially for left-wing parties, throughout the 20th century. The debate within the political economy tradition has historically tended to divide into a radical and a revisionist approach to changing or reforming capitalism. The radical perspective views the conflict of interest between capital and labor as a persistent and insoluble feature of a capitalist society and advocates the public or state ownership of the means of production. Alternatively, the revisionist perspective focuses on issues of distribution rather than production and emphasizes the possibility of compromise between capital and labor in capitalist societies. Over the 20th century, the radical perspective has faded, and even the social democratic revisionist tradition has declined in recent years.
We conclude with the major challenges that confront both the radical and revisionist perspectives in the development of viable policy agendas in mature developed democratic societies. Additionally, we consider whether state ownership of the means of production still has relevance in the 21st century and to what extent state ownership is off the agenda as a political issue in the political mainstream of developed industrial societies. A central argument in the paper is that state ownership of the means of production is unlikely to feature as either a practical or theoretical solution to the problems of capitalism after the financial crisis among mainstream political parties of the left. Although the focus here is solely on the shifting views of the radical and revisionist socialist perspectives in the Western European tradition, the analysis has relevance for the wider socialist movement.
Keywords: state ownership, ownership of the means of production, radicals, revisionists
Procedia PDF Downloads 119
2575 Challenges, Practices, and Opportunities of Knowledge Management in Industrial Research Institutes: Lessons Learned from Flanders Make
Authors: Zhenmin Tao, Jasper De Smet, Koen Laurijssen, Jeroen Stuyts, Sonja Sioncke
Abstract:
Today, the quality of knowledge management (KM) has become one of the underpinning factors in the success of an organization, as it determines the effectiveness of capitalizing on the organization's knowledge. Overall, KM in an organization consists of five aspects: (knowledge) creation, validation, presentation, distribution, and application. Among others, KM in research institutes is considered the cornerstone, as their activities cover all five aspects. Furthermore, KM in a research institute facilitates the steering committee to envision the future roadmap, identify knowledge gaps, and make decisions on future research directions. Likewise, KM is even more challenging in industrial research institutes. From a technical perspective, technology advancement in the past decades calls for combinations of breadth and depth in expertise that pose challenges in talent acquisition and, therefore, knowledge creation. From a regulatory perspective, the strict intellectual property protection from industry collaborators and/or the contractual agreements made by possible funding authorities form extra barriers to knowledge validation, presentation, and distribution. From a management perspective, seamless KM activities are only guaranteed by inter-disciplinary talents that combine technical background knowledge, management skills, and leadership, let alone international vision. From a financial perspective, the long feedback period of new knowledge, together with the massive upfront investment costs and low reusability of the fixed assets, leads to a low return on research capital (RORC) that jeopardizes KM practice. In this study, we aim to address the challenges, practices, and opportunities of KM in Flanders Make – a leading European research institute specialized in the manufacturing industry. In particular, the analyses encompass an internal KM project which involves functionalities ranging from management to technical domain experts.
This wide range of functionalities provides comprehensive empirical evidence on the challenges and practices w.r.t. the abovementioned KM aspects. Then, we ground our analysis on the critical dimensions of KM – individuals, socio-organizational processes, and technology. The analyses have three steps: First, we lay the foundation and define the environment of this study by briefing the KM roles played by different functionalities in Flanders Make. Second, we zoom in to the CoreLab MotionS, where the KM project is located. In this step, given the technical domains covered by MotionS products, the challenges in KM are addressed w.r.t. the five KM aspects and three critical dimensions. Third, by detailing the objectives, practices, results, and limitations of the MotionS KM project, we justify the practices and opportunities derived in the execution of KM w.r.t. the challenges addressed in the second step. The results of this study are twofold: First, a KM framework that consolidates past knowledge is developed. A library based on this framework can therefore 1) overlook past research output, 2) accelerate ongoing research activities, and 3) envision future research projects. Second, the challenges in KM on both the individual level (actions) and the socio-organizational level (e.g., interactions between individuals) are identified. By doing so, suggestions and guidelines are provided for KM in the context of an industrial research institute. To this end, the results of this study are reflected against the findings in the existing literature.
Keywords: technical knowledge management framework, industrial research institutes, individual knowledge management, socio-organizational knowledge management
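The knowledge-graph library mentioned above can be pictured as a minimal in-memory triple store; the class and the example triples below are illustrative stand-ins, not part of the Flanders Make system:

```python
class KnowledgeGraph:
    """Minimal in-memory triple store for consolidating research output;
    an illustrative sketch of the graph storage the abstract mentions."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Pattern match: None acts as a wildcard for that slot."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

kg = KnowledgeGraph()
kg.add("projectX", "producedBy", "MotionS")       # hypothetical entries
kg.add("projectX", "coversDomain", "drivetrain control")
kg.add("projectY", "producedBy", "MotionS")
print(len(kg.query(predicate="producedBy", obj="MotionS")))  # 2
```

A query across such a store is what lets the steering committee "overlook past research output" without digging through per-project silos.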
Procedia PDF Downloads 116
2574 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data from 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure. But the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the result shows that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. The result indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity concern regarding the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model.
This study addresses the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies indicates that the hedonic price of certain environmental assets varies spatially when GWR is applied. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that the omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses. The effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically weighted regression
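GWR lets the flood-risk coefficient vary over space by refitting a weighted regression at each location, down-weighting distant observations with a kernel. A single-predictor sketch with a Gaussian kernel on toy data, not the Taipei transactions:

```python
import math

def gaussian_weight(dist, bandwidth):
    """GWR kernel: observations closer to the regression point count more."""
    return math.exp(-0.5 * (dist / bandwidth) ** 2)

def local_slope(xs, ys, dists, bandwidth):
    """Weighted least-squares slope at one regression point
    (single predictor, closed form)."""
    w = [gaussian_weight(d, bandwidth) for d in dists]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    return num / den

# Toy data: flood-risk score vs. log-price discount, with each
# observation's distance (km) to the regression point (all illustrative)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.00, -0.02, -0.05, -0.09]
dists = [0.5, 1.0, 4.0, 8.0]
print(round(local_slope(xs, ys, dists, bandwidth=2.0), 3))
```

Repeating the fit at many regression points yields a surface of local coefficients, which is exactly the spatial sensitivity the abstract reports.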
Procedia PDF Downloads 290
2573 Training Undergraduate Engineering Students in Robotics and Automation through Model-Based Design Training: A Case Study at Assumption University of Thailand
Authors: Sajed A. Habib
Abstract:
Problem-based learning (PBL) is a student-centered pedagogy that originated in the medical field and has since been used extensively in other knowledge disciplines, with recognized advantages and limitations. PBL has been used in various undergraduate engineering programs with mixed outcomes. The current fourth industrial revolution (the digital era, or Industry 4.0) has made it essential for many science and engineering students to receive effective training in advanced courses such as industrial automation and robotics. This paper presents a case study at Assumption University of Thailand, where a PBL-like approach was used to teach some aspects of automation and robotics to selected groups of undergraduate engineering students. These students were given basic training in automation before participating in a subsequent training session in which they solved technical problems of increased complexity. The participating students' evaluation of the training sessions in terms of learning effectiveness, skills enhancement, and incremental knowledge following the problem-solving session was captured through a follow-up survey consisting of 14 questions with a 5-point scoring system. From the most recent training event, 70% of the respondents indicated that their skill levels were enhanced to a much greater level than they had had before the training, whereas 60.4% of the respondents from the same event indicated that their incremental knowledge following the session was much greater than what they had prior to the training. The instructor-facilitator involved in the training events suggested that this method of learning is more suitable for senior/advanced-level students than for freshmen, as certain skills needed to participate effectively in such problem-solving sessions are acquired over a period of time, not instantly.
Keywords: automation, industry 4.0, model-based design training, problem-based learning
Procedia PDF Downloads 134
2572 An Industrial Steady State Sequence Disorder Model for Flow Controlled Multi-Input Single-Output Queues in Manufacturing Systems
Authors: Anthony John Walker, Glen Bright
Abstract:
The challenge faced by manufacturers when producing custom products is that each product needs exact components. This can cause work-in-process instability due to component matching constraints imposed on assembly cells. Clearing-type flow control policies have been used extensively in mediating server access between multiple arrival processes. Although the stability and performance of clearing policies have been well formulated and studied in the literature, the growth in arrival-to-departure sequence disorder for each arriving job, across a serving resource, is still an area for further analysis. In this paper, a closed-form industrial model is formulated that characterizes arrival-to-departure sequence disorder in stable manufacturing systems under a clearing-type flow control policy. Specifically addressed are the effects of sequence disorder imposed on a downstream assembly cell in terms of work-in-process instability induced through component matching constraints. Results from a simulated manufacturing system show that steady-state average sequence disorder in parallel upstream processing cells can be balanced in order to decrease downstream assembly system instability. Simulation results also show that the closed-form model accurately describes the growth and limiting behavior of average sequence disorder between parts arriving at and departing from a manufacturing system flow-controlled via a clearing policy.
Keywords: assembly system constraint, custom products, discrete sequence disorder, flow control
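Arrival-to-departure sequence disorder can be quantified, for illustration, as the number of job pairs whose relative order is inverted between arrival and departure; zero means FIFO-perfect sequencing. This inversion count is one simple disorder measure, not necessarily the paper's exact definition:

```python
def sequence_disorder(arrival_order, departure_order):
    """Count job pairs whose relative order is inverted between arrival
    and departure (0 means the server preserved FIFO order)."""
    pos = {job: i for i, job in enumerate(departure_order)}
    inversions = 0
    n = len(arrival_order)
    for i in range(n):
        for j in range(i + 1, n):
            if pos[arrival_order[i]] > pos[arrival_order[j]]:
                inversions += 1
    return inversions

# Jobs arrive 1..5, but a clearing policy lets some jobs overtake others
print(sequence_disorder([1, 2, 3, 4, 5], [2, 1, 3, 5, 4]))  # 2
```

Tracking this count per upstream cell is one way to observe the "balancing" of average disorder across parallel cells that the simulation results describe.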
Procedia PDF Downloads 178
2571 Application of Industrial Ergonomics in Vehicle Service System Design
Authors: Zhao Yu, Zhi-Nan Zhang
Abstract:
More and more interactive devices are used in transportation service systems. Mobile phones, on-board computers, and Head-Up Displays (HUDs) can all serve as tools of the in-car service system. People can access smart systems through different terminals such as mobile phones, computers, pads, and even their cars and watches. Different forms of terminals yield different qualities of interaction through their various human-computer interaction modes, so new interactive devices require good ergonomic design at each stage of the design process. Drawing on the theory of human factors and ergonomics, this paper compared three types of interactive devices across four driving tasks. Forty-eight drivers experienced the three interactive devices (mobile phones, on-board computers, and HUDs) in a simulated driving process. The subjects evaluated ergonomic performance and subjective workload after the process and were encouraged to offer suggestions for improving the interactive devices. The results show that different interactive devices have different advantages in driving tasks, and especially in non-driving tasks such as information and entertainment. Compared with the mobile phone and on-board computer groups, the HUD groups had shorter response times in most tasks. Performance in the slow-down and emergency-braking tasks was less accurate than that of a control group, which may be because the haptic feedback of these two tasks is harder to distinguish than visual information. Simulated driving is also helpful in improving the design of in-vehicle interactive devices. The paper summarizes the ergonomic characteristics of the three in-vehicle interactive devices.
The research provides a reference for the future design of in-vehicle interactive devices through an ergonomic approach, ensuring a good interaction relationship between the driver and the in-vehicle service system.
Keywords: human factors, industrial ergonomics, transportation system, usability, vehicle user interface
Procedia PDF Downloads 139
2570 Extraction, Synthesis, Characterization and Antioxidant Properties of Oxidized Starch from an Abundant Source in Nigeria
Authors: Okafor E. Ijeoma, Isimi C. Yetunde, Okoh E. Judith, Kunle O. Olobayo, Emeje O. Martins
Abstract:
Starch has gained interest as a renewable and environmentally compatible polymer due to the increase in its use. However, starch by itself cannot be satisfactorily applied in industrial processes because of inherent disadvantages such as its hydrophilic character, poor mechanical properties, and inability to withstand processing conditions such as extreme temperatures, diverse pH, high shear rates, freeze-thaw variation, and demands on dimensional stability. The range of physical properties of a parent starch can be enlarged by chemical modification, which invariably enhances its use in a number of applications in industrial processing and food manufacture. In this study, Manihot esculentus starch was subjected to modification by oxidation. Fourier Transform Infrared (FTIR) and Raman spectroscopies were used to confirm the synthesis, while Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD) were used to characterize the new polymer. The DPPH (2,2-diphenyl-1-picryl-hydrazyl-hydrate) free radical assay was used to determine the antioxidant property of the oxidized starch. Our results show that the modification had no significant effect on the foaming capacity or the emulsion capacity. Scanning electron microscopy revealed that oxidation did not alter the predominantly circular-shaped starch granules, while the X-ray patterns of the native and modified starches were similar. FTIR results revealed new bands at 3007 and 3283 cm⁻¹. Differential scanning calorimetry returned two new endothermic peaks in the oxidized starch, with an improved gelation capacity and increased enthalpy of gelatinization. The IC50 of the oxidized starch was notably higher than that of the reference standard, ascorbic acid.
Keywords: antioxidant activity, DPPH, M. esculentus, oxidation, starch
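As context for the DPPH result: IC50, the concentration giving 50% radical scavenging, is commonly read off a monotone dose-response series by interpolation. A minimal sketch follows; the concentrations and inhibition values are purely hypothetical, not the study's measurements.

```python
def ic50(concentrations, inhibitions):
    """Linearly interpolate the concentration giving 50% DPPH radical
    scavenging from a monotone dose-response series."""
    pairs = list(zip(concentrations, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:  # 50% lies between these two doses
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the data")

conc = [25, 50, 100, 200, 400]           # ug/mL (hypothetical)
inhib = [18.0, 32.0, 47.0, 63.0, 80.0]   # % scavenging (hypothetical)
print(ic50(conc, inhib))
```

Because IC50 is the dose needed to reach 50% scavenging, a higher IC50 means weaker antioxidant activity, which is why the oxidized starch's IC50 exceeding that of ascorbic acid indicates lower activity than the reference standard.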
Procedia PDF Downloads 298
2569 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation
Authors: Samuel Ahamefula Mba
Abstract:
Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insight into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of the work is to develop a numerical framework that describes and analyses fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work necessitates deriving the Poisson equation for pressure from the Navier-Stokes equations and resolving it effectively using Chebyshev finite-difference techniques. For the mathematical analysis, bounded domains with smooth solutions and non-periodic boundary conditions are considered. A hybrid computational approach combining direct numerical simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large Eddy simulation with wall models, direct numerical simulation
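The pressure Poisson equation referred to above follows from taking the divergence of the incompressible Navier-Stokes momentum equation; in standard notation:

```latex
% Momentum and continuity:
%   \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
%     = -\tfrac{1}{\rho}\,\nabla p + \nu\,\nabla^2\mathbf{u},
%   \qquad \nabla\cdot\mathbf{u} = 0.
% Taking the divergence and using incompressibility eliminates the
% unsteady and viscous terms, leaving a Poisson equation for pressure:
\nabla^2 p
  = -\rho\,\nabla\cdot\big[(\mathbf{u}\cdot\nabla)\mathbf{u}\big]
  = -\rho\,\frac{\partial u_i}{\partial x_j}\,\frac{\partial u_j}{\partial x_i}
```

Solving this elliptic equation for the pressure at each step is what the Chebyshev finite-difference machinery is needed for, particularly on the bounded, non-periodic domains considered here.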
Procedia PDF Downloads 94
2568 Chemicals to Remove and Prevent Biofilm
Authors: Cynthia K. Burzell
Abstract:
Aequor's Founder, a Marine and Medical Microbiologist, discovered novel, non-toxic chemicals in the ocean that uniquely remove biofilm in minutes and prevent its formation for days. These chemicals, and the over 70 synthesized analogs that Aequor developed, can replace thousands of toxic biocides used in consumer and industrial products and, as new drug candidates, kill biofilm-forming bacterial and fungal "Superbugs", the antimicrobial-resistant (AMR) pathogens for which there is no cure. Cynthia Burzell, Ph.D., is a Marine and Medical Microbiologist studying natural mechanisms that inhibit biofilm formation on surfaces in contact with water. In 2002, she discovered a new genus and several new species of marine microbes that produce small molecules that remove biofilm in minutes and prevent its formation for days. The molecules include new antimicrobials that can replace thousands of toxic biocides used in consumer and industrial products and can be developed into new drug candidates to kill the biofilm-forming bacteria and fungi, including the antimicrobial-resistant (AMR) Superbugs for which there is no cure. Today, Aequor has over 70 chemicals that are divided into categories: (1) Novel natural chemicals. Lonza validated that the primary natural chemical removed biofilm in minutes and stated: "Nothing else known can do this at non-toxic doses." (2) Specialty chemicals. 25 of these structural analogs are already approved under the U.S. Environmental Protection Agency (EPA)'s Toxic Substances Control Act, certified as "green" and available for immediate sale. These have been validated for the following agro-industrial verticals: (a) Surface cleaners: The U.S.
Department of Agriculture validated that low concentrations of Aequor's formulations provide deep cleaning of inert, nano, and organic surfaces and materials; (b) Water treatments: NASA validated that one dose of Aequor's treatment in the International Space Station's water reuse/recycling system lasted 15 months without replenishment. DOE validated that our treatments lower energy consumption by over 10% in buildings and industrial processes. Future validations include pilot projects with the EPA to test efficacy in hospital plumbing systems. (c) Algae cultivation and yeast fermentation: The U.S. Department of Energy (DOE) validated that Aequor's treatment boosted biomass of renewable feedstocks by 40% in half the time, increasing the profitability of biofuels and biobased co-products. DOE also validated increased yields and crop protection of algae under cultivation in open ponds. A private oil and gas company validated decontamination of oilfield water. (3) New structural analogs. These kill Gram-negative and Gram-positive bacteria and fungi alone, in combinations with each other, and in combination with low doses of existing, ineffective antibiotics (including Penicillin), "potentiating" them to kill AMR pathogens at doses too low to trigger resistance. Both the U.S. National Institutes of Health (NIH) and the Department of Defense (DOD) have executed contracts with Aequor to provide the pre-clinical trials needed for these new drug candidates to enter the regulatory approval pipelines. Aequor seeks partners/licensees to commercialize its specialty chemicals, and support to evaluate the optimal methods to scale up several new structural analogs via activity-guided fractionation and/or biosynthesis in order to initiate the NIH and DOD pre-clinical trials.
Keywords: biofilm, potentiation, prevention, removal
Procedia PDF Downloads 99
2567 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms
Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat
Abstract:
In general, issues related to design and maintenance are considered in an independent manner. However, the decisions made in these two sets influence each other. The design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical field, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with product architecture, a choice of components in terms of cost, reliability, weight and other attributes, corresponding to the specifications. On the other hand, the design must take into account maintenance by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that different approaches used in the Design For Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a method of DFM that assists designers to propose dynamic maintenance for multi-component industrial systems. The term "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data). The maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps the designers choose technical solutions for the large-scale industrial products. 
"Large-scale" refers to complex multi-component industrial systems with long life cycles, such as trains, aircraft, etc. The method is based on a two-level hybrid algorithm for simultaneous optimization of design and maintenance using genetic algorithms. The first level selects a design solution for a given system, considering the life cycle cost and the reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost, and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), the tool provides visibility of overall costs and optimal product architecture.
Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization
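The two-level structure can be sketched as nested optimization: an outer genetic search over per-component design variables, with an inner search (here, over a maintenance interval) embedded in the fitness evaluation. Everything below (the cost models, the discrete levels, all parameters) is an illustrative stand-in, not the authors' formulation.

```python
import random

random.seed(1)
N_COMP = 4          # number of components
LEVELS = [0, 1, 2]  # discrete reliability/maintainability levels (toy scale)

def design_cost(design):
    return sum(10.0 * lvl for lvl in design)          # higher level = pricier part

def inner_maintenance_cost(design):
    """Inner level: pick the maintenance interval minimizing cost for
    this design (a stand-in for the dynamic MFOP-based plan)."""
    best = float("inf")
    for interval in range(1, 11):
        expected_failures = sum(1.0 / (1 + lvl) for lvl in design) * interval
        cost = 100.0 / interval + 5.0 * expected_failures  # stop cost vs failures
        best = min(best, cost)
    return best

def fitness(design):                                   # total life-cycle cost proxy
    return design_cost(design) + inner_maintenance_cost(design)

def ga(pop_size=20, gens=30):
    """Outer level: elitist genetic search over design vectors."""
    pop = [[random.choice(LEVELS) for _ in range(N_COMP)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                 # elitism: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_COMP)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                  # mutation
                child[random.randrange(N_COMP)] = random.choice(LEVELS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
print(best, round(fitness(best), 2))
```

The key design choice mirrored here is that the inner solver runs once per fitness call, so the outer GA only ever compares designs together with their best achievable maintenance plan.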
Procedia PDF Downloads 118
2566 Performance Evaluation and Kinetics of Artocarpus heterophyllus Seed for the Purification of Paint Industrial Wastewater by Coagulation-Flocculation Process
Authors: Ifeoma Maryjane Iloamaeke, Kelvin Obazie, Mmesoma Offornze, Chiamaka Marysilvia Ifeaghalu, Cecilia Aduaka, Ugomma Chibuzo Onyeije, Claudine Ifunanaya Ogu, Ngozi Anastesia Okonkwo
Abstract:
This work investigated the effects of pH, settling time, and coagulant dosages on the removal of color, turbidity, and heavy metals from paint industrial wastewater using the seed of Artocarpus heterophyllus (AH) by the coagulation-flocculation process. The paint effluent was physicochemically characterized, while AH coagulant was instrumentally characterized by Scanning Electron Microscope (SEM), Fourier Transform Infrared (FTIR), and X-ray diffraction (XRD). A Jar test experiment was used for the coagulation-flocculation process. The result showed that paint effluent was polluted with color, turbidity (36000 NTU), mercury (1.392 mg/L), lead (0.252 mg/L), arsenic (1.236 mg/L), TSS (63.40mg/L), and COD (121.70 mg/L). The maximum color removal efficiency was 94.33% at the dosage of 0.2 g/L, pH 2 at a constant time of 50 mins, and 74.67% at constant pH 2, coagulant dosage of 0.2 g/L and 50 mins. The highest turbidity removal efficiency was 99.94% at 0.2 g/L and 50 mins at constant pH 2 and 96.66% at pH 2 and 0.2 g/L at constant time of 50 mins. The mercury removal efficiency of 99.29% was achieved at the optimal condition of 0.8 g/L coagulant dosage, pH 8, and constant time of 50 mins and 99.57% at coagulant dosage of 0.8 g/L, time of 50 mins constant pH 8. The highest lead removal efficiency was 99.76% at a coagulant dosage of 10 g/L, time of 40 mins at constant pH 10, and 96.53% at pH 10, coagulant dosage of 10 g/L and constant time of 40 mins. For arsenic, the removal efficiency is 75.24 % at 0.8 g/L coagulant dosage, time of 40 mins, and constant pH of 8. XRD imaging before treatment showed that Artocarpus heterophyllus coagulant was crystalline and changed to amorphous after treatment. The SEM and FTIR results of the AH coagulant and sludge suggested there were changes in the surface morphology and functional groups before and after treatment. 
The reaction kinetics were best described by a second-order model.
Keywords: Artocarpus heterophyllus, coagulation-flocculation, coagulant dosages, settling time, paint effluent
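For reference, removal efficiencies of the kind quoted above follow from the standard formula E = 100·(C₀ − Cₜ)/C₀, and a second-order model is conventionally fitted as a straight line of 1/Cₜ against t whose slope is the rate constant. A small sketch follows; the 21.6 NTU residual turbidity and the kinetic series are assumed for illustration, not taken from the study.

```python
def removal_efficiency(c0, ct):
    """Percent removal from initial (c0) and residual (ct) values of a
    pollutant indicator such as turbidity or a metal concentration."""
    return 100.0 * (c0 - ct) / c0

def second_order_fit(times, concentrations):
    """Least-squares slope of 1/C_t versus t; for second-order kinetics
    1/C_t = 1/C_0 + k2*t, this slope is the rate constant k2."""
    y = [1.0 / c for c in concentrations]
    n = len(times)
    mx = sum(times) / n
    my = sum(y) / n
    num = sum((t - mx) * (yi - my) for t, yi in zip(times, y))
    den = sum((t - mx) ** 2 for t in times)
    return num / den

# The reported 99.94% turbidity removal is consistent with a residual of
# about 21.6 NTU against the initial 36000 NTU (residual assumed here):
print(removal_efficiency(36000, 21.6))
```

The linearized fit is also a quick diagnostic: if 1/Cₜ versus t is markedly non-linear, a second-order description is not appropriate for the data.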
Procedia PDF Downloads 95
2565 Comprehensive Risk Analysis of Decommissioning Activities with Multifaceted Hazard Factors
Authors: Hyeon-Kyo Lim, Hyunjung Kim, Kune-Woo Lee
Abstract:
The decommissioning of nuclear facilities can be said to consist of a sequence of problem-solving activities, partly because working environments may be contaminated by radiological exposure, and partly because there may also exist industrial hazards such as fire, explosions, toxic materials, and electrical and physical hazards. For individual hazard factors, risk assessment techniques have become known to industrial workers with the advance of safety technology, but how to integrate their results has not. Furthermore, few workers have extensive past experience of decommissioning operations. Therefore, quite a few countries have been trying to develop appropriate techniques to guarantee the safety and efficiency of the process. In spite of that, there still exists neither a domestic nor an international standard, since nuclear facilities are too diverse and unique. Consequently, it is quite inevitable to anticipate and assess the whole risk of each situation one by one. This paper aimed to find an appropriate technique for integrating individual risk assessment results from the viewpoint of experts. Thus, on one hand, the whole risk assessment activity for decommissioning operations was modeled as a sequence of individual risk assessment steps, and on the other, a hierarchical risk structure was developed. Then, a risk assessment procedure that can elicit individual hazard factors one by one was introduced with reference to the standard operating procedure (SOP) and hierarchical task analysis (HTA). Under an assumption of quantification and normalization of individual risks, a technique to estimate relative weight factors was tried using the conventional Analytic Hierarchy Process (AHP), and its result was reviewed against the judgment of experts.
In addition, taking the ambiguity of human judgment into consideration, a discussion based upon fuzzy inference was added with a mathematical case study.
Keywords: decommissioning, risk assessment, analytic hierarchy process (AHP), fuzzy inference
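AHP weight estimation of the kind mentioned above reduces to finding the principal eigenvector of a pairwise-comparison matrix. A minimal sketch follows, using power iteration and a hypothetical 3x3 judgment matrix (not the study's expert data).

```python
def ahp_weights(matrix, iters=100):
    """Relative weights as the principal eigenvector of a pairwise
    comparison matrix, found by repeated multiplication and
    normalization (power iteration)."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [wi / s for wi in w]          # normalize so weights sum to 1
    return w

# Hypothetical pairwise comparisons of three hazard groups (e.g.
# radiological vs. fire/explosion vs. electrical) on Saaty's 1-9 scale;
# entry [i][j] says how much more important hazard i is than hazard j:
A = [[1.0,     3.0, 5.0],
     [1.0/3.0, 1.0, 2.0],
     [1.0/5.0, 0.5, 1.0]]
weights = ahp_weights(A)
print([round(w, 3) for w in weights])
```

In a full AHP workflow one would also compute the consistency ratio of the judgment matrix before trusting the weights; that check is omitted here for brevity.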
Procedia PDF Downloads 424
2564 Mapping the Early History of Common Law Education in England, 1292-1500
Authors: Malcolm Richardson, Gabriele Richardson
Abstract:
This paper illustrates how historical problems can be studied successfully using GIS, even in cases in which data, in the modern sense, are fragmentary. The overall problem under investigation is how early (1300-1500) English schools of Common Law moved from apprenticeship training in random individual London inns, run in part by clerks of the royal chancery, to become what is widely called 'the Third University of England,' a recognized system of independent but connected legal inns. This paper focuses on the preparatory legal inns, called the Inns of Chancery, rather than the senior (and still existing) Inns of Court. The immediate problem studied in this paper is how the junior legal inns were organized, staffed, and located from 1292 to about 1500, and what maps tell us about the role of the chancery clerks as managers of legal inns. The authors first uncovered the names of all chancery clerks of the period, most of them unrecorded in histories, from archival sources in the National Archives, Kew. They then matched the names with London property leases. Using ArcGIS, the legal inns and their owners were plotted on a series of maps covering the period 1292 to 1500. The results show a distinct pattern of ownership of the legal inns and suggest a narrative that would help explain why the Inns of Chancery became serious centers of learning during the fifteenth century. In brief, lower-ranking chancery clerks, always looking for sources of income, discovered by 1370 that legal inns could be a source of income. Since chancery clerks were intimately involved with writs and other legal forms, and since the chancery itself had a long-standing training system, these clerks opened their own legal inns to train fledgling lawyers, estate managers, and scriveners. The maps clearly show growth patterns of ownership by the chancery clerks of both legal inns and other London properties in the areas of Holborn and The Strand between 1370 and 1417.
However, the maps also show that a royal ordinance of 1417 forbidding chancery clerks to live with lawyers, law students, and other non-chancery personnel had an immediate effect, and properties in that area of London leased by chancery clerks simply stop after 1417. The long-term importance of the patterns shown in the maps is that while the presence of chancery clerks in the legal inns likely created a more coherent education system, their removal forced the legal profession, suddenly without a hostelry managerial class, to professionalize the inns and legal education themselves. Given the number and social status of members of the legal inns, the effect on English education was to free legal education from the limits of chancery clerk education (the clerks were not practicing common lawyers) and to enable it to become broader in theory and practice, in fact, a kind of 'finishing school' for the governing (if not noble) class.Keywords: GIS, law, London, education
Procedia PDF Downloads 174
2563 Newspaper Headlines as Tool for Political Propaganda in Nigeria: Trend Analysis of Implications on Four Presidential Elections
Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi
Abstract:
The role of the media in political discourse cannot be overemphasized, as the media form an important part of societal development. The media institution is considered the fourth estate of the realm because it serves as a check and balance on the arms of government (Executive, Legislature, and Judiciary), especially in a democratic setup, and makes public office holders accountable to the people. The media scrutinize political candidates and conduct a holistic analysis of the achievements of the government in order to make the people's representatives accountable to the electorate. The media in Nigeria play a seminal role in shaping how people vote during elections. Newspaper headlines are catchy phrases that easily capture the attention of the audience and call them to action. Research conducted on newspaper headlines has looked at the linguistic aspect and how the tenses used affect people's attitudes and behaviour. Communication scholars have also conducted studies that interrogate whether newspaper headlines influence people's voting patterns and decisions. Propaganda and negative stories about political opponents are staple features of electioneering campaigns. Nigerian newspaper readers characteristically scan newspaper headlines, and the question is whether politicians have effectively played into this tendency to brand opponents negatively, based on half-truths and inadequate information. This study illustrates major trends in the Nigerian political landscape by looking at the past four presidential elections and frames the progress of the research within the extant body of political propaganda research in Africa. The study will use quantitative content analysis of newspaper headlines from 2007 to 2019 to ascertain whether newspaper headlines had any effect on the results of the presidential elections during those years.
This will be supplemented by Key Informant Interviews of political scientists or experts to draw further inferences from the quantitative data. Drawing on headlines of selected newspapers in Nigeria that have a political propaganda angle on the presidential elections, the analysis will correspond to and complement extant descriptions of how the field of political propaganda has developed in Nigeria, providing evidence from four presidential elections that have shaped Nigerian politics. Understanding the development of behavioural change in the electorate provides useful context for trend analysis in political propaganda communication. The findings will contribute to understanding how newspaper headlines are used, partly or wholly, to decide the outcome of presidential elections in Nigeria.
Keywords: newspaper headlines, political propaganda, presidential elections, trend analysis
Procedia PDF Downloads 235
2562 A Philosophical Investigation into African Conceptions of Personhood in the Fourth Industrial Revolution
Authors: Sanelisiwe Ndlovu
Abstract:
Cities have become testbeds for automation and for experimenting with artificial intelligence (AI) in managing urban services and public spaces. Smart Cities and AI systems are changing most human experiences, from health and education to personal relations. For instance, in healthcare, social robots are being implemented as tools to assist patients. Similarly, in education, social robots are being used as tutors or co-learners to promote cognitive and affective outcomes. With that general picture in mind, one can now ask a further question about Smart Cities and artificial agents and their moral standing in the African context of personhood. There has been a wealth of literature on the topic of personhood; however, there is an absence of literature on African personhood in highly automated environments. Personhood in African philosophy is defined by the role one can and should play in the community. However, in today's technologically advanced world, there is a risk that machines will become more capable of accomplishing tasks that humans would otherwise do. Further, on many African communitarian accounts, personhood and moral standing are associated with active relationality with the community, yet in the Smart City, human closeness is gradually diminishing. For instance, humans already engage and identify with robotic entities, sometimes even romantically. The primary aim of this study is to investigate how African conceptions of personhood and community interact in a highly automated environment such as the Smart City. Accordingly, the significance of this study lies in presenting a rarely discussed African perspective that emphasizes the necessity and importance of relationality in handling Smart Cities and AI ethically. The proposed approach can thus be seen as a sub-Saharan African contribution to personhood and the growing AI debates, one that takes the reality of the interconnectedness of society seriously.
It will also open up new opportunities to tackle old problems and to use existing resources to confront new problems in the Fourth Industrial Revolution.
Keywords: smart city, artificial intelligence, personhood, community
Procedia PDF Downloads 202
2561 Impact of Collieries on Groundwater in Damodar River Basin
Authors: Rajkumar Ghosh
Abstract:
The industrialization of coal mining and related activities has a significant impact on groundwater in the surrounding areas of the Damodar River. The Damodar River basin, located in eastern India, is known as the "Ruhr of India" due to its abundant coal reserves and extensive coal mining and industrial operations. One of the major consequences of collieries on groundwater is the contamination of water sources. Coal mining activities often involve the excavation and extraction of coal through underground or open-pit mining methods. These processes can release various pollutants and chemicals into the groundwater, including heavy metals, acid mine drainage, and other toxic substances. As a result, the quality of groundwater in the Damodar River region has deteriorated, making it unsuitable for drinking, irrigation, and other purposes. The high concentration of heavy metals, such as arsenic, lead, and mercury, in the groundwater has posed severe health risks to the local population. Prolonged exposure to contaminated water can lead to various health problems, including skin diseases, respiratory issues, and even long-term ailments like cancer. The contamination has also affected the aquatic ecosystem, harming fish populations and other organisms dependent on the river's water. Moreover, the excessive extraction of groundwater for industrial processes, including coal washing and cooling systems, has resulted in a decline in the water table and depletion of aquifers. This has led to water scarcity and reduced availability of water for agricultural activities, impacting the livelihoods of farmers in the region. Efforts have been made to mitigate these issues through the implementation of regulations and improved industrial practices. However, the historical legacy of coal industrialization continues to impact the groundwater in the Damodar River area. 
Remediation measures, such as the installation of water treatment plants and the promotion of sustainable mining practices, are essential to restore the quality of groundwater and ensure the well-being of the affected communities. In conclusion, coal industrialization in the Damodar River surroundings has had a detrimental impact on groundwater. This research focuses on soil subsidence induced by the over-exploitation of groundwater for dewatering open-pit coal mines. Soil degradation happens in arid and semi-arid regions as a result of land subsidence in coal mining regions, which reduces soil fertility. Depletion of aquifers, contamination, and water scarcity are some of the key challenges resulting from these activities. It is crucial to prioritize sustainable mining practices, environmental conservation, and the provision of clean drinking water to mitigate the long-lasting effects of collieries on the groundwater resources in the region.
Keywords: coal mining, groundwater, soil subsidence, water table, Damodar River
Procedia PDF Downloads 80
2560 Assistive Kitchenware Design for Hemiparetics
Authors: Daniel F. Madrinan-Chiquito
Abstract:
Hemiparesis affects about eight out of ten stroke survivors, causing weakness or the inability to move one side of the body. One-sided weakness can affect arms, hands, legs, or facial muscles. People with one-sided weakness may have trouble performing everyday activities such as eating, cooking, dressing, and using the bathroom. Rehabilitation treatments, exercises at home, and assistive devices can help with mobility and recovery. Historically, such treatments and devices were developed within the fields of medicine and biomedical engineering. However, innovators outside of the traditional medical device community, such as Industrial Designers, have recently brought their knowledge and expertise to assistive technologies. Primary and secondary research was done in three parts. The primary research collected data by talking with several occupational therapists currently attending to stroke patients, and surveys were given to patients with hemiparesis and hemiplegia. The secondary research collected data through observation and testing of products currently marketed for single-handed people. Modern kitchenware available in the market for people with an acquired brain injury has deficiencies in both aesthetic and functional values. Object design for people with hemiparesis or hemiplegia has not been meaningfully explored. Most cookware is designed for use with two hands and possesses little room for adaptation to the needs of one-handed individuals. This project focuses on the design and development of two kitchenware devices. These devices assist hemiparetics with different cooking-related tasks such as holding, grasping, cutting, slicing, chopping, grating, and other essential activities. These intentionally designed objects will improve the quality of life of hemiparetics by enabling greater independence and providing an enhanced ability for precision tasks in a cooking environment.Keywords: assistive technologies, hemiparetics, industrial design, kitchenware
Procedia PDF Downloads 105
2559 Implementation of Research Papers and Industry Related Experiments by Undergraduate Students in the Field of Automation
Authors: Veena N. Hegde, S. R. Desai
Abstract:
Motivating a heterogeneous group of students towards engagement in research-related activities is a challenging task in engineering education. An effort is being made at the Department of Electronics and Instrumentation Engineering, where two courses are taken up on a pilot basis to kindle research interest in students at the undergraduate level. The courses, namely algorithm and system design (ASD) and automation in process control (APC), were selected for experimentation. The task is being accomplished by providing scope for implementation of research papers and by having student teams propose solutions to current industrial problems. The course instructors proposed an alternative assessment tool for evaluating the undergraduate students that involves activities beyond the curriculum. The method was tested for the aforementioned two courses in a particular academic year, and as per the observations, there is a considerable improvement in student engagement with research in the subsequent years of the undergraduate course. Third-year student groups were asked to read and implement the research papers, and they were also instructed to develop simulation modules for certain processes aimed at automation. The target audience of students was common to both courses, with a strength of 30. Around 50% of the successful students were given continued tasks in the subsequent two semesters; the 15 students who continued from the sixth semester were able to follow the research methodology well in the seventh and eighth semesters. Further, around 30% of these 15 students went on to carry out project work with a research component and were successful in producing four conference papers. The methodology adopted is justified using a sample data set, and the outcomes are highlighted.
The quantitative and qualitative results obtained through this study prove that such practices substantially enhance learning experiences at the undergraduate level.
Keywords: industrial problems, learning experiences, research related activities, student engagement
Procedia PDF Downloads 165
2558 Dairy Wastewater Treatment by Electrochemical and Catalytic Method
Authors: Basanti Ekka, Talis Juhna
Abstract:
Dairy industrial effluents originating from typical processing activities are composed of various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because the effluents contain such broad classes of contaminants, the specifically targeted removal methods available in the literature are not viable solutions on the industrial scale. Therefore, in this ongoing research, a series of coagulation, electrochemical, and catalytic methods will be employed. Bulk coagulation and electrochemical methods can remove most of the contaminants, but some harmful chemicals may slip through; therefore, catalysts designed and synthesized for specific targets will be employed for the removal of those chemicals. In the context of Latvian dairy industries, work is presently under way on the characterization of dairy effluents by total organic carbon (TOC), Inductively Coupled Plasma Mass Spectrometry (ICP-MS)/Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), High-Performance Liquid Chromatography (HPLC), Gas Chromatography-Mass Spectrometry (GC-MS), and mass spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technologies such as electrocoagulation and electro-oxidation as a secondary treatment process. Finally, graphene oxide (GO)-based hybrid materials will be used for the post-treatment of dairy wastewater, as graphene oxide has been widely applied in fields such as environmental remediation and energy production owing to its various oxygen-containing groups. Modified GO will be used as a catalyst for the removal of the contaminants remaining after the electrochemical process.
Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide
Procedia PDF Downloads 144
2557 Re-Orienting Fashion: Fashionable Modern Muslim Women beyond Western Modernity
Authors: Amany Abdelrazek
Abstract:
Fashion is considered the main feature of modern and postmodern capitalist and consumerist society. Consumer historians maintain that fashion, namely, a sector of people embracing a prevailing clothing style for a short period, started during the Middle Ages but gained popularity later. It symbolised the transition from a medieval society with its solid fixed religious values into a modern society with its secular consumer dynamic culture. Renaissance society was a modern secular society concerning its preoccupation with daily life and changing circumstances. Yet, the late 18th-century industrial revolution revolutionised thought and ideology in Europe. The Industrial Revolution reinforced the Western belief in rationality and strengthened the position of science. In such a rational Western society, modernity, with its new ideas, came to challenge the whole idea of old fixed norms, reflecting the modern secular, rational culture and renouncing the medieval pious consumer. In modern society, supported by the industrial revolution and mass production, fashion encouraged broader sectors of society to integrate into fashion reserved for the aristocracy and royal courts. Moreover, the fashion project emphasizes the human body and its beauty, contradicting Judeo-Christian culture, which tends to abhor and criticize interest in sensuality and hedonism. In mainstream Western discourse, fashionable dress differentiates between emancipated stylish consumerist secular modern female and the assumed oppressed traditional modest religious female. Opposing this discourse, I look at the controversy over what has been called "Islamic fashion" that started during the 1980s and continued to gain popularity in contemporary Egyptian society. 
I discuss the challenges of being a fashionable and practicing Muslim female in light of two prominent models of female "Islamic fashion" in postcolonial Egypt: Jasmin Mohshen, the first hijabi model in Egypt, and Manal Rostom, the first Muslim woman to represent the Nike campaign in the Middle East. The research employs fashion and postcolonial theories to rethink current Muslim women's positions on women's emancipation, Western modernity, and the practice of faith in postcolonial Egypt. The paper argues that Muslim women's current innovative and fashionable dress can work as a counter-discourse to the Orientalist and exclusionary representation of non-Western Muslim culture as inherently inert and timeless. Furthermore, "Islamic" fashionable dress as an aesthetic medium for expressing ideas and convictions in contemporary Egypt interrogates the claim of a universal secular modernity and Western fashion theorists' reluctance to consider Islamic fashion as fashion.
Keywords: fashion, Muslim women, modernity, secularism
Procedia PDF Downloads 129
2556 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction
Authors: Ben Haines, Li Bai
Abstract:
Patch-based reconstruction methods have been, and still are, among the top-performing approaches to 3D reconstruction to date. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, though performing well under laboratory conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions, which require expensive energy-minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adoption of these methods for industrial applications, where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach that addresses these problems, utilising particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to the normalised cross-correlation. The work also aims to speed up the reconstruction using advances in GPU technologies and to remove the need for costly initialisation and expansion. Through the combination of these enhancements, the intention of this work is to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with accuracy comparable to that of the current top-performing algorithms.
Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency
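The photo-consistency score that patch-based methods optimise is typically the normalised cross-correlation (NCC) between the projections of a patch into two views. A minimal sketch of the standard NCC, in Python with NumPy (the function name and the `eps` guard for constant patches are illustrative assumptions, not the authors' adapted variant):

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalised cross-correlation between two equally sized patches.

    Returns a score in [-1, 1]; values near 1 indicate high photo-consistency.
    The eps term guards against division by zero on textureless (constant)
    patches, which is exactly where plain NCC becomes unreliable.
    """
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / (denom + eps))
```

A constant (textureless) patch has zero variance, so the score collapses to 0 regardless of the other patch; this degeneracy is what motivates the adapted correlation measure described in the abstract.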
Procedia PDF Downloads 203
2555 Simulation-Based Validation of Safe Human-Robot-Collaboration
Authors: Titanilla Komenda
Abstract:
Human-machine-collaboration defines a direct interaction between humans and machines to fulfil specific tasks. These so-called collaborative machines are used without fencing and interact with humans in predefined workspaces. Even though human-machine-collaboration enables a flexible adaptation to variable degrees of freedom, industrial applications are rarely found. The reason for this is not a lack of technical progress but rather limitations in planning processes that must ensure safety for operators. Until now, humans and machines have mainly been considered separately in the planning process, focusing on ergonomics and system performance, respectively. Within human-machine-collaboration, these aspects must not be seen in isolation from each other but rather need to be analysed in interaction. Furthermore, a simulation model is needed that can validate the system performance and ensure the safety of the operator at any given time. Following on from this, a holistic simulation model is presented, enabling a simulative representation of collaborative tasks, including both humans and machines. The presented model includes not only a geometry and a motion model of interacting humans and machines but also a numerical behaviour model of humans as well as a Boolean probabilistic sensor model. With this, error scenarios can be simulated by validating system behaviour in unplanned situations. As these models can be defined on the basis of a Failure Mode and Effects Analysis as well as error probabilities, their implementation in a collaborative model is discussed and evaluated with regard to limitations and simulation times. The functionality of the model is demonstrated on industrial applications by comparing simulation results with video data. The analysis shows the impact of considering human factors in the planning process, in contrast to meeting system performance alone.
In this sense, an optimisation function is presented that addresses the trade-off between human and machine factors and aids in a successful and safe realisation of collaborative scenarios.
Keywords: human-machine-system, human-robot-collaboration, safety, simulation
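A Boolean probabilistic sensor model of the kind described can be sketched as a Monte Carlo loop over work cycles, with the sensor's miss probability taken, for example, from an FMEA. This is a minimal illustration under assumed names and parameters, not the authors' simulation model:

```python
import random

def simulate_cycles(n_cycles, p_human_present, p_sensor_miss, seed=0):
    """Monte Carlo sketch of a Boolean probabilistic safety sensor.

    Each cycle, a human may enter the shared workspace with probability
    p_human_present; the sensor detects the human unless it fails, with
    miss probability p_sensor_miss (e.g. derived from an FMEA). The
    machine stops whenever the sensor reports a presence. Returns the
    fraction of cycles with an undetected human, i.e. an unsafe state.
    """
    rng = random.Random(seed)
    unsafe = 0
    for _ in range(n_cycles):
        human_present = rng.random() < p_human_present
        sensor_reports = human_present and rng.random() >= p_sensor_miss
        if human_present and not sensor_reports:
            unsafe += 1
    return unsafe / n_cycles
```

Running such a loop for each error scenario from the FMEA gives an estimate of how often the system reaches an unplanned, unsafe state, which is the quantity a planning process would trade off against cycle time.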
Procedia PDF Downloads 361