Search results for: interdisciplinary applications
5530 Effect of Degree of Phosphorylation on Electrospinning and In vitro Cell Behavior of Phosphorylated Polymers as Biomimetic Materials for Tissue Engineering Applications
Authors: Pallab Datta, Jyotirmoy Chatterjee, Santanu Dhara
Abstract:
Over the past few years, phosphorus-containing polymers have received widespread attention for applications such as high-performance optical fibers, flame-retardant materials, drug delivery and tissue engineering. Being pentavalent, phosphorus can exist in different chemical environments in these polymers, which increases their versatility. In human biochemistry, phosphorus-based compounds exert their functions in both soluble and insoluble forms, occurring as inorganic or as organophosphorus compounds. Specifically, in the case of biomacromolecules, phosphates are critical for the functions of DNA, ATP, phosphoproteins, phospholipids, phosphoglycans and several coenzymes. Inspired by the role of phosphorus in functional biomacromolecules, several authors have therefore designed and synthesized biomimetic materials to study macromolecular function or to serve as substitutes in clinical tissue regeneration. In addition, many regulatory signals of the body are controlled by phosphorylation of key proteins present either in the form of growth factors or matrix-bound scaffold proteins. This has inspired work on the synthesis of phospho-peptidomimetic amino acids for understanding key signaling pathways, extended to obtain molecules with potentially useful biological properties. Apart from the above applications, phosphate groups bound to polymer backbones have also been demonstrated to improve the function of osteoblast cells and augment the performance of bone grafts. Despite the advantages of phosphate grafting, however, there is limited understanding of the effect of the degree of phosphorylation on macromolecular physicochemical and/or biological properties. Such investigations are necessary to effectively translate knowledge of macromolecular biochemistry into relevant clinical products, since these properties directly influence the processability of the polymers into suitable scaffold structures and control the subsequent biological response. Amongst various techniques for the fabrication of biomimetic scaffolds, nanofibrous scaffolds fabricated by electrospinning offer special advantages in resembling the attributes of the natural extracellular matrix. Understanding changes in the physico-chemical properties of polymers as a function of phosphorylation is therefore crucial for the development of nanofiber scaffolds based on phosphorylated polymers. The aim of the present work is to investigate the effect of phosphorus grafting on the electrospinning behavior of polymers, with the goal of obtaining biomaterials for bone regeneration applications. For this purpose, phosphorylated derivatives of two polymers with widely different electrospinning behaviors were selected as starting materials. Poly(vinyl alcohol) is a conveniently electrospinnable polymer at different conditions and concentrations; electrospinning of chitosan-backbone polymers, on the other hand, has been viewed as a critical challenge. The phosphorylated derivatives of these polymers were synthesized and characterized, and the electrospinning behavior of various solutions containing these derivatives was compared with the electrospinning of pure poly(vinyl alcohol). In PVA, phosphorylation adversely impacted electrospinnability, while in NMPC, higher phosphate content widened the concentration range for nanofiber formation. Culture of MG-63 cells on the electrospun nanofibers revealed that the degree of phosphate modification of a polymer significantly improves cell adhesion or osteoblast function of cultured cells.
It is concluded that improvement of the cell response parameters of nanofiber scaffolds can be attained as a function of a controlled degree of phosphate grafting in polymeric biomaterials, with implications for bone tissue engineering applications.
Keywords: bone regeneration, chitosan, electrospinning, phosphorylation
Procedia PDF Downloads 223
5529 Digital Employment of Disabled People: Empirical Study from Shanghai
Abstract:
Across the globe, ICTs are influencing employment both as an industry that creates jobs and as a tool that empowers disabled people to access new forms of work in innovative and more flexible ways. The advancements in ICT and the number of apps and solutions that support persons with physical, cognitive and intellectual disabilities challenge traditional biased notions and offer a pathway out of traditional sheltered workshops. As the global leader in digital technology innovation, China is arguably a leader in the use of digital technology as a 'lever' in ending the economic and social marginalization of the disabled. This study investigates factors that influence adoption and use of employment-oriented ICT applications among disabled people in China and seeks to integrate three theoretical approaches: the technology acceptance model (TAM), the uses and gratifications (U&G) approach, and the social model of disability. To that end, the study used data from a self-reported survey of 214 disabled adults who have been involved in two top-down 'Internet + employment' programs promoted by the local disabled persons’ federation in Shanghai. A structural equation model employed in the study demonstrates that the use of employment-oriented ICT applications is affected by the demographic factors of gender, category of disability, education and marital status. The organizational support of local social organizations demonstrates significant effects on the motivations of disabled people. Results from the focus group interviews particularly suggested that, to maximize the positive impact of ICTs on employment, there is a significant need to build stakeholder capacity on how ICTs could benefit persons with disabilities.
Keywords: disabled people, ICTs, technology acceptance model, uses and gratifications, the social model of disability
Procedia PDF Downloads 109
5528 Data Science in Military Decision-Making: A Semi-Systematic Literature Review
Authors: H. W. Meerveld, R. H. A. Lindelauf
Abstract:
In contemporary warfare, data science is crucial for the military in achieving information superiority. Yet, to the authors’ knowledge, no extensive literature survey on data science in military decision-making has been conducted so far. In this study, 156 peer-reviewed articles were analysed through an integrative, semi-systematic literature review to gain an overview of the topic. The study examined to what extent literature is focussed on the opportunities or risks of data science in military decision-making, differentiated per level of war (i.e. strategic, operational, and tactical level). A relatively large focus on the risks of data science was observed in social science literature, implying that political and military policymakers are disproportionally influenced by a pessimistic view on the application of data science in the military domain. The perceived risks of data science are, however, hardly addressed in formal science literature. This means that the concerns on the military application of data science are not addressed to the audience that can actually develop and enhance data science models and algorithms. Cross-disciplinary research on both the opportunities and risks of military data science can address the observed research gaps. Considering the levels of war, relatively low attention for the operational level compared to the other two levels was observed, suggesting a research gap with reference to military operational data science. Opportunities for military data science mostly arise at the tactical level. On the contrary, studies examining strategic issues mostly emphasise the risks of military data science. Consequently, domain-specific requirements for military strategic data science applications are hardly expressed. Lacking such applications may ultimately lead to a suboptimal strategic decision in today’s warfare.Keywords: data science, decision-making, information superiority, literature review, military
Procedia PDF Downloads 170
5527 Surface Nanostructure Developed by Ultrasonic Shot Peening and Its Effect on Low Cycle Fatigue Life of the IN718 Superalloy
Authors: Sanjeev Kumar, Vikas Kumar
Abstract:
Inconel 718 (IN718) is a high strength nickel-based superalloy designed for high-temperature applications up to 650 °C. It is widely used in gas turbines of jet engines and related aerospace applications because of its good mechanical properties and structural stability at elevated temperatures. Because of good performance ratio and excellent process capability, this alloy has been used predominantly for aeronautic engine components like compressor disc and compressor blade. The main precipitates that contribute to high-temperature strength of IN718 are γʹ Ni₃(Al, Ti) and mainly γʹʹ (Ni₃ Nb). Various processes have been used for modification of the surface of components, such as Laser Shock Peening (LSP), Conventional Shot Peening (SP) and Ultrasonic Shot Peening (USP) to induce compressive residual stress (CRS) and development of fine-grained structure in the surface region. Surface nanostructure by ultrasonic shot peening is a novel methodology of surface modification to improve the overall performance of structural components. Surface nanostructure was developed on the peak aged IN718 superalloy using USP and its effect was studied on low cycle fatigue (LCF) life. Nanostructure of ~ 49 to 73 nm was developed in the surface region of the alloy by USP. The gage section of LCF samples was USPed for 5 minutes at a constant frequency of 20 kHz using StressVoyager to modify the surface. Strain controlled cyclic tests were performed for non-USPed and USPed samples at ±Δεt/2 from ±0.50% to ±1.0% at strain rate (ė) 1×10⁻³ s⁻¹ under reversal loading (R=‒1) at room temperature. The fatigue life of the USPed specimens was found to be more than that of the non-USPed ones. LCF life of the USPed specimen at Δεt/2=±0.50% was enhanced by more than twice of the non-USPed specimen.Keywords: IN718 superalloy, nanostructure, USP, LCF life
Procedia PDF Downloads 115
5526 pH-Responsive Carrier Based on Polymer Particle
Authors: Florin G. Borcan, Ramona C. Albulescu, Adela Chirita-Emandi
Abstract:
pH-responsive drug delivery systems are gaining more importance because these systems deliver the drug at a specific time in regards to pathophysiological necessity, resulting in improved patient therapeutic efficacy and compliance. Polyurethane materials are well-known for industrial applications (elastomers and foams used in different insulations and automotive), but they are versatile biocompatible materials with many applications in medicine, as artificial skin for the premature neonate, membrane in the hybrid artificial pancreas, prosthetic heart valves, etc. This study aimed to obtain the physico-chemical characterization of a drug delivery system based on polyurethane microparticles. The synthesis is based on a polyaddition reaction between an aqueous phase (mixture of polyethylene-glycol M=200, 1,4-butanediol and Tween® 20) and an organic phase (lysin-diisocyanate in acetone) combined with simultaneous emulsification. Different active agents (omeprazole, amoxicillin, metoclopramide) were used to verify the release profile of the macromolecular particles in different pH mediums. Zetasizer measurements were performed using an instrument based on two modules: a Vasco size analyzer and a Wallis Zeta potential analyzer (Cordouan Technol., France) in samples that were kept in various solutions with different pH and the maximum absorbance in UV-Vis spectra were collected on a UVi Line 9,400 Spectrophotometer (SI Analytics, Germany). The results of this investigation have revealed that these particles are proper for a prolonged release in gastric medium where they can assure an almost constant concentration of the active agents for 1-2 weeks, while they can be disassembled faster in a medium with neutral pHs, such as the intestinal fluid.Keywords: lysin-diisocyanate, nanostructures, polyurethane, Zetasizer
Procedia PDF Downloads 187
5525 Architectural Design Strategies and Visual Perception of Contemporary Spatial Design
Authors: Nora Geczy
Abstract:
In today’s architectural practice, during the process of designing public, educational, healthcare and cultural space, human-centered architectural designs helping spatial orientation, safe space usage and the appropriate spatial sequence of actions are gaining increasing importance. Related to the methodology of designing public buildings, several scientific experiments in spatial recognition, spatial analysis and spatial psychology with regard to the components of space producing mental and physiological effects have been going on at the Department of Architectural Design and the Interdisciplinary Student Workshop (IDM) at the Széchenyi István University, Győr since 2013. Defining the creation of preventive, anticipated spatial design and the architectural tools of spatial comfort of public buildings and their practical usability are in the limelight of our research. In the experiments applying eye-tracking cameras, we studied the way public spaces are used, especially concentrating on the characteristics of spatial behaviour, orientation, recognition, the sequence of actions, and space usage. Along with the role of mental maps, human perception, and interaction problems in public spaces (at railway stations, galleries, and educational institutions), we analyzed the spatial situations influencing psychological and ergonomic factors. We also analyzed the eye movements of the experimental subjects in dynamic situations, in spatial procession, using stairs and corridors. We monitored both the consequences and the distorting effects of the ocular dominance of the right eye on spatial orientation; we analyzed the gender-based differences of women and men’s orientation, stress-inducing spaces, spaces affecting concentration and the spatial situation influencing territorial behaviour. Based on these observations, we collected the components of creating public interior spaces, which -according to our theory- contribute to the optimal usability of public spaces. We summed up our research in criteria for design, including 10 points. Our further goals are testing design principles needed for optimizing orientation and space usage, their discussion, refinement, and practical usage.Keywords: architecture, eye-tracking, human-centered spatial design, public interior spaces, visual perception
Procedia PDF Downloads 112
5524 Increasing Holism: Qualitative, Cross-Dimensional Study of Contemporary Innovation Processes
Authors: Sampo Tukiainen, Jukka Mattila, Niina Erkama, Erkki Ormala
Abstract:
During the past decade, calls for more holistic and integrative organizational innovation research have been increasingly voiced. On the one hand, from the theoretical perspective, the reason for this has been the tendency in contemporary innovation studies to focus on disciplinary subfields, often leading to challenges in integrating theories in meaningful ways. For example, we find that during the past three decades the innovation research has evolved into an academic field consisting of several independent research streams, such as studies on organizational learning, project management, and top management teams, to name but a few. The innovation research has also proliferated according to different dimensions of innovation, such as sources, drivers, forms, and the nature of innovation. On the other hand, from the practical perspective the rationale has been the need to develop understanding of the solving of complex, interdisciplinary issues and problems in contemporary and future societies and organizations. Therefore, for advancing theorizing, as well as the practical applicability of organizational innovation research, we acknowledge the need for more integrative and holistic perspectives and approaches. We contribute to addressing this challenge by developing a ‘box transcendent’ perspective to examine interlinkages in and across four key dimensions of organizational innovation processes, which traditionally have been studied in separate research streams. Building on an in-depth, qualitative analysis of 123 interviews of CTOs (or equivalent) and CEOs in top innovative Finnish companies as well as three in-depth case studies, both as part of an EU-level interview study of more than 700 companies, we specify interlinkages in and between i) strategic management, ii) innovation management, iii) implementation and organization, and iv) commercialization, in innovation processes. We contribute to the existing innovation research in multiple ways. Firstly, we develop a cross-dimensional, ‘box transcendent’ conceptual model at the level of organizational innovation process. Secondly, this modeling enables us to extend existing theorizing by allowing us to distinguish specific cross-dimensional innovation ‘profiles’ in two different company categories: large multinational corporations and SMEs. Finally, from the more practical perspective, we consider the implications of such innovation ‘profiles’ for the societal and institutional, policy-making development.Keywords: holistic research, innovation management, innovation studies, organizational innovation
Procedia PDF Downloads 328
5523 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis
Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha
Abstract:
Face recognition systems find many applications in surveillance and human-computer interaction. As these applications are of much importance and demand higher accuracy, greater robustness in the face recognition system is expected with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters of varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class space and maximize the inter-class space. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and Weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approaches is evaluated using the AT&T database, the MIT-India face database and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier
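As a rough illustration of the pipeline sketched in this abstract, the following is a minimal, hypothetical Python sketch of a Gabor-filter-bank feature extractor followed by LDA and a k-NN classifier. It substitutes classical one-dimensional LDA for the 2D-LDA variants used in the paper, and the filter frequencies, orientations and response statistics are illustrative assumptions rather than the authors' settings.

```python
# Hypothetical sketch of a Gabor + LDA + k-NN recognition pipeline (not the
# authors' exact HGWLDA implementation). Assumes grayscale float images.
import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(img, frequencies=(0.1, 0.2, 0.3), orientations=4):
    """Convolve an image with a bank of Gabor kernels and return the
    concatenated mean/variance of each filter response."""
    feats = []
    for frequency in frequencies:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernel = np.real(gabor_kernel(frequency, theta=theta))
            response = ndimage.convolve(img, kernel, mode='wrap')
            feats.extend([response.mean(), response.var()])
    return np.array(feats)

def train_face_recognizer(train_imgs, train_labels, n_neighbors=1):
    X = np.array([gabor_features(im) for im in train_imgs])
    lda = LinearDiscriminantAnalysis()       # 1-D LDA stand-in for the 2D-LDA variants
    X_lda = lda.fit_transform(X, train_labels)
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_lda, train_labels)
    return lda, knn

def predict(lda, knn, img):
    """Project a test image into the LDA subspace and return the nearest identity."""
    return knn.predict(lda.transform(gabor_features(img).reshape(1, -1)))[0]
```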
Procedia PDF Downloads 468
5522 Providing Support for Minority LGBTQ Students: Developing a Queer Studies Course
Authors: Karen Butler
Abstract:
The LGBTQ youth of color face stigma related to both race and gender identity. Effectively dealing with racial/ethnic discrimination requires strong connections to family and one’s racial/ethnic group. However, LGBTQ youth of color seldom receive support from family, peer groups or church groups. Moreover, ethnic communities often perceive LGBTQ identities as a rejection of ethnic heritage. Thus, stigma places these young people at greater risk for substance use, violence, risky sexual behaviors, suicide, and homelessness. Offering a Queer Studies (QS) class is one way to facilitate a safer and more inclusive environment for LGBTQ students, faculty and staff. The discipline of Queer Studies encompasses theories and thinkers from numerous fields: cultural studies, gay and lesbian studies, race studies, women's studies, media, postmodernism, post-colonialism, psychoanalysis and more. We began our course development by researching existing programs and classes. Several course syllabi were examined and course materials such as readings, videos, and guest speakers were assessed for possible inclusion. We also employed informal survey methods with students and faculty in order to gauge interest in the course. We then developed a sample course syllabus and began the process of new course approval. Feedback thus far indicates that students of various sexual orientations and gender identities are interested in the course and understand the need to offer it; faculty in Psychology, Social Work, and Interdisciplinary Studies are interested in cross-listing the course; library staff is willing to assist with course material acquisition, and the administration is supportive. The purpose of this session is to 1) explore the various health and wellness issues facing LGBTQ students of color and 2) share our experience of developing a QS course in health education in order to address these needs. This process, from initial recognition of the need to a course offering, will be described and discussed in the hopes that participants will increase their awareness of the issues. A QS course would be an appropriate requirement for any number of majors as well as an elective for any major.Keywords: black colleges, health education, LGBTQ, queer studies
Procedia PDF Downloads 144
5521 Cytotoxic and Biocompatible Evaluation of Silica Coated Silver Nanoparticle Against NIH-3T3 Cells
Authors: Chen-En Lin, Lih-Rou Rau, Jiunn-Woei Liaw, Shiao-Wen Tsai
Abstract:
The unique optical properties of plasmon-resonant metallic particles have attracted considerable attention for applications in physics, chemistry and biology. The Metal-Enhanced Fluorescence (MEF) effect is one such application: fluorescence intensity can be quenched or enhanced depending on the distance between fluorophores and the metal nanoparticles. Silver nanoparticles have been used widely in antibacterial studies. However, the major limitation of silver nanoparticles (AgNPs) in biomedical applications is their well-known cytotoxicity, and numerous studies have been devoted to overcoming this disadvantage. The aim of this study is to evaluate the cytotoxicity and biocompatibility of silica-coated AgNPs against NIH-3T3 cells. The results showed that NIH-3T3 cells started to detach, shrink, become rounded and finally become irregular in shape after 24 h of exposure to 10 µg/ml AgNPs. Moreover, compared with untreated cells, cell viability significantly decreased to 60% and 40% in cells exposed to 10 µg/ml and 20 µg/ml AgNPs, respectively. This result is consistent with previously reported findings that AgNP-induced cytotoxicity is concentration dependent. However, the morphology and viability of cells exposed to 20 µg/ml of silica-coated AgNPs remained similar to the control group. We further used a dark-field hyperspectral imaging system to analyze the optical properties of the intracellular nanoparticles. The red shift of the surface plasmon resonance band of the enclosed AgNPs further confirms agglomeration of the AgNPs rather than their dispersion in the cytoplasm. In conclusion, the study demonstrated that silica-coated AgNPs showed good biocompatibility and significantly lower cytotoxicity compared with bare AgNPs.
Keywords: silver nanoparticles, silica, cell viability, morphology
Procedia PDF Downloads 396
5520 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System
Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes
Abstract:
The burgeoning global ownership of personal vehicles has placed a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extend to PMS. Additionally, radar sensors play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development and testing and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. Various parking scenarios were meticulously designed, involving manual measurement of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios, comprising a total of 455,400 parking spots. This extensive dataset encompasses radar configuration details, ground-truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an impressive accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models
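For illustration only, here is a minimal Python sketch of the occupancy-detection idea described above: cluster the raw radar detections falling inside a parking spot with DBSCAN, derive simple per-spot features, and classify vacant versus occupied with an SVM. The detection format, feature choices and thresholds are assumptions for the sketch, not the monoDrive/TI AWR1843 pipeline itself.

```python
# Hypothetical per-spot occupancy classifier built from radar point detections.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def spot_features(detections):
    """detections: (N, 5) array of [x, y, z, rcs, velocity] points inside one
    parking spot; returns a fixed-length feature vector."""
    if len(detections) == 0:
        return np.zeros(6)
    labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(detections[:, :3])
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    return np.array([
        len(detections),                  # number of returns in the spot
        n_clusters,                       # spatial clusters found by DBSCAN
        detections[:, 3].mean(),          # mean radar cross-section
        detections[:, 3].max(),           # peak radar cross-section
        detections[:, 2].mean(),          # mean height of returns
        np.abs(detections[:, 4]).mean(),  # mean absolute radial velocity
    ])

def train_occupancy_model(spot_detection_lists, labels):
    """labels: 0 = vacant, 1 = occupied, one entry per parking spot."""
    X = np.array([spot_features(d) for d in spot_detection_lists])
    scaler = StandardScaler().fit(X)
    clf = SVC(kernel='rbf').fit(scaler.transform(X), labels)
    return scaler, clf
```

A model trained this way on simulated spots could then be applied unchanged to features computed from real detections, which is the transferability question the study evaluates.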
Procedia PDF Downloads 83
5519 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images
Authors: Mehrnoosh Omati, Mahmod Reza Sahebi
Abstract:
Information on land use/land cover change plays an essential role in environmental assessment, planning and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented using an integration of the marker-controlled watershed algorithm and a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object level. Experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. The proposed method can also correctly distinguish homogeneous image parcels.
Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images
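For orientation, here is a heavily simplified Python sketch of the object-based idea: a marker-controlled watershed segmentation followed by per-segment statistics between two acquisition dates that a classifier could use to flag changed parcels. The coupled-MRF refinement and PolSAR-specific speckle handling described above are omitted, and the marker thresholds are arbitrary assumptions.

```python
# Simplified stand-in for the segmentation/object-classification steps; not the
# paper's coupled-MRF algorithm.
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment(img):
    """Marker-controlled watershed on an intensity image."""
    gradient = sobel(img)
    markers = np.zeros_like(img, dtype=int)
    markers[img < np.quantile(img, 0.2)] = 1   # dark markers (assumed threshold)
    markers[img > np.quantile(img, 0.8)] = 2   # bright markers (assumed threshold)
    labeled_markers, _ = ndimage.label(markers)
    return watershed(gradient, labeled_markers)

def object_change_features(labels, img_t1, img_t2):
    """Mean backscatter difference per segment between two dates, to be fed to
    a changed/unchanged classifier."""
    feats = []
    for region in np.unique(labels):
        mask = labels == region
        feats.append(img_t2[mask].mean() - img_t1[mask].mean())
    return np.array(feats)
```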
Procedia PDF Downloads 219
5518 The Basin Management Methodology for Integrated Water Resources Management and Development
Authors: Julio Jesus Salazar, Max Jesus De Lama
Abstract:
The challenges of water management are aggravated by global change, which implies high complexity and associated uncertainty; water management is difficult because water networks cross domains (natural, societal, and political), scales (space, time, jurisdictional, institutional, knowledge, etc.) and levels (area: patches to global; knowledge: a specific case to generalized principles). In this context, we need to apply natural and non-natural measures to manage water and soil. The Basin Management Methodology considers multifunctional measures of natural water retention and erosion control and soil formation to protect water resources and address the challenges related to the recovery or conservation of the ecosystem, as well as natural characteristics of water bodies, to improve the quantitative status of water bodies and reduce vulnerability to floods and droughts. This method of water management focuses on the positive impacts of the chemical and ecological status of water bodies, restoration of the functioning of the ecosystem and its natural services; thus, contributing to both adaptation and mitigation of climate change. This methodology was applied in 7 interventions in the sub-basin of the Shullcas River in Huancayo-Junín-Peru, obtaining great benefits in the framework of the participation of alliances of actors and integrated planning scenarios. To implement the methodology in the sub-basin of the Shullcas River, a process called Climate Smart Territories (CST) was used; with which the variables were characterized in a highly complex space. The diagnosis was then worked using risk management and adaptation to climate change. Finally, it was concluded with the selection of alternatives and projects of this type. Therefore, the CST approach and process face the challenges of climate change through integrated, systematic, interdisciplinary and collective responses at different scales that fit the needs of ecosystems and their services that are vital to human well-being. This methodology is now replicated at the level of the Mantaro river basin, improving with other initiatives that lead to the model of a resilient basin.Keywords: climate-smart territories, climate change, ecosystem services, natural measures, Climate Smart Territories (CST) approach
Procedia PDF Downloads 154
5517 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights
Authors: Julian Wise
Abstract:
Newcrest Mining is one of the world’s top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 organization Insight Enterprises, to standardize machine learning solutions that process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the use of cloud software architecture and edge computing, these technological developments enable standardized machine learning processes to inform the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing and collectively stored within a data lake. The digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe for the purposes of improved worker safety and production efficiency through big data applications.
Keywords: mineral technology, big data, machine learning operations, data lake
Procedia PDF Downloads 113
5516 Nanocomposites Based Micro/Nano Electro-Mechanical Systems for Energy Harvesters and Photodetectors
Authors: Radhamanohar Aepuru, R. V. Mangalaraja
Abstract:
Flexible electronic devices have drawn potential interest and provide significant new insights to develop energy conversion and storage devices such as photodetectors and nanogenerators. Recently, self-powered electronic systems have captivated huge attention for next generation MEMS/NEMS devices that can operate independently by generating built-in field without any need of external bias voltage and have wide variety of applications in telecommunication, imaging, environmental and defence sectors. The basic physical process involved in these devices are charge generation, separation, and charge flow across the electrodes. Many inorganic nanostructures have been exploring to fabricate various optoelectronic and electromechanical devices. However, the interaction of nanostructures and their excited charge carrier dynamics, photoinduced charge separation, and fast carrier mobility are yet to be studied. The proposed research is to address one such area and to realize the self-powered electronic devices. In the present work, nanocomposites of inorganic nanostructures based on ZnO, metal halide perovskites; and polyvinylidene fluoride (PVDF) based nanocomposites are realized for photodetectors and nanogenerators. The characterization of the inorganic nanostructures is carried out through steady state optical absorption and luminescence spectroscopies as well as X-ray diffraction and high-resolution transmission electron microscopy (TEM) studies. The detailed carrier dynamics is investigated using various spectroscopic techniques. The developed composite nanostructures exhibit significant optical and electrical properties, which have wide potential applications in various MEMS/NEMS devices such as photodetectors and nanogenerators.Keywords: dielectrics, nanocomposites, nanogenerators, photodetectors
Procedia PDF Downloads 132
5515 The Effect of Technology on International Marketing Trading Researches and Analysis
Authors: Omil Nady Mahrous Maximous
Abstract:
The article deals with the use of modern information technologies to achieve pro-ecological marketing goals in company-customer relationships. The purpose of the article is to show the possibilities of implementing modern information technologies. In B2C relationships, marketing departments face challenges stemming from the need to quickly segment customers and from the current fragmentation of data across many systems, which significantly hinders the achievement of marketing goals. Thus, the article proposes the use of modern IT solutions in the marketing activities of companies, taking into account their environmental goals. As a result, its importance for the economic and social development of emerging countries has increased. While traditional companies emphasize profit maximization as a core business principle, social enterprises must solve social problems at the expense of profit. This rationale gives social enterprises an edge over traditional businesses by meeting the needs of those at the bottom of the pyramid. This also represents a major challenge for social business, since a social business acts on the one hand for the benefit of the public and on the other strives for financial stability. Otherwise, the company is unlikely to be fired from the company. Cultures play a role in business communication and research. Using the example of language in international relations, the article presents the problem of the articulation of research cultures in management and linguistics and of cultures as such. After an overview of current research on language in international relations, this article presents an approach to communication in the international economy from a linguistic point of view and tries to explain the problems of communication in business starting from linguistic research. This constitutes a step towards interdisciplinary research that brings together the fields of management and linguistics.
Keywords: international marketing, marketing mix, marketing research, small and medium-sized enterprises, strategic marketing, B2B digital marketing strategy, digital marketing, digital marketing maturity model, SWOT analysis, consumer behavior, experience, experience marketing, marketing employee organizational performance, internal marketing, internal customer, direct marketing, mobile phones, mobile marketing, SMS advertising
Procedia PDF Downloads 48
5514 Role and Impact of Artificial Intelligence in Sales and Distribution Management
Authors: Kiran Nair, Jincy George, Suhaib Anagreh
Abstract:
Artificial intelligence (AI) in a marketing context is a form of a deterministic tool designed to optimize and enhance marketing tasks, research tools, and techniques. It is on the verge of transforming marketing roles and revolutionize the entire industry. This paper aims to explore the current dissemination of the application of artificial intelligence (AI) in the marketing mix, reviewing the scope and application of AI in various aspects of sales and distribution management. The paper also aims at identifying the areas of the strong impact of AI in factors of sales and distribution management such as distribution channel, purchase automation, customer service, merchandising automation, and shopping experiences. This is a qualitative research paper that aims to examine the impact of AI on sales and distribution management of 30 multinational brands in six different industries, namely: airline; automobile; banking and insurance; education; information technology; retail and telecom. Primary data is collected by means of interviews and questionnaires from a sample of 100 marketing managers that have been selected using convenient sampling method. The data is then analyzed using descriptive statistics, correlation analysis and multiple regression analysis. The study reveals that AI applications are extensively used in sales and distribution management, with a strong impact on various factors such as identifying new distribution channels, automation in merchandising, customer service, and purchase automation as well as sales processes. International brands have already integrated AI extensively in their day-to-day operations for better efficiency and improved market share while others are investing heavily in new AI applications for gaining competitive advantage.Keywords: artificial intelligence, sales and distribution, marketing mix, distribution channel, customer service
Procedia PDF Downloads 158
5513 Securing Web Servers by the Intrusion Detection System (IDS)
Authors: Yousef Farhaoui
Abstract:
An IDS is a tool used to improve the level of security. In this paper, we present different IDS architectures. We also discuss measures that define the effectiveness of an IDS, as well as very recent work on the standardization and homogenization of IDS. Finally, we propose a new IDS model called BiIDS (IDS based on the two principles of detection) for securing web servers and applications.
Keywords: intrusion detection, architectures, characteristic, tools, security, web server
Procedia PDF Downloads 420
5512 In-Vitro Dextran Synthesis and Characterization of an Intracellular Glucosyltransferase from Leuconostoc Mesenteroides AA1
Authors: Afsheen Aman, Shah Ali Ul Qader
Abstract:
Dextransucrase [EC 2.4.1.5] is a glucosyltransferase that catalyzes the biosynthesis of a natural biopolymer called dextran. It can catalyze the transfer of D-glucopyranosyl residues from sucrose to the main chain of dextran. This unique biopolymer has multiple applications in several industries, and the key uses of dextran depend on its molecular weight and the type of branching. Extracellular dextransucrase from Leuconostoc mesenteroides is the most extensively studied and characterized. Limited data are available regarding cell-bound or intracellular dextransucrase and on the characterization of dextran produced by in vitro reaction of intracellular dextransucrase. L. mesenteroides AA1 is reported to produce extracellular dextransucrase that catalyzes the biosynthesis of a high molecular weight dextran with only α-(1→6) linkages. The current study deals with the characterization of an intracellular dextransucrase and the in vitro biosynthesis of low molecular weight dextran from L. mesenteroides AA1. Intracellular dextransucrase was extracted from the cytoplasm and purified to homogeneity for characterization. The kinetic constants, molecular weight and N-terminal sequence of the intracellular dextransucrase reveal unique variation from the previously reported extracellular dextransucrase of the same strain. The in vitro synthesized biopolymer was characterized using NMR spectroscopic techniques. The intracellular dextransucrase exhibited Vmax and Km values of 130.8 DSU ml-1 hr-1 and 221.3 mM, respectively. Optimum catalytic activity was detected at 35°C in 0.15 M citrate phosphate buffer (pH 5.5) within 5 minutes. The molecular mass of the purified intracellular dextransucrase is approximately 220.0 kDa on SDS-PAGE. The N-terminal sequence of the intracellular enzyme is GLPGYFGVN, which showed no homology with the previously reported sequence for the extracellular dextransucrase. This intracellular dextransucrase is capable of in vitro synthesis of dextran under specific conditions, and the biopolymer can be hydrolyzed into different molecular weight fractions for various applications.
Keywords: characterization, dextran, dextransucrase, Leuconostoc mesenteroides
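As a quick worked example, the kinetic constants reported above can be plugged into the Michaelis-Menten equation to estimate reaction velocity at a given sucrose concentration; the substrate concentration used below is purely illustrative.

```python
# Michaelis-Menten rate using the Vmax and Km reported for the intracellular enzyme.
def michaelis_menten(s_mM, vmax=130.8, km=221.3):
    """vmax in DSU ml^-1 hr^-1, km in mM, s_mM = sucrose concentration in mM."""
    return vmax * s_mM / (km + s_mM)

print(michaelis_menten(100.0))  # ~40.7 DSU ml^-1 hr^-1 at an assumed 100 mM sucrose
```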
Procedia PDF Downloads 398
5511 Compliance of Dialysis Patients with Nutrition Guidelines: Insights from a Questionnaire
Authors: Zeiler M., Stadler D., Schmaderer C.
Abstract:
Over the years of dialysis treatment, most patients experience significant weight loss. The primary emphasis in earlier research was the underlying mechanism of protein energy wasting and the subsequent malnutrition inflammation syndrome. In the interest to provide an effective and rapid solution for the patients, the aim of this study is identifying individual influences of their assumed reduced dietary intake, such as nausea, appetite loss and taste changes, and to determine whether the patients adhere to their nutrition guidelines. A prospective, controlled study with 38 end-stage renal disease patients was performed using a questionnaire to reflect their diet within the last 12 months. Thereby, the daily intake for the most important macro-and micronutrients was calculated to be compared with the individual KDQOI-guideline value, as well as controls matched in age and gender. The majority of the study population did not report symptoms commonly associated with dialysis, such as nausea or inappetence, and denied any change in dietary behavior since receiving renal replacement therapy. The patients’ daily intake of energy (3080kcal ± 1266) and protein (89,9g [53,4-142,0]) did not differ significantly from the controls (energy intake: 3233kcal ± 1046, p=0,597; protein intake: 103,7g [90,1-125,5], p=0,120). The average difference to the individual calculated KDQOI-guideline was +176,0kcal ± 1156 (p=0,357) for energy intake and -1,75g ± 45,9 (p=0,491) for protein intake. However, there was an observed imbalance in the distribution of macronutrients, with a preference for fats over proteins. The patients’ daily intake of sodium (5,4g [ 2,95-10,1]) was higher than in the controls (4,1g [2,04-5,99], p= 0,058) whereas both values for potassium (3,7g ± 1,84) and phosphorous (1,79g ± 0,91) went significantly below the controls’ values (potassium intake: 4,89g ± 1,74, p=0,014; phosphorous intake: 2,04g ± 0,64, p=0,038). Thus, the values exceeded the calculated KDQOI-recommendation by + 3,3g [0,63-7,90] (p<0,001) for sodium, +1,49g ± 1,84 (p<0,001) for potassium and +0,89g ± 0,91 (p<0,001) for phosphorous. Contrary to the assumption, the patients did not under-eat. Nevertheless, their diets did not align with the recommended values. These findings highlight the need for intervention and education among patients and that regular dietary monitoring could prevent unhealthy nutrition habits. The elaboration of individual references instead of standardized guidelines could increase the compliance to the advised diet so that interdisciplinary comorbidities do not develop or worsen.Keywords: compliance, dialysis, end-stage renal disease, KDQOI, malnutrition, nutrition guidelines, questionnaire, salt intake
Procedia PDF Downloads 69
5510 Heterologous Expression of Clostridium thermocellum Proteins and Assembly of Cellulosomes 'in vitro' for Biotechnology Applications
Authors: Jessica Pinheiro Silva, Brenda Rabello De Camargo, Daniel Gusmao De Morais, Eliane Ferreira Noronha
Abstract:
The utilization of lignocellulosic biomass as a source of polysaccharides for industrial applications requires an arsenal of enzymes with different modes of action able to hydrolyze its complex and recalcitrant structure. Clostridium thermocellum is a gram-positive, thermophilic bacterium producing lignocellulose-hydrolyzing enzymes in the form of a multi-enzyme complex termed the cellulosome. This complex has several hydrolytic enzymes attached to a large, enzymatically inactive protein known as the cellulosome-integrating protein (CipA), which serves as a scaffolding protein for the complex. This attachment occurs through specific interactions between cohesin modules of CipA and dockerin modules in the enzymes. The present work aims to construct cellulosomes in vitro with the structural protein CipA, a xylanase called Xyn10D and a cellulase called CelJ from C. thermocellum. A mini-scaffoldin containing two cohesin modules was constructed from modules derived from CipA; this was cloned and expressed in Escherichia coli. The other two genes were cloned under the control of the alcohol oxidase 1 promoter (AOX1) in the vector pPIC9 and integrated into the genome of the methylotrophic yeast Pichia pastoris GS115. Purification of each protein is being carried out, and the enzymatic activity of the assembled cellulosome will be evaluated in further studies. The cellulosome built in vitro, composed of mini-CipA, CelJ and Xyn10D, could be of great interest for industrial processes involving the degradation of plant biomass.
Keywords: cellulosome, CipA, Clostridium thermocellum, cohesin, dockerin, yeast
Procedia PDF Downloads 235
5509 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we also observe in Large Language Models. Notable attempts such as TransE, TransR and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for a linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures and demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
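For context on the translational baselines (TransE/TransR) mentioned above, here is a tiny, self-contained NumPy sketch of the TransE scoring idea: entities and relations are embedded so that head + relation ≈ tail, and triples are ranked by the distance ||h + r - t||. The dimensions, margin and random initialization are arbitrary assumptions, not the paper's method.

```python
# Minimal TransE-style scoring and margin ranking loss (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def init_embeddings(n_entities, n_relations, dim=50):
    ent = rng.normal(size=(n_entities, dim))
    rel = rng.normal(size=(n_relations, dim))
    ent /= np.linalg.norm(ent, axis=1, keepdims=True)  # L2-normalize entity vectors
    return ent, rel

def score(ent, rel, h, r, t):
    """Lower score = more plausible triple (h, r, t)."""
    return np.linalg.norm(ent[h] + rel[r] - ent[t])

def margin_loss(ent, rel, pos, neg, margin=1.0):
    """Margin ranking loss over a true triple and a corrupted (negative) triple."""
    return max(0.0, margin + score(ent, rel, *pos) - score(ent, rel, *neg))
```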
Procedia PDF Downloads 69
5508 Ecosystem Model for Environmental Applications
Authors: Cristina Schreiner, Romeo Ciobanu, Marius Pislaru
Abstract:
This paper aims to build a system based on fuzzy models that can be implemented in the assessment of ecological systems, to determine appropriate methods of action for reducing adverse effects on the environment and, implicitly, on the population. The proposed model provides a new perspective for environmental assessment, and it can be used as a practical instrument for decision-making.
Keywords: ecosystem model, environmental security, fuzzy logic, sustainability of habitable regions
Procedia PDF Downloads 422
5507 Harnessing the Power of Artificial Intelligence: Advancements and Ethical Considerations in Psychological and Behavioral Sciences
Authors: Nayer Mofidtabatabaei
Abstract:
Advancements in artificial intelligence (AI) have transformed various fields, including psychology and behavioral sciences. This paper explores the diverse ways in which AI is applied to enhance research, diagnosis, therapy, and the understanding of human behavior and mental health. We discuss the potential benefits and challenges associated with AI in these fields, emphasizing the ethical considerations and the need for collaboration between AI researchers and psychological and behavioral science experts. Artificial intelligence has gained prominence in recent years, revolutionizing multiple industries, including healthcare, finance, and entertainment. One area where AI holds significant promise is the field of psychology and behavioral sciences. AI applications in this domain range from improving the accuracy of diagnosis and treatment to understanding complex human behavior patterns. This paper aims to provide an overview of the various AI applications in psychological and behavioral sciences, highlighting their potential impact, challenges, and ethical considerations. Mental health diagnosis: AI-driven tools, such as natural language processing and sentiment analysis, can analyze large datasets of text and speech to detect signs of mental health issues; for example, chatbots and virtual therapists can provide initial assessments and support to individuals suffering from anxiety or depression. Autism spectrum disorder (ASD) diagnosis: AI algorithms can assist in early ASD diagnosis by analyzing video and audio recordings of children's behavior; these tools help identify subtle behavioral markers, enabling earlier intervention and treatment. Personalized therapy: AI-based therapy platforms use personalized algorithms to adapt therapeutic interventions based on an individual's progress and needs; these platforms can provide continuous support and resources for patients, making therapy more accessible and effective. Virtual reality therapy: virtual reality (VR) combined with AI can create immersive therapeutic environments for treating phobias, PTSD, and social anxiety, with AI algorithms adapting VR scenarios in real time to suit the patient's progress and comfort level. Data analysis: AI aids researchers in processing vast amounts of data, including survey responses, brain imaging, and genetic information. Privacy concerns: collecting and analyzing personal data for AI applications in psychology and behavioral sciences raises significant privacy concerns, and researchers must ensure the ethical use and protection of sensitive information. Bias and fairness: AI algorithms can inherit biases present in training data, potentially leading to biased assessments or recommendations; efforts to mitigate bias and ensure fairness in AI applications are crucial. Transparency and accountability: AI-driven decisions in psychology and behavioral sciences should be transparent and subject to accountability, and patients and practitioners should understand how AI algorithms operate and make decisions. AI applications in psychological and behavioral sciences have the potential to transform the field by enhancing diagnosis, therapy, and research. However, these advancements come with ethical challenges that require careful consideration. Collaboration between AI researchers and psychological and behavioral science experts is essential to harness AI's full potential while upholding ethical standards and privacy protections. The future of AI in psychology and behavioral sciences holds great promise, but it must be navigated with caution and responsibility.
Keywords: artificial intelligence, psychological sciences, behavioral sciences, diagnosis and therapy, ethical considerations
Procedia PDF Downloads 73
5506 Modeling Route Selection Using Real-Time Information and GPS Data
Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento
Abstract:
Understanding the behavior of individuals and the different human factors that influence choice when facing a complex system such as transportation is one of the most complicated aspects of route choice modeling, since various behaviors and driving modes directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual transport service applications, among others; this has generated interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision making. This paper proposes and estimates a hybrid discrete choice model that integrates route choice models and latent variables, based on observation along the route of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving mode. The set of choice options includes the routes generated by the individual transport service applications versus the driver's own choice. The hybrid model consists of measurement equations that relate latent variables to measurement indicators and utilities to choice indicators, along with structural equations that link the observable characteristics of drivers to latent variables and explanatory variables to utilities.
Keywords: behavior choice model, human factors, hybrid model, real time data
Procedia PDF Downloads 155
5505 Approximate-Based Estimation of Single Event Upset Effect on Statistic Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since a design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on the electronics used in space are much higher than on Earth. Thus, developing fault-tolerant techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly required for compromising between the overhead introduced by fault tolerance techniques and system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. In contrast to conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered, depending on the user-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on the Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented on the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is considered acceptable, the counted number of failures is reduced by 41% to 59% compared with the number of failures counted by the conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case that 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNN).
Keywords: fault tolerance, FPGA, single event upset, approximate computing
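A minimal sketch of the ACMVF computation as described: compare each SEU-injected output against the expected value and count a failure only when the deviation exceeds the chosen threshold margin. The sample outputs and the 10% margin below are illustrative assumptions, not data from the paper.

```python
# Approximate-based Configuration Memory Vulnerability Factor (ACMVF) sketch.
def acmvf(observed_outputs, expected_output, threshold_fraction):
    """observed_outputs: one circuit output value per SEU injection.
    threshold_fraction: e.g. 0.10 means deviations up to 10% are tolerated."""
    margin = abs(expected_output) * threshold_fraction
    failures = sum(1 for y in observed_outputs if abs(y - expected_output) > margin)
    return failures / len(observed_outputs)

# Example: for an adder expected to output 1000, a 10% margin tolerates small
# SEU-induced errors; only the two large deviations below count as failures.
outputs = [1000, 1003, 1500, 998, 400, 1001]
print(acmvf(outputs, expected_output=1000, threshold_fraction=0.10))  # 2/6 ~= 0.33
```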
Procedia PDF Downloads 199
5504 Expansion of Cord Blood Cells Using a Mix of Neurotrophic Factors
Authors: Francisco Dos Santos, Diogo Fonseca-Pereira, Sílvia Arroz-Madeira, Henrique Veiga-Fernandes
Abstract:
Haematopoiesis is a developmental process that generates all blood cell lineages in health and disease. It relies on quiescent haematopoietic stem cells (HSCs) that are able to differentiate, self-renew and expand upon physiological demand. HSCs are of great interest in regenerative medicine, including for haematological malignancies, immunodeficiencies and metabolic disorders. However, the limited yield from existing HSC sources drives the global need for reliable techniques to expand harvested HSCs in high quality and sufficient quantity. With the extensive use of cord blood progenitors for clinical applications, there is a demand for a safe and efficient expansion protocol that is able to overcome the limitations of cord blood as a source of HSCs. StemCell2MAX™ developed a technology that enhances the survival, proliferation and transplantation efficiency of HSCs, leading the way to a more widespread use of HSCs for research and clinical purposes. StemCell2MAX™ MIX is a solution that improves HSC expansion by up to 20x, while preserving stemness, when compared to the state of the art. In a recent study by a leading cord blood bank, StemCell2MAX™ MIX was shown to support a selective 100-fold expansion of CD34+ Haematopoietic Stem and Progenitor Cells (compared to a 10-fold expansion of Total Nucleated Cells), while maintaining their multipotent differentiation potential as assessed by CFU assays. The technology developed by StemCell2MAX™ opens new horizons for the use of expanded haematopoietic progenitors for both research purposes (including quality and functional assays in cord blood banks) and clinical applications.
Keywords: cord blood, expansion, hematopoietic stem cell, transplantation
Procedia PDF Downloads 269
5503 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing
Authors: Kedar Hardikar, Joe Varghese
Abstract:
Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components by forming a “Faraday cage.” The reliability requirements for the conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the conductive adhesive. The degradation of the adhesive depends on the highly varied use cases. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature and high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures to field performance, systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models, such as the Arrhenius model, are based on rate kinetics and typically rely on an assumption of linear degradation in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in an electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time dependence. It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate the nonlinear degradation of the conductive adhesive into the development of an acceleration factor. This method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines on the suitability of the linear degradation approximation for such varied applications.
Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model
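To illustrate why the nonlinearity matters for the acceleration factor (a sketch under the square-root-of-time assumption stated above, not the authors' model or data), suppose the measured degradation follows D(t) = k·√t in both test and field conditions. The time to reach a failure threshold D_f is then t_f = (D_f/k)², so the time-to-failure acceleration factor is the square of the degradation-rate ratio, whereas a linear-in-time assumption would use the rate ratio itself.

```python
def time_to_failure(d_fail, k, exponent=0.5):
    """Time to reach threshold d_fail when degradation D(t) = k * t**exponent."""
    return (d_fail / k) ** (1.0 / exponent)

def acceleration_factor(k_test, k_field, exponent=0.5):
    """Time-based acceleration factor AF = t_field / t_test for D(t) = k * t**exponent.

    exponent = 0.5 corresponds to the sqrt(t) (diffusion-limited) degradation
    discussed above; exponent = 1.0 recovers the conventional linear assumption.
    """
    return (k_test / k_field) ** (1.0 / exponent)

# Illustrative numbers only: a 4x higher degradation rate constant in the test
# chamber implies a 16x acceleration in time-to-failure under sqrt(t) degradation,
# versus the 4x that a linear-degradation model would predict.
print(acceleration_factor(4.0, 1.0))                 # 16.0
print(acceleration_factor(4.0, 1.0, exponent=1.0))   # 4.0
```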
Procedia PDF Downloads 136
5502 The Use of Polar Substituent Groups for Promoting Azo Disperse Dye Solubility and Reactivity for More Economical and Environmentally Benign Applications: A Computational Study
Authors: Olaide O. Wahab, Lukman O. Olasunkanmi, Krishna K. Govender, Penny P. Govender
Abstract:
The economic and environmental challenges associated with azo disperse dye applications are due to poor aqueous solubility and low degradation tendency, which stem from low chemical reactivity. The poor aqueous solubility of this group of dyes necessitates the use of dispersing agents, which increase operational costs and release toxic chemical components into the environment, while their low degradation tendency is due to the high stability of the azo functional group (-N=N-) in their chemical structures. To address these problems, this study theoretically investigated the effects of some polar substituents on the aqueous solubility and reactivity properties of Disperse Yellow (DY) 119 dye, with a view to developing new azo disperse dyes with improved solubility in water and a higher degradation tendency in the environment, using the DMol³ computational code. All calculations were carried out using the Becke and Perdew version of the Vosko-Wilk-Nusair functional (VWN-BP) within density functional theory, in conjunction with the double numerical basis set containing polarization functions (DNP). The aqueous solubility determination was achieved with the conductor-like screening model for realistic solvation (COSMO-RS) in conjunction with a known empirical solubility model, while the reactivity was predicted using frontier molecular orbital calculations. Most of the new derivatives studied showed evidence of higher aqueous solubility and degradation tendency compared to the parent dye. We conclude that these derivatives are promising alternative dyes for more economical and environmentally benign dyeing practice and therefore recommend them for synthesis.
Keywords: aqueous solubility, azo disperse dye, degradation, disperse yellow 119, DMol³, reactivity
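The frontier molecular orbital reactivity prediction mentioned above typically reduces to a handful of global reactivity descriptors computed from the HOMO and LUMO energies; the sketch below uses the standard conceptual-DFT definitions, and the orbital energies shown are placeholders rather than values from the study.

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Global reactivity descriptors from frontier orbital energies (in eV).

    A smaller HOMO-LUMO gap and hardness, and a larger electrophilicity
    index, are commonly interpreted as indicating higher chemical
    reactivity (and hence, here, a higher degradation tendency).
    """
    gap = e_lumo - e_homo                  # HOMO-LUMO energy gap
    chi = -(e_homo + e_lumo) / 2.0         # electronegativity (negative chemical potential)
    eta = gap / 2.0                        # chemical hardness
    softness = 1.0 / (2.0 * eta)           # global softness
    omega = chi ** 2 / (2.0 * eta)         # electrophilicity index
    return {"gap_eV": gap, "hardness_eV": eta, "softness_per_eV": softness,
            "electronegativity_eV": chi, "electrophilicity_eV": omega}

# Placeholder orbital energies, not the DY 119 results:
print(reactivity_descriptors(e_homo=-5.6, e_lumo=-2.9))
```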
Procedia PDF Downloads 206
5501 Experimental Investigation on the Fire Performance of Corrugated Sandwich Panels Made from Renewable Material
Authors: Avishek Chanda, Nam Kyeun Kim, Debes Bhattacharyya
Abstract:
The use of renewable substitutes in various semi-structural and structural applications has increased over the last few decades. Sandwich panels have been used for many decades, although research on understanding the effects of the core structures on the panels’ fire-reaction properties is limited. The current work investigates the fire performance of a corrugated sandwich panel made from a renewable, biodegradable, and sustainable material, plywood. A bench-scale fire testing apparatus, the cone calorimeter, was employed to evaluate the required fire-reaction properties of the sandwich core in a panel configuration, with three corrugated layers glued together with face sheets, under a heat irradiance of 50 kW/m2. The study helped in documenting a unique heat release trend associated with the fire performance of the 3-layered corrugated sandwich panels and in understanding the structural stability of the samples in the event of a fire. Furthermore, the peak heat release rate was observed to be around 421 kW/m2, which is significantly lower than that of many polymeric materials reported in the literature. The total smoke production was also observed to be very limited compared to other structural materials, and the total heat release was also nominal. The time to ignition of 21.7 s further underlined the advantages of using the plywood component, since polymeric composites, even with flame-retardant additives, tend to ignite faster. Overall, the corrugated plywood sandwich panels exhibited favourable fire-reaction properties and could have important structural applications. The possible use of structural panels made from biodegradable material opens a new avenue for the use of similar structures in sandwich panel preparation.
Keywords: corrugated sandwich panel, fire-reaction properties, plywood, renewable material
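The headline cone-calorimeter quantities quoted above (peak heat release rate, total heat release, total smoke production, time to ignition) are simple reductions of the measured heat release rate curve; the sketch below shows that generic post-processing and is not the authors' data or analysis script.

```python
import numpy as np

def cone_calorimeter_summary(time_s, hrr_kw_m2, smoke_rate_m2_s=None):
    """Reduce a cone-calorimeter heat release rate (HRR) curve to summary metrics.

    time_s     : sample times, s
    hrr_kw_m2  : heat release rate per unit exposed area, kW/m^2
    Returns peak HRR (kW/m^2) and total heat release (MJ/m^2); total smoke
    production (m^2) is added when a smoke production rate (m^2/s) is supplied.
    """
    summary = {
        "peak_HRR_kW_m2": float(np.max(hrr_kw_m2)),
        "THR_MJ_m2": float(np.trapz(hrr_kw_m2, time_s)) / 1000.0,  # kJ -> MJ
    }
    if smoke_rate_m2_s is not None:
        summary["TSP_m2"] = float(np.trapz(smoke_rate_m2_s, time_s))
    return summary
```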
Procedia PDF Downloads 157