Search results for: heatmap visualization techniques
5149 Implementing Mindfulness into Wellness Plans: Assisting Individuals with Substance Abuse and Addiction
Authors: Michele M. Mahr
Abstract:
The purpose of this study is to educate, inform, and facilitate scholarly conversation and discussion regarding the implementation of mindfulness techniques when working with individuals with substance use disorder (SUD) or addictive behaviors in mental health. Mindfulness can be understood as present-moment, non-judgmental awareness, initiated by concentrated attention that is non-reactive and as openhearted as possible. Individuals with SUD or addiction are typically challenged with triggers, environmental situations, cravings, or social pressures which may deter them from remaining abstinent from their drug of choice or addictive behavior. Mindfulness is also recognized as one of the cognitive and behavioral treatment approaches and is both a physical and mental practice that enables individuals to become aware of internal situations and experiences with undivided attention. That said, mindfulness may be an effective strategy for individuals to employ during these experiences. This study will reveal how mental health practitioners and addiction counselors may find mindfulness to be an essential component of increasing wellness when working with individuals seeking mental health treatment. To this end, mindfulness is simply the ability individuals have to know what is actually happening as it is occurring and what they are experiencing in the moment. In the context of substance abuse and addiction, individuals may employ breathing techniques, meditation, and cognitive restructuring to become aware of present-moment experiences. Furthermore, the notion of mindfulness has been directly connected to the development of neural pathways. The creation of these neural pathways leads to new thoughts, which in turn leads to the development of new coping strategies and adaptive behaviors. Mindfulness strategies can assist individuals in connecting the mind with the body, allowing the individual to remain centered and focused. All of the elements mentioned above are vital components of recovery during substance abuse and addiction treatment. There are a variety of therapeutic modalities applying the key components of mindfulness, such as Mindfulness-Based Stress Reduction (MBSR) and Mindfulness-Based Cognitive Therapy for depression (MBCT). This study will provide an overview of both MBSR and MBCT in relation to treating individuals with substance abuse and addiction. The author will also provide strategies for readers to employ when working with clients. Lastly, the author will create and foster a safe space for discussion and engaging conversation among participants to ask questions, share perspectives, and be educated on the numerous benefits of mindfulness within wellness.
Keywords: mindfulness, wellness, substance abuse, mental health
Procedia PDF Downloads 79
5148 Clinician's Perspective of Common Factors of Change in Family Therapy: A Cross-National Exploration
Authors: Hassan Karimi, Fred Piercy, Ruoxi Chen, Ana L. Jaramillo-Sierra, Wei-Ning Chang, Manjushree Palit, Catherine Martosudarmo, Angelito Antonio
Abstract:
Background: The two psychotherapy camps, the randomized clinical trials (RCTs) and the common factors model, have competitively claimed specific explanations for therapy effectiveness. Recently, scholars called for empirical evidence to show the role of common factors in therapeutic outcome in marriage and family therapy. Purpose: This cross-national study aims to explore how clinicians, across different nations and theoretical orientations, attribute the contribution of common factors to therapy outcome. Method: A brief common factors questionnaire (CFQ, with a Cronbach's alpha of 0.77) was developed and administered in seven nations. A series of statistical analyses (paired-samples t-tests, independent-samples t-tests, ANOVA) was conducted to compare clinicians' perceived contribution of total common factors versus model-specific factors, to compare each pair of common-factor categories, and to compare clinicians from collectivistic nations versus clinicians from an individualistic nation. Results: Clinicians across seven nations attributed 86% of therapeutic change to common factors versus 14% to model-specific factors. Clinicians attributed 34% of therapeutic change to client factors, 26% to therapist factors, 26% to relationship factors, and 14% to model-specific techniques. The ANOVA test indicated that each of the three categories of common factors (client 34%, therapist 26%, relationship 26%) showed a higher contribution to therapeutic outcome than the category of model-specific factors (techniques, 14%). Clinicians with a psychology degree attributed more of the contribution to model-specific factors than clinicians with MFT and counseling degrees, who attributed more of the contribution to client factors. Clinicians from collectivistic nations attributed larger contributions to therapist factors (M=28.96, SD=12.75) than the US clinicians (M=23.22, SD=7.73). The US clinicians attributed a larger contribution to client factors (M=39.02, SD=15.04) than clinicians from the collectivistic nations (M=28.71, SD=15.74). Conclusion: The findings indicate that clinicians across the globe attributed more than two-thirds of therapeutic change to CFs, which emphasizes the importance of training in the common factors model in the field. CFs, like model-specific factors, vary in their contribution to therapy outcome in relation to the specific client, therapist, problem, treatment model, and sociocultural context. Sociocultural expectations and norms should be considered as a context in which both CFs and model-specific factors function toward therapeutic goals. Clinicians need to foster a cultural competency specifically regarding the divergent ways that CFs can be activated due to specific sociocultural values.
Keywords: common factors, model-specific factors, cross-national survey, therapist cultural competency, enhancing therapist efficacy
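For readers who want to see how the comparisons named in the Method section are typically run, a minimal sketch follows; it uses scipy.stats on synthetic attribution percentages with hypothetical group sizes, not the study's data.

```python
# Hedged sketch of the tests named above (paired t-test, independent-samples
# t-test, one-way ANOVA) on hypothetical clinician attribution percentages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # hypothetical number of clinicians per sample

# Hypothetical per-clinician attributions (each pair sums to 100%).
common_factors = rng.normal(86, 8, n)
model_specific = 100 - common_factors

# Paired-samples t-test: the same clinicians rate both factor types.
t_paired, p_paired = stats.ttest_rel(common_factors, model_specific)

# Independent-samples t-test: therapist-factor attribution, collectivistic vs. US sample.
therapist_collectivistic = rng.normal(29, 12.8, n)
therapist_us = rng.normal(23, 7.7, n)
t_ind, p_ind = stats.ttest_ind(therapist_collectivistic, therapist_us, equal_var=False)

# One-way ANOVA across the four factor categories.
client = rng.normal(34, 15, n)
therapist = rng.normal(26, 12, n)
relationship = rng.normal(26, 10, n)
technique = rng.normal(14, 8, n)
f_stat, p_anova = stats.f_oneway(client, therapist, relationship, technique)

print(f"paired t={t_paired:.2f} (p={p_paired:.3g}), "
      f"independent t={t_ind:.2f} (p={p_ind:.3g}), "
      f"ANOVA F={f_stat:.2f} (p={p_anova:.3g})")
```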
Procedia PDF Downloads 288
5147 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data they use is of high quality. This is where the concept of data mesh comes in. Data mesh is an organizational and architectural decentralized approach to data management that can help organizations improve the quality of data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper intends to discuss how a set of elements, including data mesh, are tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata, thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which can help provide clarity in roles and responsibilities and improve data quality. Additionally, data mesh can also contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can help improve overall performance by allowing better insights into business as an effect of better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring, and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This will help in identifying and addressing data quality problems in quick time, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, like data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by AEKIDEN experience feedback. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing their experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 137
5146 Low Power, Highly Linear, Wideband LNA in Wireless SOC
Authors: Amir Mahdavi
Abstract:
In this paper, a highly linear CMOS low-noise amplifier (LNA) for ultra-wideband (UWB) applications is proposed. The proposed LNA uses a linearization technique to improve the second- and third-order intercept points (IIP3). Linearity is improved by removing the common-mode part of the intermodulation components from the cascade-topology current, with the biasing current optimized using symmetrical and asymmetrical biasing circuits. Simulation results show that the maximum gain and noise figure are 6.9 dB and 3.03–4.1 dB over the 3.1–10.6 GHz band, respectively. The power consumption of the LNA core and the IIP3 are 2.64 mW and +4.9 dBm, respectively. Wideband input impedance matching of the LNA is obtained by employing a degeneration inductor (|S11| < -9.1 dB). The proposed UWB LNA is implemented in 0.18 μm CMOS technology.
Keywords: highly linear LNA, low-power LNA, optimal bias techniques
Procedia PDF Downloads 281
5145 Synthesis and Characterization of Zeolite/Fe3O4 Nanocomposite Material and Investigation of Its Catalytic Reaction
Authors: Mojgan Zendehdel, Safura Molla Mohammad Zamani
Abstract:
In this paper, Fe3O4/NaY zeolite nanocomposites with different molar ratios were successfully synthesized and characterized using FT-IR, XRD, TGA, SEM and VSM techniques. The SEM micrographs showed that much of the Fe3O4 was successfully coated by the NaY zeolite layer. The results also show that the magnetism of the products remains stable with added zeolite. The catalytic effect of the nanocomposite was investigated for the esterification reaction under solvent-free conditions. The effects of catalyst amount, reaction time, reaction temperature and catalyst reusability were considered, and the nanocomposite prepared from zeolite with 16.6 percent Fe3O4 showed the highest yield. The catalyst can easily be separated from the reaction mixture with a magnet and can be reused several times.
Keywords: zeolite, magnetic, nanocomposite, esterification
Procedia PDF Downloads 462
5144 Removal of Cobalt(II) and Copper(II) by Solvent Extraction from Sulfate Solutions by Capric Acid in Chloroform
Abstract:
Liquid-liquid extraction is one of the most useful techniques for the selective removal and recovery of metal ions from aqueous solutions, applied in purification processes in numerous chemical and metallurgical industries. In this work, the liquid-liquid extraction of cobalt(II) and copper(II) from aqueous solution by capric acid (HL) in chloroform at 25°C has been studied. Our interest in this paper is to study the effect of the capric acid concentration on the extraction of Co(II) and Cu(II), in order to identify the complexes that may be formed in the organic phase at various capric acid concentrations. Cobalt(II) and copper(II) are extracted as the complexes CoL2(HL)2 and CuL2(HL)2, respectively.
Keywords: capric acid, cobalt(II), copper(II), liquid-liquid extraction
Procedia PDF Downloads 441
5143 Circular Approximation by Trigonometric Bézier Curves
Authors: Maria Hussin, Malik Zawwar Hussain, Mubashrah Saddiqa
Abstract:
We present a trigonometric scheme to approximate a circular arc from its two end points and two end tangents/unit tangents. A rational cubic trigonometric Bézier curve is constructed whose end control points are defined by the end points of the circular arc. The weight functions and the remaining control points of the cubic trigonometric Bézier curve are estimated by a variational approach to reproduce the circular arc. The radius error is calculated and found to be smaller than that of existing techniques.
Keywords: control points, rational trigonometric Bézier curves, radius error, shape measure, weight functions
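As a rough illustration of the radius-error measure the abstract refers to, the sketch below samples a parametric arc approximation and records the maximum deviation from the target radius. The curve used here is the classic polynomial cubic Bézier quarter circle, a stand-in only; the paper's scheme is a rational cubic trigonometric Bézier with variationally estimated weights and a smaller reported error.

```python
# Hedged sketch: compute the radius error of a candidate approximation of a
# circular arc by sampling the curve and measuring |distance-to-centre - radius|.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    t = t[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

c = 4.0 * (np.sqrt(2.0) - 1.0) / 3.0          # ~0.5523, standard quarter-circle constant
p0, p1 = np.array([1.0, 0.0]), np.array([1.0, c])
p2, p3 = np.array([c, 1.0]), np.array([0.0, 1.0])

t = np.linspace(0.0, 1.0, 2001)
pts = cubic_bezier(p0, p1, p2, p3, t)

radius = np.linalg.norm(pts, axis=1)           # distance to the centre (origin)
radius_error = np.max(np.abs(radius - 1.0))    # unit circle, so target radius = 1
print(f"max radius error of the quarter-circle approximation: {radius_error:.2e}")
# ~2.7e-4 for this polynomial cubic; the trigonometric scheme above reports less.
```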
Procedia PDF Downloads 477
5142 Developing Confidence of Visual Literacy through Using MIRO during Online Learning
Authors: Rachel S. E. Lim, Winnie L. C. Tan
Abstract:
Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through the critique and production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic has made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study, therefore, investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 graphic communication diploma students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb's Experiential Learning Cycle, the two lecturers developed students' engagement with visual literacy concepts through different activities that facilitated concrete experiences, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO, with its infinite canvas, smart frameworks, robust set of widgets (e.g., sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons), and platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students' perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) in visual literacy, which predicted the performance score (PS) measured from their application of visual literacy in the creation of their motion design project. While students' learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students' open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as a preparatory ground for generating feedback. With the clarity and sequence of these conditions in place, the prerequisites then lead to the next level of personal action: self-reflection, self-directed learning, and time management. The study results show that the affordances of MIRO can develop visual literacy and make up for the potential pitfalls of student isolation, communication, and engagement during online learning. How MIRO could be used by lecturers to orient students towards learning in visual literacy and studio-based projects is also discussed for future development.
Keywords: design education, graphic communication, online learning, visual literacy
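A minimal sketch of the regression structure reported above (learning strategies feeding engagement, engagement predicting confidence, confidence predicting the performance score) is shown below; the survey values are synthetic placeholders, not the study's responses.

```python
# Hedged sketch of the LS -> E -> C -> PS regression chain on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 30                                   # the cohort size mentioned in the abstract
LS = rng.normal(3.5, 0.6, n)             # hypothetical Likert-scale means
E = 0.6 * LS + rng.normal(0, 0.4, n)     # learning strategies foster engagement
C = 0.7 * E + rng.normal(0, 0.4, n)      # engagement predicts confidence
PS = 10 * C + rng.normal(0, 3, n)        # confidence predicts performance score

X1 = np.column_stack([E, LS])
m1 = LinearRegression().fit(X1, C)               # C ~ E + LS
m2 = LinearRegression().fit(C.reshape(-1, 1), PS)  # PS ~ C

print("C ~ E + LS coefficients:", np.round(m1.coef_, 2),
      "R^2 =", round(m1.score(X1, C), 2))
print("PS ~ C coefficient:", round(float(m2.coef_[0]), 2),
      "R^2 =", round(m2.score(C.reshape(-1, 1), PS), 2))
```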
Procedia PDF Downloads 114
5141 Environmental Pollution and Treatment Technology
Authors: R. Berrached, H. Ait Mahamed, A. Iddou
Abstract:
Water pollution is nowadays a serious problem, owing to the increasing scarcity of water and to the impact of such pollution on human health. Various techniques are used to deal with water pollution. Among the most widely used are the bacterial bed, activated sludge and lagooning as biological processes, and coagulation-flocculation as a physico-chemical process. These processes are very expensive, and their treatment efficiency decreases as the initial pollutant concentration increases. This is why research has been reoriented towards adsorption processes as an alternative to the traditional ones. In our study, we have attempted to exploit the characteristics of two metallic hydroxides, Al and Fe, to purify water contaminated by two industrial dyes, SBL blue and SRL-150 orange. The results have shown the efficiency of the two materials for the SBL blue dye.
Keywords: metallic hydroxides, industrial dyes, purification
Procedia PDF Downloads 325
5140 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan
Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen
Abstract:
In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, which was facilitated through the in-house developed software “Well Inventory Data Entry” (WIDE). This comprehensive and integrated approach enabled centralization of all planned asset components for better well planning, enhancement of performance, and to facilitate continuous improvement through performance tracking and midterm forecasting. Traditionally, this was hard to achieve as, in the past, various legacy methods were used. This paper briefly describes the methods successfully adopted to meet the company’s objective. IAPs were initially designed using computerized spreadsheets. However, as data captured became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, existing in other aspects of the company (namely, the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data-hub for all asset-groups and technical support functions to analyze and infer from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE’s goal of operational efficiency, asset-groups continuously add their parameters in a series of predefined workflows that enable the creation of a structured process which allows risk factors to be flagged and helps mitigation of the same. This tool dictates assigned responsibilities for all stakeholders in a method that enables continuous updates for daily performance measures and operational use. The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality amongst all asset-groups and technical support groups to update contents of their respective planning parameters. The home-grown entity was implemented across the entire company and tailored to feed in internal processes of several stakeholders across the company. Furthermore, the implementation of change management and root cause analysis techniques captured the dysfunctionality of previous plans, which in turn resulted in the improvement of already existing mechanisms of planning within the IAP. The detailed elucidation of the 2 year plan flagged any upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool’s capabilities beyond planning and into operations (such as Asset Production Forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrated the planning parameters of various assets and technical support groups. These techniques enables the enhancement of integrating planning data workflows that ultimately lay the founding plans towards an epoch of accuracy and reliability. As such, benchmarks of establishing a set of standard goals are created to ensure the constant improvement of the efficiency of the entire planning and operational structure.Keywords: automation, integration, value, communication
Procedia PDF Downloads 146
5139 Hierarchical Zeolites as Catalysts for Cyclohexene Epoxidation Reactions
Authors: Agnieszka Feliczak-Guzik, Paulina Szczyglewska, Izabela Nowak
Abstract:
Catalyst-assisted oxidation reactions are among the key reactions exploited by various industries. Conducting them yields essential compounds and intermediates, such as alcohols, epoxides, aldehydes, ketones, and organic acids. Researchers are devoting more and more attention to developing active and selective materials that find application in many catalytic reactions, such as cyclohexene epoxidation. This reaction yields 1,2-epoxycyclohexane and 1,2-diols as the main products. These compounds are widely used as intermediates in the perfume industry and in the synthesis of drugs and lubricants. Hence, our research aimed to use hierarchical zeolites modified with transition metal ions, e.g., Nb, V, and Ta, in the epoxidation reaction of cyclohexene using microwave heating. Hierarchical zeolites are materials with secondary porosity, mainly in the mesopore range, in contrast to purely microporous zeolites. In the course of the research, materials based on two commercial zeolites, with Faujasite (FAU) and Zeolite Socony Mobil-5 (ZSM-5) structures, were synthesized and characterized by various techniques, such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and low-temperature nitrogen adsorption/desorption isotherms. The materials obtained were then used in a cyclohexene epoxidation reaction, which was carried out as follows: catalyst (0.02 g), cyclohexene (0.1 cm3), acetonitrile (5 cm3) and dihydrogen peroxide (0.085 cm3) were placed in a suitable glass reaction vessel with a magnetic stirrer inside a microwave reactor. Reactions were carried out at 45 °C for 6 h (samples were taken every hour). The reaction mixtures were filtered to separate the liquid products from the solid catalyst and then transferred to 1.5 cm3 vials for chromatographic analysis. The characterization techniques (XRD and low-temperature nitrogen adsorption/desorption isotherms) confirmed the acquisition of additional secondary porosity while the structure of the commercial zeolite was preserved. The activity results for the niobium-modified hierarchical catalyst in the cyclohexene epoxidation reaction indicate that the conversion of cyclohexene, after 6 h of running the process, is about 70%. The main product of the reaction was 1,2-cyclohexanediol (selectivity > 80%). In addition to this product, adipic acid, cyclohexanol, cyclohex-2-en-1-one, and 1,2-epoxycyclohexane were also obtained. Furthermore, in a blank test, no cyclohexene conversion was obtained after 6 h of reaction. Acknowledgments: The work was carried out within the project "Advanced biocomposites for tomorrow's economy BIOG-NET," funded by the Foundation for Polish Science from the European Regional Development Fund (POIR.04.04.00-00-1792/18-00).
Keywords: epoxidation, oxidation reactions, hierarchical zeolites, synthesis
Procedia PDF Downloads 78
5138 Analyzing a Tourism System by Bifurcation Theory
Authors: Amin Behradfar
Abstract:
Tourism has a direct impact on national revenue for all touristic countries. It creates work opportunities, industries, and several investments that serve and raise nations' performance and cultures. This paper is devoted to analyzing the dynamical behaviour of a four-dimensional non-linear tourism-based social-ecological system using codimension-two bifurcation theory; in particular, we investigate its cusp bifurcation. Implications of our mathematical results for the tourism industry are discussed. Moreover, the profitability, compatibility and sustainability of the tourism system are demonstrated with the aid of the cusp bifurcation and numerical techniques.
Keywords: tourism-based social-ecological dynamical systems, cusp bifurcation, center manifold theory, profitability, compatibility, sustainability
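For readers unfamiliar with the cusp bifurcation, the sketch below counts equilibria of the one-dimensional cusp normal form over a parameter grid; it is illustrative only and is not the paper's four-dimensional tourism model.

```python
# Hedged sketch of a codimension-two cusp on its normal form x' = b1 + b2*x - x^3:
# inside the cusp-shaped region 4*b2^3 > 27*b1^2 there are three equilibria
# (bistability); outside there is a single equilibrium.
import numpy as np

def n_equilibria(b1, b2):
    """Number of real equilibria of x' = b1 + b2*x - x**3."""
    roots = np.roots([-1.0, 0.0, b2, b1])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return len(np.unique(np.round(real, 6)))

b1_grid = np.linspace(-1.0, 1.0, 5)
b2_grid = np.linspace(-1.0, 1.0, 5)
for b2 in b2_grid:
    counts = [n_equilibria(b1, b2) for b1 in b1_grid]
    print(f"b2={b2:+.1f}: {counts}")
# Entries equal to 3 trace out the cusp region 4*b2**3 > 27*b1**2, whose boundary
# (two fold curves meeting at the origin) is the cusp bifurcation set.
```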
Procedia PDF Downloads 503
5137 Foundation Settlement Determination: A Simplified Approach
Authors: Adewoyin O. Olusegun, Emmanuel O. Joshua, Marvel L. Akinyemi
Abstract:
The heterogeneous nature of the subsurface requires that it be dealt with using factual information rather than assumptions or generalized equations. Therefore, there is a need to determine the actual rate of settlement possible in the soil before structures are built on it. This information will help in determining the type of foundation design and the kind of reinforcement that will be necessary in construction. This paper presents a simplified and faster approach for determining foundation settlement in any type of soil using real field data acquired from seismic refraction techniques and cone penetration tests. This approach was also able to determine the depth of settlement of each stratum of soil. The results obtained revealed the different settlement times and depths of settlement possible.
Keywords: heterogeneous, settlement, foundation, seismic, technique
Procedia PDF Downloads 445
5136 The Risk of Ground Movements after Digging Two Parallel Vertical Tunnels in Urban Areas
Authors: Djelloul Chafia, Demagh Rafik, Kareche Toufik
Abstract:
Human activities carried out without precautions accelerate the degradation of the soil structure and reduce its resistance. Operations such as tunnel construction may exert a more or less permanent influence on the ground surrounding them; because these structures alter the soil, it is necessary to predict their impacts and take suitable measures. This research is a numerical analysis that deals with the risks and effects due to the weakening of the soil after digging two parallel vertical circular tunnels in urban areas, and it suggests forecasting techniques based essentially on the organization of underground space. The simulations are performed using the finite-difference code FLAC in a two-dimensional case with elasto-plastic soil behavior.
Keywords: soil, weakening, degradation, prevention, tunnel
Procedia PDF Downloads 557
5135 Survey on Big Data Stream Classification by Decision Tree
Authors: Mansoureh Ghiasabadi Farahani, Samira Kalantary, Sara Taghi-Pour, Mahboubeh Shamsi
Abstract:
Nowadays, the development of computer technology and its recent applications provides access to new types of data that have not been considered by traditional data analysts. Two particularly interesting characteristics of such data sets are their huge size and streaming nature. Incremental learning techniques have been used extensively to address the data stream classification problem. This paper presents a concise survey of the obstacles and requirements involved in classifying data streams using decision trees. The most important issue is to maintain a balance between accuracy and efficiency: the algorithm should provide good classification performance with a reasonable response time.
Keywords: big data, data streams, classification, decision tree
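As a brief illustration of the statistic that many streaming decision-tree learners in this area rely on (the Hoeffding bound behind the VFDT/Hoeffding-tree family), the sketch below estimates how many streamed examples are needed before a split choice is confident; the gain gap and confidence level are hypothetical.

```python
# Hedged sketch: a split attribute is chosen once the observed gain difference
# exceeds the Hoeffding bound eps = sqrt(R^2 * ln(1/delta) / (2 n)).
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """Worst-case deviation of an n-sample mean of values in [0, value_range]."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

R = 1.0                    # range of information gain for a two-class problem (1 bit)
delta = 1e-6               # allowed probability of picking the wrong split attribute
observed_gain_gap = 0.05   # hypothetical gain(best) - gain(second best) seen so far

# Smallest n with hoeffding_bound(R, delta, n) <= observed_gain_gap:
n_needed = math.ceil(R ** 2 * math.log(1.0 / delta) / (2.0 * observed_gain_gap ** 2))
print(f"split can be made confidently after about {n_needed} streamed examples")
print(f"check: bound at n={n_needed} is {hoeffding_bound(R, delta, n_needed):.4f}")
```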
Procedia PDF Downloads 522
5134 Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach
Authors: Joseph C. Chen
Abstract:
Machine vision systems provide automatic inspection that can reduce manufacturing costs considerably. However, only a few principles have been established for optimizing a machine vision system and helping it function more accurately in industrial practice; most existing design techniques for improving its accuracy are complicated and impractical. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of a machine vision system when it is used as a direct measurement technique. The research follows a case study showing how the Six Sigma DMAIC methodology has been put into use.
Keywords: DMAIC, machine vision system, process capability, Taguchi Parameter Design
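A minimal sketch of the process-capability check that typically appears in the Measure and Control phases of such a DMAIC study is given below; the specification limits and vision-system measurements are made-up examples, not the case study's data.

```python
# Hedged sketch: Cp and Cpk for a dimension measured by a machine vision system.
import numpy as np

measurements = np.array([10.02, 9.98, 10.01, 10.00, 9.99, 10.03, 9.97,
                         10.01, 10.00, 9.99, 10.02, 9.98])   # mm, hypothetical
LSL, USL = 9.90, 10.10                                       # hypothetical spec limits

mu = measurements.mean()
sigma = measurements.std(ddof=1)                  # sample standard deviation

cp = (USL - LSL) / (6 * sigma)                    # potential capability
cpk = min(USL - mu, mu - LSL) / (3 * sigma)       # actual capability (accounts for centring)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}  (>= 1.33 is a common capability target)")
```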
Procedia PDF Downloads 440
5133 Characterization of Particle Charge from Aerosol Generation Process: Impact on Infrared Signatures and Material Reactivity
Authors: Erin M. Durke, Monica L. McEntee, Meilu He, Suresh Dhaniyala
Abstract:
Aerosols are one of the most important and significant surfaces in the atmosphere. They can influence weather, absorption, and reflection of light, and reactivity of atmospheric constituents. A notable feature of aerosol particles is the presence of a surface charge, a characteristic imparted via the aerosolization process. The existence of charge can complicate the interrogation of aerosol particles, so many researchers remove or neutralize aerosol particles before characterization. However, the charge is present in real-world samples, and likely has an effect on the physical and chemical properties of an aerosolized material. In our studies, we aerosolized different materials in an attempt to characterize the charge imparted via the aerosolization process and determine what impact it has on the aerosolized materials’ properties. The metal oxides, TiO₂ and SiO₂, were aerosolized expulsively and then characterized, using several different techniques, in an effort to determine the surface charge imparted upon the particles via the aerosolization process. Particle charge distribution measurements were conducted via the employment of a custom scanning mobility particle sizer. The results of the charge distribution measurements indicated that expulsive generation of 0.2 µm SiO₂ particles produced aerosols with upwards of 30+ charges on the surface of the particle. Determination of the degree of surface charging led to the use of non-traditional techniques to explore the impact of additional surface charge on the overall reactivity of the metal oxides, specifically TiO₂. TiO₂ was aerosolized, again expulsively, onto a gold-coated tungsten mesh, which was then evaluated with transmission infrared spectroscopy in an ultra-high vacuum environment. The TiO₂ aerosols were exposed to O₂, H₂, and CO, respectively. Exposure to O₂ resulted in a decrease in the overall baseline of the aerosol spectrum, suggesting O₂ removed some of the surface charge imparted during aerosolization. Upon exposure to H₂, there was no observable rise in the baseline of the IR spectrum, as is typically seen for TiO₂, due to the population of electrons into the shallow trapped states and subsequent promotion of the electrons into the conduction band. This result suggests that the additional charge imparted via aerosolization fills the trapped states, therefore no rise is seen upon exposure to H₂. Dosing the TiO₂ aerosols with CO showed no adsorption of CO on the surface, even at lower temperatures (~100 K), indicating the additional charge on the aerosol surface prevents the CO molecules from adsorbing to the TiO₂ surface. The results observed during exposure suggest that the additional charge imparted via aerosolization impacts the interaction with each probe gas.Keywords: aerosols, charge, reactivity, infrared
Procedia PDF Downloads 123
5132 Pre-Shared Key Distribution Algorithms' Attacks for Body Area Networks: A Survey
Authors: Priti Kumari, Tricha Anjali
Abstract:
Body Area Networks (BANs) have emerged as the most promising technology for pervasive health care applications. Since they facilitate communication of very sensitive health data, information leakage in such networks can put human life at risk, and hence security inside BANs is a critical issue. Safe distribution and periodic refreshment of cryptographic keys are needed to ensure the highest level of security. In this paper, we focus on key distribution techniques and how they are categorized for BANs. State-of-the-art pre-shared key distribution algorithms are surveyed, and possible attacks on these algorithms are demonstrated with examples.
Keywords: attacks, body area network, key distribution, key refreshment, pre-shared keys
Procedia PDF Downloads 366
5131 Identification of Suitable Sites for Rainwater Harvesting in Salt Water Intruded Area by Using Geospatial Techniques in Jafrabad, Amreli District, India
Authors: Pandurang Balwant, Ashutosh Mishra, Jyothi V., Abhay Soni, Padmakar C., Rafat Quamar, Ramesh J.
Abstract:
Seawater intrusion into coastal aquifers has become one of the major environmental concerns. Although it is a natural phenomenon, it can be induced by anthropogenic activities such as excessive exploitation of groundwater, seacoast mining, etc. The geological and hydrogeological conditions, including groundwater heads and groundwater pumping patterns in coastal areas, also influence the magnitude of seawater intrusion. However, this problem can be remediated by taking preventive measures such as rainwater harvesting and artificial recharge. The present study is an attempt to identify suitable sites for rainwater harvesting in the salt-intrusion-affected area near the coastal aquifer of Jafrabad town, Amreli district, Gujarat, India. The physico-chemical water quality results show that, out of 25 groundwater samples collected from the study area, most were found to contain high concentrations of Total Dissolved Solids (TDS), with major fractions of Na and Cl ions. The Cl/HCO3 ratio was also found to be greater than 1, which indicates saltwater contamination in the study area. A geophysical survey was conducted at nine sites within the study area to explore the extent of seawater contamination. From the inverted resistivity sections, low-resistivity zones (<3 Ohm m) associated with seawater contamination were demarcated in the north block pit and south block pit of the NCJW mines, Mitiyala village, Lotpur and Lunsapur village at depths of 33 m, 12 m, 40 m, 37 m and 24 m, respectively. Geospatial techniques in combination with the Analytical Hierarchy Process (AHP), considering hydrogeological factors, geographical features, drainage pattern, water quality and geophysical results for the study area, were exploited to identify potential zones for rainwater harvesting. A rainwater harvesting suitability model was developed in ArcGIS 10.1 software, and a rainwater harvesting suitability map for the study area was generated. AHP in combination with weighted overlay analysis is an appropriate method for identifying rainwater harvesting potential zones. The suitability map can be further utilized as a guidance map for the development of rainwater harvesting infrastructure in the study area, either for artificial groundwater recharge facilities or for direct use of harvested rainwater.
Keywords: analytical hierarchy process, groundwater quality, rainwater harvesting, seawater intrusion
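A minimal sketch of the AHP weighting and weighted-overlay step described above is given below; the pairwise-comparison matrix, criteria and site scores are hypothetical, not the study's actual judgements.

```python
# Hedged sketch: derive AHP criterion weights from a pairwise-comparison matrix
# (principal eigenvector), check the consistency ratio, then apply a weighted overlay.
import numpy as np

criteria = ["slope", "drainage density", "land use", "groundwater quality"]
A = np.array([[1,   3,   5,   2],
              [1/3, 1,   3,   1/2],
              [1/5, 1/3, 1,   1/4],
              [1/2, 2,   4,   1  ]], dtype=float)   # Saaty 1-9 scale, illustrative

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)          # consistency index
RI = 0.90                                # tabulated random index for n = 4
CR = CI / RI                             # judgements usually accepted when CR < 0.10

print(dict(zip(criteria, np.round(weights, 3))), f"CR = {CR:.3f}")

# Weighted overlay for one hypothetical candidate site (criterion scores 1-5):
site_scores = np.array([4, 3, 5, 2])
print("suitability index:", float(weights @ site_scores))
```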
Procedia PDF Downloads 175
5130 Nanoparticles Modification by Grafting Strategies for the Development of Hybrid Nanocomposites
Authors: Irati Barandiaran, Xabier Velasco-Iza, Galder Kortaberria
Abstract:
Hybrid inorganic/organic nanostructured materials based on block copolymers are of considerable interest in the field of nanotechnology, taking into account that these nanocomposites combine the properties of the polymer matrix with the unique properties of the added nanoparticles. The use of block copolymers as templates offers the opportunity to control the size and the distribution of inorganic nanoparticles. This research is focused on the surface modification of inorganic nanoparticles to achieve a good interface between nanoparticles and polymer matrices, which hinders nanoparticle aggregation. The aim of this work is to obtain a good and selective dispersion of Fe3O4 magnetic nanoparticles into different types of block copolymers, such as poly(styrene-b-methyl methacrylate) (PS-b-PMMA), poly(styrene-b-ε-caprolactone) (PS-b-PCL), poly(isoprene-b-methyl methacrylate) (PI-b-PMMA) or poly(styrene-b-butadiene-b-methyl methacrylate) (SBM), by using different grafting strategies. Fe3O4 magnetic nanoparticles have been surface-modified with polymer or block copolymer brushes following different grafting methods (grafting to, grafting from and grafting through) to achieve selective location of the nanoparticles in the desired domains of the block copolymers. The morphology of the fabricated hybrid nanocomposites was studied by means of atomic force microscopy (AFM), and different annealing methods were used with the aim of obtaining well-ordered nanostructured composites. Additionally, the nanoparticle amount was varied in order to investigate the effect of nanoparticle content on the morphology of the block copolymer. Nowadays, different characterization methods are used to investigate the magnetic properties of nanometer-scale electronic devices; here, two different techniques were used to characterize the synthesized nanocomposites. First, magnetic force microscopy (MFM) was used to investigate the magnetic properties qualitatively, taking into account that this technique allows magnetic domains on the sample surface to be distinguished. Second, magnetic characterization was carried out with a vibrating sample magnetometer and a superconducting quantum interference device. These measurements demonstrated that the magnetic properties of the nanoparticles have been transferred to the nanocomposites, which exhibit superparamagnetic behavior similar to that of the maghemite nanoparticles at room temperature. The obtained advanced nanostructured materials could find possible applications in the field of dye-sensitized solar cells and electronic nanodevices.
Keywords: atomic force microscopy, block copolymers, grafting techniques, iron oxide nanoparticles
Procedia PDF Downloads 262
5129 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.Keywords: arsenic, fluoride, groundwater contamination, logistic regression
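As a rough sketch of the hazard-modelling approach described above, the following fits a logistic regression for the probability that arsenic exceeds the WHO limit of 10 µg/L; the predictor names and data are synthetic stand-ins for GAP's soil, geology and climate layers.

```python
# Hedged sketch: logistic regression of exceedance (As > 10 ug/L) on
# environmental predictors, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(7.0, 0.8, n),    # hypothetical soil pH
    rng.normal(0.3, 0.1, n),    # hypothetical slope / wetness proxy
    rng.normal(25.0, 5.0, n),   # hypothetical mean annual temperature
])
# Synthetic "truth": exceedance more likely at high pH and low slope.
logit = -20 + 2.5 * X[:, 0] - 6.0 * X[:, 1] + 0.05 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
p_exceed = model.predict_proba(X)[:, 1]        # probability that As > 10 ug/L
print("fitted coefficients:", np.round(model.coef_[0], 2))
print("share of wells with modelled exceedance probability > 0.5:",
      round(float((p_exceed > 0.5).mean()), 2))
```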
Procedia PDF Downloads 348
5128 Determination of Myocardial Function Using Heart Accumulated Radiopharmaceuticals
Authors: C. C .D. Kulathilake, M. Jayatilake, T. Takahashi
Abstract:
The myocardium is composed of specialized muscle that relies mainly on fatty acid and sugar metabolism, which contributes substantially to heart function. Changes in the cardiac energy-producing system during heart failure have been demonstrated using autoradiography techniques. This study focused on evaluating sugar and fatty acid metabolism in the myocardium, as the cardiac energy-producing system, using heart-accumulated radiopharmaceuticals. Two sets of autoradiographs of heart cross-sections of male Lewis rats were analyzed, and the time-accumulation curves were obtained using MATLAB image processing software to evaluate fatty acid and sugar metabolic functions.
Keywords: autoradiographs, fatty acid, radiopharmaceuticals, sugar
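A Python stand-in for the MATLAB image-analysis step mentioned above is sketched below: it averages the tracer signal inside a myocardial region of interest across a stack of autoradiograph frames to form a time-accumulation curve; the image stack and ROI here are synthetic.

```python
# Hedged sketch: mean ROI intensity per frame -> time-accumulation curve.
import numpy as np

rng = np.random.default_rng(5)
n_frames, h, w = 12, 64, 64
stack = rng.poisson(5, size=(n_frames, h, w)).astype(float)   # background counts
yy, xx = np.ogrid[:h, :w]
roi = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2                # circular "myocardium" ROI
uptake = 1 - np.exp(-np.arange(n_frames) / 4.0)                # synthetic accumulation
stack[:, roi] += 40 * uptake[:, None]                          # add tracer signal in the ROI

time_accumulation = stack[:, roi].mean(axis=1)                 # mean ROI intensity per frame
print(np.round(time_accumulation, 1))
```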
Procedia PDF Downloads 452
5127 Shakespeare's Hamlet in Ballet: Transformation of an Archival Recording of a Neoclassical Ballet Performance into a Contemporary Transmodern Dance Video Applying Postmodern Concepts and Techniques
Authors: Svebor Secak
Abstract:
This four-year artistic research project hosted by the University of New England, Australia has set the goal to experiment with non-conventional ways of presenting a language-based narrative in dance using insights of recent theoretical writing on performance, addressing the research question: How to transform an archival recording of a neoclassical ballet performance into a new artistic dance video by implementing postmodern philosophical concepts? The Creative Practice component takes the form of a dance video Hamlet Revisited which is a reworking of the archival recording of the neoclassical ballet Hamlet, augmented by new material, produced using resources, technicians and dancers of the Croatian National Theatre in Zagreb. The methodology for the creation of Hamlet Revisited consisted of extensive field and desk research after which three dancers were shown the recording of original Hamlet and then created their artistic response to it based on their reception and appreciation of it. The dancers responded differently, based upon their diverse dancing backgrounds and life experiences. They began in the role of the audience observing video of the original ballet and transformed into the role of the choreographer-performer. Their newly recorded material was edited and juxtaposed with the archival recording of Hamlet and other relevant footage, allowing for postmodern features such as aleatoric content, synchronicity, eclecticism and serendipity, that way establishing communication on a receptive reader-response basis, thus blending the roles of the choreographer, performer and spectator, creating an original work of art whose significance lies in the relationship and communication between styles, old and new choreographic approaches, artists and audiences and the transformation of their traditional roles and relationships. In editing and collating, the following techniques were used with the intention to avoid the singular narrative: fragmentation, repetition, reverse-motion, multiplication of images, split screen, overlaying X-rays, image scratching, slow-motion, freeze-frame and simultaneity. Key postmodern concepts considered were: deconstruction, diffuse authorship, supplementation, simulacrum, self-reflexivity, questioning the role of the author, intertextuality and incredulity toward grand narratives - departing from the original story, thus personalising its ontological themes. From a broad brush of diverse concepts and techniques applied in an almost prescriptive manner, the project focuses on intertextuality that proves to be valid on at least two levels. The first is the possibility of a more objective analysis in combination with a semiotic structuralist approach moving from strict relationships between signs to a multiplication of signifiers, considering the dance text as an open construction, containing the elusive and enigmatic quality of art that leaves the interpretive position open. The second one is the creation of the new work where the author functions as the editor, aware and conscious of the interplay of disparate texts and their sources which co-act in the mind during the creative process. It is argued here that the eclectic combination of the old and new material through constant oscillations of different discourses upon the same topic resulted in a transmodern integrationist recent work of art that might be applied as a model for reconsidering existing choreographic creations.Keywords: Ballet Hamlet, intertextuality, transformation, transmodern dance video
Procedia PDF Downloads 258
5126 Cubical Representation of Prime and Essential Prime Implicants of Boolean Functions
Authors: Saurabh Rawat, Anushree Sah
Abstract:
Karnaugh maps (K-maps) are generally thought to be the simplest means of obtaining solutions to Boolean equations. The cubical representation of Boolean equations is an alternative way of arriving at a solution that would otherwise be worked out with truth tables, Boolean laws, and the various features of Karnaugh maps. The largest possible k-cubes that exist for a given function are equivalent to its prime implicants. A technique for the minimization of logic functions is pursued through these cubical methods. The main purpose is to raise awareness of, and exploit, the advantages of cubical techniques in the minimization of logic functions. All this is done with the aim of achieving a minimal-cost solution.
Keywords: K-maps, don't care conditions, Boolean equations, cubes
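As a concrete illustration of the cube view described above, the sketch below enumerates all cubes of a small Boolean function, keeps the maximal ones (prime implicants) and marks the essential primes; the example on-set is arbitrary.

```python
# Hedged sketch: prime and essential prime implicants of a 4-variable function
# by exhaustive cube enumeration (each variable fixed to 0, 1 or '-').
from itertools import product

n_vars = 4
on_set = {0, 4, 5, 7, 8, 11, 12, 15}          # example minterms of f(a, b, c, d)

def cube_minterms(cube):
    """All minterms covered by a cube such as ('0', '-', '1', '-')."""
    choices = [('0', '1') if v == '-' else (v,) for v in cube]
    return {int(''.join(bits), 2) for bits in product(*choices)}

# Implicants: cubes whose covered minterms all lie in the on-set.
cubes = [c for c in product('01-', repeat=n_vars) if cube_minterms(c) <= on_set]

def contains(big, small):
    """True if cube `big` covers every minterm of cube `small`."""
    return all(b == '-' or b == s for b, s in zip(big, small))

# Prime implicants: implicants not contained in any larger implicant.
primes = [c for c in cubes
          if not any(other != c and contains(other, c) for other in cubes)]

# Essential primes: the sole prime covering at least one on-set minterm.
essential = []
for m in on_set:
    covering = [p for p in primes if m in cube_minterms(p)]
    if len(covering) == 1 and covering[0] not in essential:
        essential.append(covering[0])

print("prime implicants:", [''.join(p) for p in primes])
print("essential primes:", [''.join(p) for p in essential])
```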
Procedia PDF Downloads 386
5125 Approaches to Diagnosis of Ectopic Solid Organs in the Abdominopelvic Cavity
Authors: Van-Ngoc-Cuong Le, Ngoc-Quy Le
Abstract:
Ectopic solid organs and tissues encountered in the abdominopelvic cavity include the accessory liver lobe, accessory spleens (ectopic splenic tissue), wandering spleen, ectopic pancreatic tissue, ectopic kidney (pancake kidney), cryptorchidism (undescended or ectopic testis), and ectopic endometriosis. The diagnostic approach relies on imaging techniques, of which magnetic resonance imaging is the most important, and is illustrated through a clinical case study and report. Ectopic organs and tumors are easy to confuse; this is a diagnostic concern, and the practical challenges encountered and the solutions adopted in the field of image analysis are also discussed.
Keywords: ectopic, accessory, wandering, tumor
Procedia PDF Downloads 11
5124 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring
Authors: Younghoon Kim, Seoung Bum Kim
Abstract:
One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot satisfy. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on the mixed integer programming technique, which can solve problems formulated with continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of enclosed normal observations equals a constant value specified by the user. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogously to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart exploiting the effectiveness of the algorithm is proposed. This chart uses a monitoring statistic that characterizes the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated and real process data from a thin-film transistor liquid crystal display process.
Keywords: control chart, mixed integer programming, one-class classification, support vector data description
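A heavily simplified sketch of the idea above is shown below: it encloses exactly a user-specified number of normal observations inside a spherical boundary so that the Phase I false-alarm proportion is fixed by construction. Unlike the paper's mixed integer program, which optimizes the centre and radius jointly and can be kernelized, this sketch fixes the centre at the coordinate-wise median.

```python
# Hedged, simplified sketch of an exact-count spherical boundary for monitoring.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(0, 1, size=(200, 5))           # synthetic in-control Phase I data
alpha = 0.05                                  # target Type I error rate

k = int(np.floor((1 - alpha) * len(X)))       # number of points to enclose exactly
center = np.median(X, axis=0)
dists = np.linalg.norm(X - center, axis=1)
radius = np.sort(dists)[k - 1]                # k-th smallest distance -> exactly k inside

inside = dists <= radius
print(f"enclosed {inside.sum()} of {len(X)} points "
      f"(target {k}); control-limit radius = {radius:.3f}")

# Phase II monitoring statistic: distance of a new observation from the centre;
# the chart signals when this distance exceeds the control-limit radius.
x_new = rng.normal(0.5, 1, size=5)
print("new point signals:", bool(np.linalg.norm(x_new - center) > radius))
```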
Procedia PDF Downloads 174
5123 The Corrosion Resistance of the 32CrMoV13 Steel Nitriding
Authors: Okba Belahssen, Lazhar Torchane, Said Benramache, Abdelouahed Chala
Abstract:
This paper presents the corrosion behavior of plasma-nitrided 32CrMoV13 steel. Two kinds of samples were tested: non-treated and plasma-nitrided samples. The structure of the layers was determined by X-ray diffraction, while the morphology was observed by scanning electron microscopy (SEM). The corrosion behavior was evaluated by electrochemical techniques (potentiodynamic curves and electrochemical impedance spectroscopy). The corrosion tests were carried out in an acidic chloride solution (1 M HCl). Experimental results showed that the nitrides ε-Fe2−3N and γ′-Fe4N present in the white layer are nobler than the substrate but may promote, through a galvanic effect, localized corrosion via open porosity. The best corrosion protection was observed for the nitrided sample.
Keywords: plasma-nitrided, 32CrMoV13 steel, corrosion, EIS
Procedia PDF Downloads 588
5122 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.Keywords: cost prediction, machine learning, project management, random forest, neural networks
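A minimal sketch of the Random Forest part of the workflow described above is given below; the activity-level features (e.g., scope_changes, material_delay_days) echo the risk factors mentioned in the abstract but are hypothetical, as is the synthetic data.

```python
# Hedged sketch: Random Forest regression of cost overrun with feature importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "planned_cost":        rng.uniform(10, 500, n),       # k$, hypothetical
    "planned_duration":    rng.uniform(5, 90, n),         # days
    "scope_changes":       rng.poisson(1.5, n),
    "material_delay_days": rng.exponential(3, n),
    "crew_size":           rng.integers(2, 15, n),
})
# Synthetic target: overrun (%) driven mainly by scope changes and delays.
df["cost_overrun_pct"] = (8 * df.scope_changes + 1.5 * df.material_delay_days
                          + 0.02 * df.planned_duration + rng.normal(0, 4, n))

X = df.drop(columns="cost_overrun_pct")
y = df["cost_overrun_pct"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test R^2:", round(rf.score(X_te, y_te), 3))
print("feature importances:",
      dict(zip(X.columns, np.round(rf.feature_importances_, 3))))
```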
Procedia PDF Downloads 60
5121 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation
Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo VáZquez, Jonas Funkquist, Sotirios Thanopoulos
Abstract:
One of the important requirements in many code generation projects is defining some of the model parameters tunable. This helps to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation by MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining the parameters tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters tunable in generated runtime modules. Three techniques are proposed for this purpose, including normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the results of proposed approaches. A brief summary of the study results is presented in the following. First of all, the parameters defined tunable and used in defining the values of other Simulink elements (e.g., gain value of a gain block) could be changed after the code generation and this value updating will affect the values of all elements defined based on the values of the tunable parameter. For instance, if parameter K=1 is defined as a tunable parameter in the code generation process and this parameter is used to gain a gain block in Simulink, the gain value for the gain block is equal to 1 in the gain block TwinCAT environment after the code generation. But, the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB). Then, the gain value of the gain block will change to 2. Secondly, adding a callback function in the form of “pre-load function,” “post-load function,” “start function,” and will not help to make the parameters tunable without performing a new code generation. This means that any MATLAB files should be run before performing the code generation. The parameters defined/calculated in this file will be used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make these parameters flexible since the MATLAB files will not be attached to the generated code. Therefore, to change the parameters defined/calculated in these files, the code generation should be done again. However, adding these files as callback functions forces MATLAB to run them before the code generation, and there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter in defining/calculating the values of other parameters through the mask is an efficient method to change the value of the latter parameters after the code generation. For instance, if tunable parameter K is used in calculating the value of two other parameters K1 and K2 and, after the code generation, the value of K is updated in TwinCAT environment, the value of parameters K1 and K2 will also be updated (without any new code generation).Keywords: code generation, MATLAB, tunable parameters, TwinCAT
Procedia PDF Downloads 228
5120 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective
Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou
Abstract:
The analysis of geographic inequality heavily relies on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the outcomes of the analyzed results, well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on the spatial partition principles of homogeneous consideration of the number of population and households. Compared to the outcomes of the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and the township unit, respectively, on mortality data and examines the spatial characteristics of their outcomes. For the mortality data between January 1st, 2008 and December 31st, 2010 for Taitung County, the all-cause age-standardized death rate (ASDR) ranges from 571 to 1757 per 100,000 persons, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of the statistical data referring to the TGSC in this research are strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow that consists of the tasks of processing death certificates, geocoding street addresses, quality assurance of geocoded results, automatic calculation of statistical measures, standardized encoding of measures, and geo-visualization of statistical outcomes is developed. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid wrong decision making.
Keywords: mortality map, spatial patterns, statistical area, variation
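For reference, the directly age-standardised death rate (ASDR) quoted above is a weighted sum of age-specific rates; a minimal sketch for one hypothetical statistical area follows, with illustrative age bands, counts and standard-population weights.

```python
# Hedged sketch: directly age-standardised death rate per 100,000 for one area.
import numpy as np

age_bands        = ["0-14", "15-44", "45-64", "65+"]
deaths           = np.array([   2,     15,     60,    180])   # deaths in the area
population       = np.array([4000,   9000,   6000,   3000])   # area person-years
standard_weights = np.array([0.25,   0.45,   0.20,   0.10])   # standard population shares

age_specific_rates = deaths / population                       # per person-year
asdr = float(np.sum(age_specific_rates * standard_weights) * 100_000)
print(f"age-standardised death rate: {asdr:.0f} per 100,000")
```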
Procedia PDF Downloads 260