Search results for: operational approach
11236 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study
Authors: Anna Ching-Yu Wong
Abstract:
Introduction: Textbooks account for a sizable portion of the overall cost of higher education for students. There is broad consensus that open educational resources (OER) reduce textbook costs and give students a way to receive high-quality learning materials at little or no cost. However, there is less agreement over exactly how much is saved. This study presents an approach for calculating OER savings, using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester, Fall 2022. The purpose of collecting these data is to understand how much could potentially be saved by using OER materials and to establish a record for future studies. Literature Review: In past years, researchers have documented how the rising cost of textbooks disproportionately harms students in higher education institutions and have estimated the average cost of a textbook. For example, Nyamweya (2018), using a simple formula, found that students save on average $116.94 per course when OER is adopted in place of traditional commercial textbooks. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimated average saving of $100 per course. Allen and Wiley (2016), presenting multiple cost-savings studies at the 2016 Open Education Conference, concluded that $100 was a reasonable per-course savings estimate. Ruth (2018) calculated an average textbook cost of $79.37 per course. Hilton et al. (2014), in a study of seven community colleges across the nation, found the average textbook cost to be $90.61. There is thus less agreement over exactly how much would be saved by adopting OER in a course. This study uses SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: Identify non-OER courses from the UcanWeb Class Schedule. Step two: View the textbook lists for those classes (campus bookstore prices). 
Step three: Calculate the average textbook price by averaging the new-book and used-book prices. Step four: Multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students could potentially have saved $1,091,879.94. Conclusion: (1) The result confirms what we already know: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook saving from adopting an OER course varies depending on the size of the college and the number of enrolled students.
Keywords: textbook savings, open textbooks, textbook costs assessment, open access
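The four-step methodology above amounts to a simple aggregation, which can be sketched as follows. The course records, field names, and prices here are hypothetical illustrations, not SUNY Canton's actual data:

```python
def estimate_oer_savings(courses):
    """Estimate potential savings if OER replaced commercial textbooks.

    Each course record carries the campus-bookstore new/used prices and the
    enrolment; the average of new and used prices is taken as the
    per-student textbook cost (steps 3 and 4 of the methodology).
    """
    total = 0.0
    for c in courses:
        avg_price = (c["new_price"] + c["used_price"]) / 2  # step three
        total += avg_price * c["enrolled"]                   # step four
    return total

# Hypothetical non-OER courses (steps 1-2 would pull these from the
# class schedule and the bookstore listings).
courses = [
    {"new_price": 150.00, "used_price": 110.00, "enrolled": 30},
    {"new_price": 90.00,  "used_price": 70.00,  "enrolled": 25},
]
print(estimate_oer_savings(courses))  # 30*130 + 25*80 = 5900.0
```

Summing this quantity over all non-OER courses in a semester yields the campus-wide estimate reported in the findings.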
Procedia PDF Downloads 75
11235 A Phenomenological Method Based on Professional Descriptions of Community-of-Practice Members to Scientifically Determine the Level of Child Psycho-Social-Emotional Development
Authors: Gianni Jacucci
Abstract:
Alfred Schutz (1932), at the very turn of phenomenology's attention toward the social sciences, stated that successful communication of meanings requires the sharing of “sedimentations” of previous meanings. Börje Langefors (1966), at the very beginning of the social studies of information systems, stated that a common professional basis is required for a correct sharing of meanings, e.g., “standardised accounting data among accountants”. Harold Garfinkel (1967), at the very beginning of ethnomethodology, stated that the accounting of social events must be carried out in the same language used by the actors of those events in managing their practice. Community of practice: we advocate professional descriptions by community-of-practice members to scientifically determine the level of child psycho-social-emotional development. Our approach consists of an application to the human sciences of Husserl's phenomenological philosophy, using a method reminiscent of Giorgi's DPM in psychology. Husserl's requirement of "epoché," which involves eliminating prejudices from the minds of observers, is met through "concept cleaning," achieved by consistently sharing disciplinary concepts within the community of practice. Meanwhile, the absence of subjective bias is ensured by the meticulous attention to detail in the members' professional expertise. Our approach shows promise for accurately assessing many other properties through detailed professional descriptions by community-of-practice members.
Keywords: scientific rigour, descriptive phenomenological method, sedimentation of meanings, community of practice
Procedia PDF Downloads 57
11234 Cocrystal of Mesalamine for Enhancement of Its Biopharmaceutical Properties, Utilizing Supramolecular Chemistry Approach
Authors: Akshita Jindal, Renu Chadha, Maninder Karan
Abstract:
Supramolecular chemistry has gained recent eminence in a flurry of research documenting the formation of new crystalline forms with potentially advantageous characteristics. Mesalamine (5-aminosalicylic acid) belongs to the anti-inflammatory class of drugs and is used to treat ulcerative colitis and Crohn's disease. Unfortunately, mesalamine suffers from poor solubility and therefore very low bioavailability. This work focuses on the preparation and characterization of a cocrystal of mesalamine with nicotinamide (MNIC), a coformer with GRAS status. Cocrystallisation was achieved by solvent-drop grinding in a 1:1 stoichiometric ratio using acetonitrile as the solvent, and the product was characterized by various techniques including DSC (Differential Scanning Calorimetry), PXRD (Powder X-ray Diffraction), and FTIR (Fourier Transform Infrared Spectroscopy). The cocrystal showed a single endothermic transition (254°C), different from the melting peaks of both the drug (288°C) and the coformer (128°C), indicating the formation of a new solid phase. PXRD patterns and FTIR spectra of the cocrystal distinct from those of the individual components confirm the formation of a new phase. Enhancement in the apparent solubility study and the intrinsic dissolution study showed the effectiveness of this cocrystal. Further improvement in the pharmacokinetic profile has also been observed, with a two-fold increase in bioavailability. To conclude, our results show that the application of nicotinamide as a coformer is a viable approach to preparing cocrystals of a potential drug molecule with limited solubility.
Keywords: cocrystal, mesalamine, nicotinamide, solvent drop grinding
Procedia PDF Downloads 177
11233 Pricing Strategy in Marketing: Balancing Value and Profitability
Authors: Mohsen Akhlaghi, Tahereh Ebrahimi
Abstract:
Pricing strategy is a vital component in achieving the balance between customer value and business profitability. The aim of this study is to provide insights into the factors, techniques, and approaches involved in pricing decisions. The study utilizes a descriptive approach to discuss various aspects of pricing strategy in marketing, drawing on concepts from market research, consumer psychology, competitive analysis, and adaptability. This approach presents a comprehensive view of pricing decisions. The result of this exploration is a framework that highlights key factors influencing pricing decisions. The study examines how factors such as market positioning, product differentiation, and brand image shape pricing strategies. Additionally, it emphasizes the role of consumer psychology in understanding price elasticity, perceived value, and price-quality associations that influence consumer behavior. Various pricing techniques, including charm pricing, prestige pricing, and bundle pricing, are mentioned as methods to enhance sales by influencing consumer perceptions. The study also underscores the importance of adaptability in responding to market dynamics through regular price monitoring, dynamic pricing, and promotional strategies. It recognizes the role of digital platforms in enabling personalized pricing and dynamic pricing models. In conclusion, the study emphasizes that effective pricing strategies strike a balance between customer value and business profitability, ultimately driving sales, enhancing brand perception, and fostering lasting customer relationships.
Keywords: business, customer benefits, marketing, pricing
Procedia PDF Downloads 79
11232 Characteristics and Flight Test Analysis of a Fixed-Wing UAV with Hover Capability
Authors: Ferit Çakıcı, M. Kemal Leblebicioğlu
Abstract:
In this study, the characteristics and flight tests of a fixed-wing unmanned aerial vehicle (UAV) with hover capability are analyzed. The base platform is a conventional airplane with throttle, aileron, elevator and rudder control surfaces, which inherently allows level flight. This aircraft is then mechanically modified by integrating vertical propellers, as in multirotors, in order to provide hover capability. The aircraft is modeled using basic aerodynamic principles, and linear models are constructed using small perturbation theory about trim conditions. Flight characteristics are analyzed using the state-space approach of linear control theory. Distinctive features of the aircraft are discussed on the basis of the analysis results, with comparison to conventional aircraft platform types. A hybrid control system is proposed in order to exploit these unique flight characteristics. The main approach includes the design of different controllers for different modes of operation and a hand-over logic that makes flight in an enlarged flight envelope viable. Simulation tests performed on the mathematical models verify the proposed algorithms. Flight tests conducted in the real world demonstrated the applicability of the proposed methods in exploiting both the fixed-wing and rotary-wing characteristics of the aircraft, which provide agility, survivability and functionality.
Keywords: flight test, flight characteristics, hybrid aircraft, unmanned aerial vehicle
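The hand-over logic between the fixed-wing and hover modes can be sketched as a simple mode selector. The threshold rule and mode names below are illustrative assumptions, not the authors' actual control design:

```python
def select_flight_mode(airspeed, stall_speed, hover_requested):
    """Pick the active controller set for the hybrid UAV.

    Below stall speed the wing cannot generate enough lift, so the
    vertical propellers (multirotor-style controllers) must take over;
    otherwise the conventional fixed-wing surfaces remain in command.
    """
    if hover_requested or airspeed < stall_speed:
        return "hover"       # vertical-propeller attitude/position loops
    return "fixed_wing"      # throttle/aileron/elevator/rudder loops

print(select_flight_mode(airspeed=18.0, stall_speed=12.0, hover_requested=False))  # fixed_wing
print(select_flight_mode(airspeed=5.0,  stall_speed=12.0, hover_requested=False))  # hover
```

In practice such a selector would also blend controller outputs during the transition to avoid transients, which is where the enlarged flight envelope is actually exercised.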
Procedia PDF Downloads 329
11231 Optimized Design, Material Selection, and Improvement of Liners, Mother Plate, and Stone Box of a Direct Charge Transfer Chute in a Sinter Plant: A Computational Approach
Authors: Anamitra Ghosh, Neeladri Paul
Abstract:
The present work investigates material combinations in order to arrive at an optimized design of the liner-mother plate arrangement and of the stone box, such that it has low cost and high weldability, is sufficiently capable of withstanding the increased corrosive shear and bending loads, and has a reduced thermal expansion coefficient at temperatures close to 1000 degrees Celsius. All the above factors have been examined using a computational approach via ANSYS thermo-structural analysis, commercial software that uses the Finite Element Method, to determine the response of simulated design specimens of the liner-mother plate arrangement and the stone box to varied bending, shear, and thermal loads, as well as the temperature gradients developed across various surfaces of the designs. Finally, optimized structural designs of the liner-mother plate arrangement and of the stone box, with improved materials and better structural and thermal properties, are selected via a trial-and-error method. The final improved design is therefore expected to enhance the overall life and reliability of a Direct Charge Transfer Chute, which transfers and segregates the hot sinter onto the cooler in a sinter plant.
Keywords: shear, bending, thermal, sinter, simulated, optimized, charge, transfer, chute, expansion, computational, corrosive, stone box, liner, mother plate, arrangement, material
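Before full thermo-structural runs, candidates for such a screening can be ranked by the constrained thermal stress, σ = E·α·ΔT. This is only a first-pass sketch; the property values and candidate names below are illustrative placeholders, not the materials actually evaluated in the study:

```python
def thermal_stress(youngs_modulus_gpa, alpha_per_k, delta_t_k):
    """Stress (MPa) in a fully constrained plate heated by delta_t_k.

    sigma = E * alpha * delta_T; E is converted from GPa to MPa.
    """
    return youngs_modulus_gpa * 1e3 * alpha_per_k * delta_t_k

# Hypothetical liner candidates: (E in GPa, coefficient of thermal expansion in 1/K)
candidates = {
    "steel_A": (200.0, 12e-6),
    "steel_B": (190.0, 11e-6),
}
delta_t = 975.0  # ~1000 degC service temperature minus ambient, illustrative
ranked = sorted(candidates, key=lambda m: thermal_stress(*candidates[m], delta_t))
print(ranked[0])  # candidate with the lowest constrained thermal stress
```

A lower constrained thermal stress motivates the paper's preference for reduced thermal expansion coefficients, though the full selection also weighs cost, weldability, and the FEM load response.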
Procedia PDF Downloads 109
11230 Synthesis, Characterization and Photocatalytic Applications of Ag-Doped SnO₂ Nanoparticles by Sol-Gel Method
Authors: M. S. Abd El-Sadek, M. A. Omar, Gharib M. Taha
Abstract:
In recent years, the photocatalytic degradation of various kinds of organic and inorganic pollutants using semiconductor powders as photocatalysts has been extensively studied. Owing to their relatively high photocatalytic activity, biological and chemical stability, low cost, non-toxicity and long stable life, tin oxide materials have been widely used as catalysts in chemical reactions, including the synthesis of vinyl ketone, the oxidation of methanol, and so on. Tin oxide (SnO₂), with a rutile-type crystalline structure, is an n-type wide-band-gap (3.6 eV) semiconductor that presents a favorable combination of chemical, electronic and optical properties, making it advantageous in several applications. In the present work, SnO₂ nanoparticles were synthesized at room temperature by the sol-gel process and thermohydrolysis of SnCl₂ in isopropanol, with the crystallite size controlled through calcination. The synthesized nanoparticles were characterized by XRD, TEM, FT-IR, and UV-visible spectroscopic techniques. The crystalline structure and grain size of the synthesized samples were analyzed by X-ray diffraction (XRD), and the XRD patterns confirmed the presence of the tetragonal SnO₂ phase. In this study, methylene blue degradation was tested using SnO₂ nanoparticles (at different calcination temperatures) as a photocatalyst under sunlight as the irradiation source. The results showed that the highest degradation of methylene blue dye was obtained using the SnO₂ photocatalyst calcined at 800 °C. The operational parameters were optimized to find the conditions that result in complete removal of organic pollutants from aqueous solution. It was found that the degradation of dyes depends on several parameters, such as irradiation time, initial dye concentration, catalyst dose, and the presence of metals such as silver as a dopant and its concentration. 
Percent degradation increased with irradiation time. The degradation efficiency decreased as the initial concentration of the dye increased. The degradation efficiency increased with catalyst dose up to a certain level; on further increasing the SnO₂ photocatalyst dose, the degradation efficiency decreased. The degradation efficiency obtained from pure SnO₂ was also compared with that of SnO₂ doped with different percentages of Ag.
Keywords: SnO₂ nanoparticles, sol-gel method, photocatalytic applications, methylene blue, degradation efficiency
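Percent degradation in photocatalysis studies of this kind is conventionally computed from the dye concentration (or absorbance) before and after irradiation. A minimal sketch, with illustrative concentration values rather than the study's measured data:

```python
def percent_degradation(c0, c_t):
    """Degradation efficiency (%) from initial concentration c0 and
    concentration c_t after irradiation time t:  (c0 - c_t) / c0 * 100."""
    return (c0 - c_t) / c0 * 100.0

# Illustrative: 10 mg/L methylene blue reduced to 1.5 mg/L after irradiation
print(percent_degradation(10.0, 1.5))  # 85.0
```

Plotting this quantity against irradiation time, initial concentration, catalyst dose, and Ag loading reproduces the parameter study described in the abstract.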
Procedia PDF Downloads 152
11229 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit
Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira
Abstract:
Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, even when good homogenization work has been carried out before feeding the processing plants, an operation with highly variable quality and performance is to be expected. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined and the types of raw material then grouped by them, finally yielding a reference with operational settings for each group. Associating the physical and chemical parameters of a unit operation with a benchmark, or even an optimal reference for metallurgical recovery and product quality, translates into reduced production costs, optimization of the mineral resource, and greater stability in the subsequent processes of the production chain that uses the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with the use of machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable alternative for the standardization and improvement of mineral processing units. Clustering methods based on Decision Trees and K-Means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to present the outputs of the algorithm. The results were measured through the average time to adjust and stabilize the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured. 
The results were promising, with a reduction in the adjustment and stabilization time when starting to process a new ore pile, as well as attainment of the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and optimization of the life of the mineral deposit.
Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing
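The grouping step can be illustrated with a plain K-Means pass over blend features. The feature names, pile values, and initial centroids below are hypothetical (the study itself used standard Decision Tree and K-Means implementations with criteria defined by the process team):

```python
def kmeans(points, centroids, iters=20):
    """Plain K-Means on small 2-D feature vectors.

    Deterministic given the initial centroids; returns the final centroids
    and the cluster index assigned to each point.
    """
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid for each point
        labels = [min(range(len(centroids)), key=lambda k: dist2(p, centroids[k]))
                  for p in points]
        # Update step: move each centroid to the mean of its members
        for k in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centroids[k] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return centroids, labels

# Hypothetical blend features per ore pile: (grade %, gangue index)
piles = [(1.1, 0.2), (1.0, 0.3), (2.9, 0.8), (3.1, 0.9)]
cents, labels = kmeans(piles, centroids=[(1.0, 0.2), (3.0, 0.9)])
print(labels)  # [0, 0, 1, 1]
```

Each resulting cluster would then be paired with its benchmark operational settings, so a new pile is matched to a cluster and started from that cluster's reference adjustments.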
Procedia PDF Downloads 143
11228 Effect of Mach Number on Gust-Airfoil Interaction Noise
Authors: ShuJiang Jiang
Abstract:
The interaction of turbulence with an airfoil is an important noise source in many engineering fields, including helicopters, turbofans, and contra-rotating open rotor engines, where turbulence generated in the wake of upstream blades interacts with the leading edge of downstream blades and produces aerodynamic noise. One approach to studying turbulence-airfoil interaction noise is to model the oncoming turbulence as harmonic gusts. A compact noise source produces a dipole-like sound directivity pattern. However, when the acoustic wavelength is much smaller than the airfoil chord length, the airfoil must be treated as a non-compact source; the gust-airfoil interaction becomes more complicated and produces multiple lobes in the radiated sound directivity. Capturing the short acoustic wavelength is a challenge for numerical simulations. In this work, simulations are performed for gust-airfoil interaction at different Mach numbers, using a high-fidelity direct Computational AeroAcoustics (CAA) approach based on a spectral/hp element method, verified against a CAA benchmark case. It is found that the squared sound pressure varies approximately as the 5th power of the Mach number, changing slightly with observer location. This scaling law can give a better sound prediction than flat-plate theory for thicker airfoils. In addition, another prediction method, based on flat-plate theory and CAA simulation, is proposed to give better predictions than the scaling law for thicker airfoils.
Keywords: aeroacoustics, gust-airfoil interaction, CFD, CAA
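The reported ~M⁵ behaviour of the squared sound pressure gives a quick extrapolation rule between Mach numbers. A minimal sketch; the reference values are illustrative, and the exponent is the approximate one stated in the abstract:

```python
def scale_p2(p2_ref, mach_ref, mach):
    """Scale squared sound pressure using the approximate M^5 law:
    p^2 / p^2_ref = (M / M_ref)^5."""
    return p2_ref * (mach / mach_ref) ** 5

# Illustrative: doubling the Mach number raises p^2 by 2^5 = 32x (~15 dB)
print(scale_p2(1.0, 0.3, 0.6))  # 32.0
```

Such power-law extrapolation is only as good as the fitted exponent; the abstract notes it drifts slightly with observer location and airfoil thickness, which is what the hybrid flat-plate/CAA prediction method is meant to correct.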
Procedia PDF Downloads 78
11227 Modeling Operating Theater Scheduling and Configuration: An Integrated Model in Health-Care Logistics
Authors: Sina Keyhanian, Abbas Ahmadi, Behrooz Karimi
Abstract:
We present a multi-objective binary programming model that simultaneously considers the scheduling of surgical cases among operating rooms and the configuration of surgical instruments in limited-capacity hospital trays. Many mathematical models have been developed in the literature to address different challenges in health-care logistics, such as assigning operating rooms, leveling beds, etc. But what happens inside the operating rooms, along with the inventory management of the instruments required for various operations and their integration with surgical scheduling, has been poorly discussed. Our model minimizes movements between trays during a surgery, which recalls the famous cell formation problem in group technology. This assumption could also make a major contribution to robotic surgeries. The tray configuration problem, which consumes the surgical instruments requirement plan (SIRP) and the sequence of surgical procedures based on required instruments (SIRO), is nested inside a bin packing problem. This modeling approach helps us understand that most same-output solutions will not necessarily be identical when it comes to the rearrangement of surgeries among rooms. A numerical example is solved via a proposed nested simulated annealing (SA) optimization approach, which provides insights into how various configurations inside a solution can alter the optimal condition.
Keywords: health-care logistics, hospital tray configuration, off-line bin packing, simulated annealing optimization, surgical case scheduling
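The bin-packing view of tray configuration can be illustrated with a first-fit heuristic. The instrument space requirements and tray capacity below are hypothetical, and this greedy pass is only a baseline; the paper itself solves the nested problem with simulated annealing:

```python
def first_fit(sizes, capacity):
    """Pack instrument space requirements into trays of fixed capacity.

    First-fit: place each item in the first tray with room, opening a new
    tray only when none fits.
    """
    trays = []  # each tray is a list of packed sizes
    for s in sizes:
        for tray in trays:
            if sum(tray) + s <= capacity:
                tray.append(s)
                break
        else:
            trays.append([s])
    return trays

# Hypothetical instrument space requirements for one surgical sequence
print(first_fit([4, 3, 5, 2, 4], capacity=8))  # [[4, 3], [5, 2], [4]]
```

A metaheuristic such as SA then perturbs both the packing and the surgery-to-room assignment, which is what lets same-output schedules differ in tray movement counts.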
Procedia PDF Downloads 282
11226 Preventative Programs for At-Risk Families of Child Maltreatment: Using Home Visiting and Intergenerational Relationships
Authors: Kristina Gordon
Abstract:
One in three children in the United States is the subject of a maltreatment investigation, and about one in nine children has a substantiated investigation. Home visiting is one of several preventative strategies rooted in an early childhood approach that fosters maternal, infant, and early childhood health, protection, and growth. In the United States, 88% of states report administering home visiting programs or state-designed models. The purpose of this study was to conduct a systematic review of home visiting programs in the United States focused on the prevention of child abuse and neglect. This systematic review included 17 articles and found that most of the studies reported optimistic results. Common across studies was program content related to (1) typical child development, (2) parenting education, and (3) child physical health. Although several factors common to home visiting and parenting interventions have been identified, no research has examined the common components of manualized home visiting programs to prevent child maltreatment. Child maltreatment can be addressed with home visiting programs that have evidence-based components and cultural adaptations, increasing prevention by assisting families in tackling the risk factors they face. An innovative approach to child maltreatment prevention is bringing together at-risk families and the aging community. This approach was prompted by the fact that existing home visitation programs focus only on improving skill sets and provide temporary relationships. It can give families the opportunity to build a relationship with an aging individual who can share wisdom, skills, compassion, love, and guidance, supporting families in their well-being and decreasing the occurrence of child maltreatment. Families would be identified if they experience any of the risk factors, including parental substance abuse, parental mental illness, domestic violence, and poverty. 
Families would also be identified as at risk if they lack supportive relationships, such as grandparents or relatives. Families would be referred by local agencies, such as medical clinics, hospitals, and schools, that interact with families regularly. The aging community would be recruited at local housing communities and community centers. An aging individual would be identified by the elderly community when there is a need or interest in a relationship by or for the individual. Cultural considerations would be made when assessing compatibility between families and aging individuals. The pilot program will consist of a small group of participants, keeping the results manageable for evaluating the efficacy of the program. The pilot will include pre- and post-surveys to evaluate the impact of the program. From the results, data would be generated to determine the efficacy, as well as the adequacy of the details, of the pilot. The pilot would also be evaluated on whether families were referred to Child Protective Services during the pilot, as this relates to the goal of decreasing child maltreatment. Ideally, the findings will show a decrease in child maltreatment and an increase in family well-being for participants.
Keywords: child maltreatment, home visiting, neglect, preventative, abuse
Procedia PDF Downloads 116
11225 Overview on Sustainable Coastal Protection Structures
Authors: Suresh Reddi, Mathew Leslie, Vishnu S. Das
Abstract:
Sustainable design is a prominent concept across all sectors of engineering, and its importance is widely recognized within the Arabian Gulf region. Despite this, sustainable or soft engineering options are not widely deployed in coastal engineering projects, and a preference for 'hard engineering' solutions remains. The concept of soft engineering lies in 'working together' with nature to manage the coastline. This approach allows hard engineering options, such as breakwaters or sea walls, to be minimized or even eliminated altogether. Hard structures provide a firm barrier to wave energy or flooding, but in doing so they often have a significant impact on the natural processes of the coastline. This may affect the area locally or impact neighboring zones. In addition, they often have a negative environmental impact and may create a sense of disconnect between the marine environment and local users. Soft engineering options seek to protect the coastline by working in harmony with the natural processes of sediment transport and sediment budget. They often consider new habitat creation and the creation of usable spaces that increase the sense of connection with nature. Where appropriately deployed, soft engineering options can provide a low-maintenance, aesthetically valued, natural line of coastal protection. This paper provides an overview of the following: the widely accepted soft engineering practices across the world; how this approach has been considered by Ramboll in some recent projects in the Middle East and Asia; challenges and barriers to using soft engineering options in the region; and the way forward towards more widespread adoption.
Keywords: coastline, hard engineering, low maintenance, soft engineering options
Procedia PDF Downloads 138
11224 Valence and Arousal-Based Sentiment Analysis: A Comparative Study
Authors: Usama Shahid, Muhammad Zunnurain Hussain
Abstract:
This research paper presents a comprehensive analysis of a sentiment analysis approach that employs valence and arousal as its foundational pillars, in comparison to traditional techniques. Sentiment analysis is an indispensable task in natural language processing that involves the extraction of opinions and emotions from textual data. The valence and arousal dimensions, representing the positivity/negativity and intensity of emotions, respectively, enable the creation of four quadrants, each representing a specific emotional state. The study seeks to determine the impact of utilizing these quadrants to identify distinct emotional states on the accuracy and efficiency of sentiment analysis, in comparison to traditional techniques. The results reveal that the valence- and arousal-based approach outperforms other approaches, particularly in identifying nuanced emotions that may be missed by conventional methods. The study's findings are crucial for applications such as social media monitoring and market research, where the accurate classification of emotions and opinions is paramount. Overall, this research highlights the potential of using valence and arousal as a framework for sentiment analysis and offers invaluable insights into the benefits of incorporating specific types of emotions into the analysis. These findings have significant implications for researchers and practitioners in the field of natural language processing, as they provide a basis for the development of more accurate and effective sentiment analysis tools.
Keywords: sentiment analysis, valence and arousal, emotional states, natural language processing, machine learning, text analysis, sentiment classification, opinion mining
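The four quadrants can be sketched as a simple mapping from (valence, arousal) scores to an emotional state. The state labels below follow a common convention for the valence-arousal plane and are not necessarily the paper's exact taxonomy:

```python
def quadrant(valence, arousal):
    """Map a (valence, arousal) pair in [-1, 1] to an emotion quadrant.

    Valence encodes positivity/negativity; arousal encodes intensity.
    """
    if valence >= 0:
        return "excited/happy" if arousal >= 0 else "calm/content"
    return "angry/stressed" if arousal >= 0 else "sad/depressed"

print(quadrant(0.7, 0.8))    # excited/happy
print(quadrant(-0.6, -0.4))  # sad/depressed
```

A sentiment model would first regress the two scores from text, then apply a mapping like this; the quadrant label is what distinguishes, say, anger from sadness, both of which a plain positive/negative classifier collapses.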
Procedia PDF Downloads 101
11223 Developing Proof Demonstration Skills in Teaching Mathematics in the Secondary School
Authors: M. Rodionov, Z. Dedovets
Abstract:
The article describes a theoretical concept for teaching secondary school students proof demonstration skills in mathematics. It describes in detail the different levels of mastery of the concept of proof, which correspond to Piaget's idea of three distinct and progressively more complex stages in the development of human reflection. Lessons for each level contain a specific combination of visual-figurative components and deductive reasoning. It is vital at the transition point between levels to carefully and rigorously recalibrate teaching to reflect the development of more complex reflective understanding. This can apply even within the same age range, since students develop at different speeds and to different potential. The authors argue that this requires an aware and adaptive approach to lessons to reflect this complexity and variation. The authors also contend that effective teaching which enables students to properly understand the implementation of proof arguments must develop specific competences. These are: understanding the importance of completeness and generality in making a valid argument; being task focused; having an internalised locus of control; and being flexible in approach and evaluation. These criteria must be correlated with the systematic application of the corresponding methodologies most likely to achieve success. The particular pedagogical decisions made to deliver this objective are illustrated by concrete examples from existing secondary school mathematics courses. The proposed theoretical concept formed the basis for the development of methodological materials which have been tested in 47 secondary schools.
Keywords: education, teaching of mathematics, proof, deductive reasoning, secondary school
Procedia PDF Downloads 242
11222 The Derivation of a Four-Strain Optimized Mohr's Circle for Use in Experimental Reinforced Concrete Research
Authors: Edvard P. G. Bruun
Abstract:
One of the best ways of improving our understanding of reinforced concrete is through large-scale experimental testing. The gathered information is critical in making inferences about structural mechanics and deriving the mathematical models that are the basis for finite element analysis programs and design codes. An effective way of measuring the strains across a region of a specimen is by using a system of surface mounted Linear Variable Differential Transformers (LVDTs). While a single LVDT can only measure the linear strain in one direction, by combining several measurements at known angles a Mohr’s circle of strain can be derived for the whole region under investigation. This paper presents a method that can be used by researchers, which improves the accuracy and removes experimental bias in the calculation of the Mohr’s circle, using four rather than three independent strain measurements. Obtaining high quality strain data is essential, since knowing the angular deviation (shear strain) and the angle of principal strain in the region are important properties in characterizing the governing structural mechanics. For example, the Modified Compression Field Theory (MCFT) developed at the University of Toronto, is a rotating crack model that requires knowing the direction of the principal stress and strain, and then calculates the average secant stiffness in this direction. But since LVDTs can only measure average strains across a plane (i.e., between discrete points), localized cracking and spalling that typically occur in reinforced concrete, can lead to unrealistic results. To build in redundancy and improve the quality of the data gathered, the typical experimental setup for a large-scale shell specimen has four independent directions (X, Y, H, and V) that are instrumented. The question now becomes, which three should be used? The most common approach is to simply discard one of the measurements. 
The problem is that this can produce drastically different answers, depending on the three strain values that are chosen. To overcome this experimental bias, and to avoid simply discarding valuable data, a more rigorous approach is to make use of all four measurements. This paper presents the derivation of a method to draw what is effectively a Mohr's circle of 'best fit', which optimizes the circle by using all four independent strain values. The four-strain optimized Mohr's circle approach has been utilized to process data from recent large-scale shell tests at the University of Toronto (Ruggiero, Proestos, and Bruun), where analysis of the test data has shown that the traditional three-strain method can lead to widely different results. The paper derives the method and shows its application in the context of two reinforced concrete shells tested in pure torsion. In general, the constitutive models and relationships that characterize reinforced concrete are only as good as the experimental data that is gathered; ensuring that a rigorous and unbiased approach exists for calculating the Mohr's circle of strain during an experiment is of utmost importance to the structural research community.
Keywords: reinforced concrete, shell tests, Mohr's circle, experimental research
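The four-strain fit described above can be sketched as a least-squares problem, because the normal strain measured at an angle θ is linear in the three unknowns (εx, εy, γxy). The following is a minimal illustrative sketch, not the paper's actual derivation, and it assumes for concreteness that the four LVDT directions X, Y, H, and V lie at 0°, 90°, 45°, and 135°:

```python
import numpy as np

def fit_mohr_circle(angles_deg, strains):
    """Least-squares fit of the plane strain state (eps_x, eps_y, gamma_xy)
    from normal strains measured at several known angles.

    Uses the strain transformation
        eps(theta) = (ex + ey)/2 + (ex - ey)/2 * cos(2t) + gxy/2 * sin(2t),
    which is linear in (ex, ey, gxy), so four (or more) readings give an
    overdetermined system solved in the least-squares sense.
    """
    th = np.radians(np.asarray(angles_deg, dtype=float))
    # Design matrix rows: [(1 + cos 2t)/2, (1 - cos 2t)/2, (sin 2t)/2]
    A = np.column_stack([(1 + np.cos(2 * th)) / 2,
                         (1 - np.cos(2 * th)) / 2,
                         np.sin(2 * th) / 2])
    (ex, ey, gxy), *_ = np.linalg.lstsq(A, np.asarray(strains, float), rcond=None)
    centre = (ex + ey) / 2                      # centre of the Mohr's circle
    radius = np.hypot((ex - ey) / 2, gxy / 2)   # radius = max shear strain / 2
    return ex, ey, gxy, centre, radius
```

With exactly consistent readings the fit reproduces the strain state; with noisy or locally disturbed readings it returns the best-fit circle instead of forcing the analyst to discard one measurement.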
Procedia PDF Downloads 235
11221 The Role of Supply Chain Agility in Improving Manufacturing Resilience
Authors: Maryam Ziaee
Abstract:
This research proposes a new approach that provides an opportunity for manufacturing companies to produce large volumes of products that meet their prospective customers' tastes, needs, and expectations while simultaneously enabling manufacturers to increase their profit. Mass customization is the production of products or services that meet each individual customer's desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of a good's design, assembly, sale, and delivery status, and is classified into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options could range from the design phase to the manufacturing phase, or even to methods of delivery. Mass customization values customers' tastes, but that is only one side of client satisfaction; the other side is the company's fast, responsive delivery. This brings in the concept of agility, which is the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customer satisfaction, companies need to be quick in responding to their customers' demands, which highlights the significance of agility. This research offers a method that integrates mass customization and fast production in manufacturing industries. It is built upon the hypothesis that the key to success in being agile in mass customization is to forecast demand, cooperate with suppliers, and control inventory. Therefore, the significance of the supply chain (SC) is most pertinent at this stage.
Since SC behavior is dynamic and changes constantly, companies have to apply a forecasting technique to identify the changes associated with SC behavior in order to respond properly to any unwelcome events. System dynamics, utilized in this research, is a simulation approach that provides a mathematical model of the relations among different variables in order to understand, control, and forecast SC behavior. The final stage is delayed differentiation, the production strategy considered in this research. In this approach, the main product platform is produced and stocked, and when the company receives an order from a customer, a specific customized feature is assigned to this platform and the customized product is created. The main research question is to what extent applying system dynamics to the prediction of SC behavior improves the agility of mass customization. This research is built upon a qualitative approach to bring about richer, deeper, and more revealing results. The data are collected through interviews and analyzed with NVivo software. The proposed model offers numerous benefits, such as a reduction in the number of product inventories and their storage costs, improvement in the resilience of companies' responses to their clients' needs and tastes, increased profits, and the optimization of productivity with a minimum level of lost sales.
Keywords: agility, manufacturing, resilience, supply chain
Procedia PDF Downloads 91
11220 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variations of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each of which is processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is then used as the training vector of a global SVM classifier.
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (the resulting FFT matrix has a 68% smaller dimension than the original signal), it maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10% and the reduction in computational cost denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
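To illustrate why an FFT-based stage can stand in for an IIR filter bank, the sketch below band-passes a signal by zeroing FFT coefficients outside a sub-band and inverting the transform. This is a minimal stand-in for the idea, not the paper's actual pipeline, which feeds the FFT coefficient matrix directly to the SBCSP chain:

```python
import numpy as np

def fft_bandpass(sig, fs, f_lo, f_hi):
    """Zero-phase band-pass via spectral masking: forward FFT, zero the
    bins outside [f_lo, f_hi] Hz, inverse FFT back to the time domain."""
    X = np.fft.rfft(sig)                            # one-sided spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)   # bin centre frequencies
    mask = (freqs >= f_lo) & (freqs <= f_hi)        # keep only the sub-band
    return np.fft.irfft(X * mask, n=len(sig))
```

For each of the 33 SBCSP sub-bands, the same coefficient matrix computed once can simply be masked with a different `(f_lo, f_hi)` pair, which is the source of the computational saving over running 33 separate IIR filters.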
Procedia PDF Downloads 128
11219 Landscape Classification in North of Jordan by Integrated Approach of Remote Sensing and Geographic Information Systems
Authors: Taleb Odeh, Nizar Abu-Jaber, Nour Khries
Abstract:
The southern part of the Wadi Al Yarmouk catchment area covers the north of Jordan. It lies within latitudes 32° 20' to 32° 45' N and longitudes 35° 42' to 36° 23' E and has an area of about 1426 km². It has high-relief topography, with elevations varying between 50 and 1100 meters above sea level. These variations in topography produce different landform units, climatic zones, land covers, and plant species, and as a result different landscape units exist in the region. Spatial planning is a major challenge in such a vital area for Jordan and cannot be achieved without delineating these landscape units. An integrated approach of remote sensing and geographic information systems (GIS) is an optimal tool for investigating and mapping the landscape units of such a complex area. Remote sensing has the capability to collect different land surface data over large landscape areas, accurately and at different time periods. GIS has the ability to store these land surface data, analyze them spatially, and present them in the form of professional maps. We generated geo-land-surface data that include land cover, rock units, soil units, plant species, and a digital elevation model using an ASTER image and Google Earth, while the spatial analysis of the geo-data was done with ArcGIS 10.2 software. We found that there are twenty-two different landscape units in the study area, which have to be considered in any spatial planning in order to avoid environmental problems.
Keywords: landscape, spatial planning, GIS, spatial analysis, remote sensing
Procedia PDF Downloads 528
11218 A Comparative Analysis of the Factors Determining Improvement and Effectiveness of Mediation in Family Matters Regarding Child Protection in Australia and Poland
Authors: Beata Anna Bronowicka
Abstract:
Purpose: The purpose of this paper is to improve the effectiveness of mediation in family matters regarding child protection in Australia and Poland. Design/methodology/approach: The methodological approach is phenomenology. Two phenomenological methods of data collection were used in this research: 1) doctrinal research and 2) interviews. The doctrinal research forms the basis for obtaining information on mediation and the date of introduction of this alternative dispute resolution method into the Australian and Polish legal systems. No less important was the analysis of the legislation and legal doctrine in the field of mediation in family matters, especially child protection. In the second method, the data were collected by semi-structured interview. The collected data were translated from Polish to English and analysed using a software program. Findings: The rights of children in the context of mediation in Australia and Poland differ from the recommendations of the UN Committee on the Rights of the Child, which require that children be included in all matters that concern them. There is room for improvement in the mediation process by strengthening children's rights in mediation between parents in matters related to children. Children should have the right to express their opinion, as they do in the court process. Another challenge with mediation is better understanding the role of professionals in mediation, such as lawyers and mediators. Originality/value: The research is anticipated to be of particular benefit to parents, society as a whole, and professionals working in mediation. These results may also be helpful during further legislative initiatives in this area.
Keywords: mediation, family law, children's rights, Australian and Polish family law
Procedia PDF Downloads 78
11217 Lattice Boltzmann Simulation of Fluid Flow and Heat Transfer Through Porous Media by Means of Pore-Scale Approach: Effect of Obstacles Size and Arrangement on Tortuosity and Heat Transfer for a Porosity Degree
Authors: Annunziata D’Orazio, Arash Karimipour, Iman Moradi
Abstract:
The size and arrangement of the obstacles in a porous medium have an influential effect on fluid flow and heat transfer, even at the same porosity. In this regard, in the present study, several different numbers of obstacles, in both regular and staggered arrangements at the same porosity, have been simulated in a channel, in order to compare the effects of the staggered and regular arrangements, as well as of different numbers of obstacles at the same porosity, on fluid flow and heat transfer. The Single Relaxation Time Lattice Boltzmann Method, with the Bhatnagar-Gross-Krook (BGK) approximation and the D2Q9 model, is implemented for the numerical simulation, and the temperature field is modeled through a Double Distribution Function (DDF) approach. Results are presented in terms of velocity and temperature fields, streamlines, percentage of pressure drop, and the Nusselt number of the obstacle walls. A correlation between tortuosity and the Nusselt number of the obstacle walls, for both regular and staggered arrangements, is also proposed. The results illustrate that increasing the number of obstacles, as well as changing their arrangement from regular to staggered at the same porosity, increases both the tortuosity and the Nusselt number of the obstacle walls.
Keywords: lattice Boltzmann method, heat transfer, porous media, pore-scale, porosity, tortuosity
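The single-relaxation-time BGK collision with the D2Q9 lattice mentioned above can be sketched for a single lattice node as follows. This is a minimal illustration of the collision step only, not the study's full channel simulation (no streaming, boundaries, or temperature field):

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities and their quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def feq(rho, u):
    """D2Q9 equilibrium distribution for density rho and velocity u (2-vector)."""
    cu = c @ u                  # c_i . u for each direction
    usq = u @ u
    return w * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def bgk_collide(f, tau):
    """Single-relaxation-time (BGK) collision at one node:
    relax the populations toward local equilibrium with relaxation time tau."""
    rho = f.sum()                               # zeroth moment: density
    u = (f[:, None] * c).sum(axis=0) / rho      # first moment: velocity
    return f + (feq(rho, u) - f) / tau
```

By construction the collision conserves mass and momentum at every node, which is the property the simulation relies on; the D2Q9 weights guarantee that the moments of `feq` reproduce `rho` and `rho*u` exactly.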
Procedia PDF Downloads 87
11216 Intellectual Capital as Resource Based Business Strategy
Authors: Vidya Nimkar Tayade
Abstract:
Introduction: The intellectual capital of an organization is a key factor in its success. Many companies invest huge amounts in their research and development activities. Any innovation is helpful not only to that particular company but also to many other companies, the industry, and mankind as a whole. Companies undertake innovative changes to increase their capital profitability and, indirectly, the pay packages of their employees. The quality of human capital can also improve through such positive changes: employees become more skilled and experienced as a result of such innovations and inventions. To study the growth of intangible capital, the author has reviewed several books and case studies, along with related charts and tables, to come to a conclusion. Case studies are especially important because they are proven and established techniques: they enable students to apply theoretical concepts in real-world situations and give solutions to open-ended problems with multiple potential solutions. There are three different strategies for increasing intellectual capital: the research push (technology push) strategy, the market pull strategy, and the open innovation strategy. Research push strategy: research is undertaken and innovation is achieved on its own; after the invention, the inventor company protects it and finds buyers for it, and in this way the invention is pushed into the market. In this method, research and development are undertaken first and the outcome of the research is commercialized. Market pull strategy: commercial opportunities are identified first and research is concentrated in that particular area, with research undertaken to solve a particular problem. It becomes easier to commercialize this type of invention, because the problem is identified first and research and development activities are carried out in that direction.
Open innovation strategy: in this type of research, two or more companies enter into a research agreement, and the benefits of the outcome are shared by the companies involved. Internal and external ideas and technologies are combined; these ideas are coordinated and then commercialized. Due to globalization, people from outside the company are also invited to undertake research and development activities. The remuneration of the employees of all the companies can increase, and the benefit of commercializing the invention is likewise shared among them. Conclusion: In modern times, not only tangible assets but also intangible assets can be commercialized. The benefits of an invention can be shared by more than one company, competition can become more meaningful, and employees' pay packages can improve. It is the need of the hour to adopt such strategies to benefit employees, competitors, and stakeholders.
Keywords: innovation, protection, management, commercialization
Procedia PDF Downloads 168
11215 Facilitating the Learning Environment as a Servant Leader: Empowering Self-Directed Student Learning
Authors: Thomas James Bell III
Abstract:
Pedagogy is thought of as one's philosophy, theory, or method of teaching. This study examines the science of learning, considering the forced reconsideration of effective pedagogy brought on by the aftermath of the 2020 coronavirus pandemic. With the aid of various technologies, online education holds both challenges and promise for enhancing the learning environment if implemented so as to facilitate student learning. Behaviorism centers around the belief that the instructor is the sage on the classroom stage, using repetition techniques as the primary learning instrument. This approach to pedagogy ascribes complete control of the learning environment to the instructor and works best when students learn by answering questions with immediate feedback. Such structured learning reinforcement tends to guide students' learning without considering learners' independence and individual reasoning, and such activities may inadvertently stifle students' ability to develop critical thinking and self-expression skills. Fundamentally, liberationist pedagogy dismisses the concept that education is merely about students learning things and is concerned instead with the way students learn. The liberationist approach democratizes the classroom by redefining the roles of teacher and student: the teacher is no longer viewed as the sage on the stage but as a guide on the side. This approach views students as creators of knowledge, not as empty vessels to be filled with knowledge. Moreover, students are well suited to decide how best to learn and which areas need improvement. This study explores the classroom instructor as a servant leader in the twenty-first century, which allows students to integrate technology in ways that accommodate more individual learning styles. The researcher will examine the Professional Scrum Master (PSM I) exam pass-rate results of 124 students in six sections of an Agile Scrum course.
The students will be separated into two groups. The first group will follow a structured instructor-led course outlined by a course syllabus. The second group will consist of several small teams (ten or fewer students) of self-led and self-empowered students. The teams will conduct several event meetings, including sprint planning meetings, daily scrums, sprint reviews, and retrospective meetings, throughout the semester, with the instructor facilitating the teams' activities as needed. The methodology for this study will use a compare-means t-test to compare the mean exam pass rate of one group to that of the second group. A one-tailed test (i.e., less than or greater than) will be used, with the null hypothesis that the difference between the groups in the population is zero. The major findings will expand on the pedagogical approach that suggests pedagogy primarily exists in support of teacher-led learning, which has formed the pillars of traditional classroom teaching. In light of the fourth industrial revolution, however, there is a fusion of learning platforms across the digital, physical, and biological worlds, with disruptive technological advancements in areas such as the Internet of Things (IoT), artificial intelligence (AI), 3D printing, robotics, and others.
Keywords: pedagogy, behaviorism, liberationism, flipping the classroom, servant leader instructor, agile scrum in education
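The planned compare-means analysis can be sketched as a pooled two-sample t statistic. This is an illustrative sketch only; the sample values below are made up, since the study's actual group sizes and pass rates are not reported here:

```python
from math import sqrt
from statistics import mean, stdev

def two_sample_t(group_a, group_b):
    """Independent-samples t statistic with pooled variance.

    Under the one-tailed alternative mean_a > mean_b, a positive t favours
    group_a; returns (t, degrees of freedom)."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)          # sample standard deviations
    sp = sqrt(((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2))  # pooled SD
    t = (mean(group_a) - mean(group_b)) / (sp * sqrt(1 / na + 1 / nb))
    return t, na + nb - 2
```

The resulting t would be compared against the one-tailed critical value for the computed degrees of freedom at the chosen significance level.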
Procedia PDF Downloads 142
11214 Effects of Nutrients Supply on Milk Yield, Composition and Enteric Methane Gas Emissions from Smallholder Dairy Farms in Rwanda
Authors: Jean De Dieu Ayabagabo, Paul A. Onjoro, Karubiu P. Migwi, Marie C. Dusingize
Abstract:
This study investigated the effects of feed on milk yield and quality, through feed monitoring and quality assessment, and the consequent enteric methane emissions from smallholder dairy farms in the drier areas of Rwanda, using the Tier II approach over four seasons in three zones, namely Mayaga and peripheral Bugesera (MPB), Eastern Savanna and Central Bugesera (ESCB), and Eastern Plateau (EP). The study was carried out on 186 dairy cows with a mean live weight of 292 kg in three communal cowsheds. The milk quality analysis was carried out on 418 samples. Methane emission was estimated using prediction equations. The data collected were subjected to ANOVA. Dry matter intake was lower (p<0.05) in the long dry season (7.24 kg), with the ESCB zone having the highest value of 9.10 kg, explained by the practice of crop-livestock integrated agriculture in that zone. Dry matter digestibility varied between seasons and zones, ranging from 52.5 to 56.4% across seasons and from 51.9 to 57.5% across zones. The daily protein supply was higher (p<0.05) in the long rain season, at 969 g. The mean daily milk production of lactating cows was 5.6 L, with a lower value (p<0.05) during the long dry season (4.76 L) and with the MPB zone having the lowest value of 4.65 L. The yearly milk production per cow was 1179 L. Milk fat varied from 3.79 to 5.49%, with seasonal and zonal variation; no variation was observed in milk protein. The seasonal daily methane emission varied from 150 g in the long dry season to 174 g in the long rain season (p<0.05). The rain season had the highest methane emission, as it is associated with high forage intake. The mean emission factor (EF) was 59.4 kg of methane per year. The present EFs were higher than the default IPCC Tier I value of 41 kg for livestock in developing countries of Africa, the Middle East, and other tropical regions, owing to the higher live weight in the current study.
The methane emission per unit of milk production was lower in the EP zone (46.8 g/L) due to the feed efficiency observed in that zone. Farmers should use high-quality feeds to increase the milk yield and reduce the methane produced per unit of milk. For an accurate assessment of the methane produced from dairy farms, there is a need to use the Life Cycle Assessment approach, which considers all the sources of emissions.
Keywords: footprint, forage, girinka, tier
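The Tier II emission factor used above follows the IPCC form EF = GE × (Ym/100) × 365 / 55.65, where GE is gross energy intake (MJ/day), Ym is the methane conversion factor (%), and 55.65 MJ/kg is the energy content of methane. The sketch below applies this formula with illustrative placeholder inputs; the GE and Ym values in the test are assumptions for demonstration, not the study's measured values:

```python
def tier2_methane_ef(ge_mj_per_day: float, ym_percent: float) -> float:
    """IPCC Tier II enteric methane emission factor, kg CH4 per head per year.

    EF = GE * (Ym / 100) * 365 / 55.65, with 55.65 MJ per kg of CH4.
    """
    return ge_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

def methane_per_litre(ef_kg_per_year: float, milk_l_per_year: float) -> float:
    """Emission intensity: grams of methane per litre of milk produced."""
    return ef_kg_per_year * 1000.0 / milk_l_per_year
```

With an assumed GE of about 140 MJ/day and Ym of 6.5%, the formula yields an EF of roughly 60 kg/year, the same order as the 59.4 kg reported; dividing the reported EF by the yearly milk production of 1179 L gives the per-litre intensity.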
Procedia PDF Downloads 205
11213 A Scenario-Based Experiment Comparing Managerial and Front-Line Employee Apologies in Terms of Customers' Perceived Justice, Satisfaction, and Commitment
Authors: Ioana Dallinger, Vincent P. Magnini
Abstract:
Because of the many moving parts and the large human component, mistakes and failures sometimes occur during transactions in service environments. Because a certain portion of such failures is unavoidable, many service providers constantly look for guidance regarding the optimal ways in which they should manage failures and recoveries. Through the use of a scenario-based experiment, the findings of this study run counter to the empowerment approach (i.e., that frontline employees should be empowered to resolve failure situations on their own). Specifically, this study finds that customers' perceptions of distributive, procedural, and interactional justice are significantly higher [p-values < .05] when a manager delivers an apology as opposed to the frontline provider. Moreover, customers' satisfaction with the recovery and commitment to the firm are also significantly stronger [p-values < .05] when a manager apologizes. Interestingly, this study also empirically tests the effects of combined apologies from both the manager and the employee and finds that the combined approach yields better results for customers' interactional justice perceptions and for their satisfaction with recovery, but not for their distributive or procedural justice perceptions or their consequent commitment to the firm. This study can serve as a springboard for further research. For example, perceptions and attitudes regarding employee empowerment vary based upon national culture. Furthermore, there are likely a number of factors that can moderate the cause-and-effect relationship between a failure recovery and customers' post-recovery perceptions [e.g., the severity of the failure].
Keywords: apology, empowerment, service failure recovery, service recovery
Procedia PDF Downloads 296
11212 Optimum Design of Hybrid (Metal-Composite) Mechanical Power Transmission System under Uncertainty by Convex Modelling
Authors: Sfiso Radebe
Abstract:
Design models dealing with flawless composite structures, in which the mechanical properties of the structure are assumed to be known a priori, are in abundance. However, if the worst-case scenario is assumed, in which material defects combined with processing anomalies in composite structures are expected, a different solution is attained. Furthermore, if the system being designed combines hybrid elements in series, each individually affected by variations in the material constants, a different approach needs to be taken. The body of literature contains a compendium of research investigating the different modes of failure affecting hybrid metal-composite structures, covering areas pertaining to the failure of hybrid joints, structural deformation, transverse displacement, and the suppression of vibration and noise. In the present study, a system employing a combination of two or more hybrid power-transmitting elements is explored for the least favourable dynamic loads as well as for weight minimization, subject to uncertain material properties. The elastic constants are assumed to be uncertain-but-bounded quantities varying slightly around their nominal values, and the solution is determined using convex models of uncertainty. Convex analysis of the problem leads to the computation of the least favourable solution and ultimately to a robust design. This approach contrasts with a deterministic analysis, in which the average values of the elastic constants are employed in the calculations, neglecting the variations in the material properties.
Keywords: convex modelling, hybrid, metal-composite, robust design
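An interval (uncertain-but-bounded) convex model can be sketched as anti-optimization over the vertices of the box bounding the elastic constants: for a response that is monotone in each constant, the least favourable value is attained at a vertex. The following is a minimal illustration under that assumption; the response function used in the test (a compliance-like sum of inverse moduli) is a stand-in, not the paper's transmission model:

```python
from itertools import product

def worst_case(response, nominal, rel_bound):
    """Anti-optimization over an interval convex model.

    Each uncertain constant varies within +/- rel_bound (fractional) of its
    nominal value; the least favourable (maximum) response of a functional
    monotone in each constant lies at a vertex of the bounding box, so all
    2**n vertices are enumerated."""
    worst_val, worst_pt = None, None
    for signs in product((-1.0, 1.0), repeat=len(nominal)):
        pt = [x * (1 + s * rel_bound) for x, s in zip(nominal, signs)]
        val = response(pt)
        if worst_val is None or val > worst_val:
            worst_val, worst_pt = val, pt
    return worst_val, worst_pt
```

The deterministic analysis the abstract contrasts with would simply evaluate `response(nominal)`; the convex model instead designs against `worst_case(...)`.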
Procedia PDF Downloads 211
11211 An Investigation on Physics Teachers' Views Towards Context Based Learning Approach
Authors: Medine Baran, Abdulkadir Maskan, Mehmet Ikbal Yetişir, Mukadder Baran, Azmi Türkan, Şeyma Yaşar
Abstract:
The purpose of this study was to determine the views of physics teachers from several secondary schools in Turkey towards the context-based learning approach. The context-based learning opinion questionnaire developed by the researchers as the data collection tool was piloted with 250 physics teachers. The questionnaire, examined by the researchers and field experts, was initially made up of 53 items; following the evaluation process, it included 37 items. In this way, the reliability and validity process of the measurement tool was completed. The finalized questionnaire was then applied to 144 physics teachers from several secondary schools in different cities in Turkey (F: 73, M: 71). The participants were selected based on ease of reaching them. The results revealed no remarkable difference between the views of the physics teachers with respect to their gender, region, or school. However, when the items in the questionnaire were considered, it was found that the participants, interestingly, agreed on some of the choices in the items. Relatedly, large differences were found between the frequencies of those who agreed and those who disagreed with 16 items in the questionnaire. Therefore, as the next phase of the present study, further research has been planned using these same questions. Based on these questions, which received opposing responses, physics teachers will be asked for their views about the results of the study using the interview technique, one of the qualitative research techniques. In this way, the results will be evaluated both by the researchers and by the participants, and the problems and difficulties will be determined. As a result, related suggestions can be put forward.
Keywords: context based learning, physics teachers, views
Procedia PDF Downloads 373
11210 Security Design of Root of Trust Based on RISC-V
Authors: Kang Huang, Wanting Zhou, Shiwei Yuan, Lei Li
Abstract:
As information technology develops rapidly, security has become an increasingly critical issue for computer systems. In particular, as cloud computing and the Internet of Things (IoT) continue to gain widespread adoption, computer systems need to address new security threats and attacks. The Root of Trust (RoT) is the foundation for providing basic trusted computing and is used to verify the security and trustworthiness of other components. Designing a reliable Root of Trust and guaranteeing its own security are essential for improving the overall security and credibility of computer systems. In this paper, we discuss the implementation of self-security technology based on a RISC-V Root of Trust at the hardware level. To effectively safeguard the security of the Root of Trust, security safeguard technologies for the Root of Trust have been studied. First, a lightweight and secure boot framework is proposed as a security mechanism. Second, two kinds of memory protection mechanisms are built to defend against memory attacks. Moreover, the hardware implementation of the proposed method has also been investigated. A series of experiments and tests have been carried out to verify the effectiveness of the proposed method. The experimental results demonstrate that the proposed approach is effective in verifying the integrity of the Root of Trust's own boot ROM, user instructions, and data, ensuring authenticity and enabling the secure boot of the Root of Trust's own system. Additionally, our approach provides memory protection against certain types of memory attacks, such as cache leaks and tampering, and ensures the security of root-of-trust-sensitive information, including keys.
Keywords: root of trust, secure boot, memory protection, hardware security
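The integrity half of a secure-boot check can be modelled as measuring the next boot stage and comparing the measurement against a digest anchored in immutable storage. The sketch below is a deliberately simplified software model of that idea, not the paper's RISC-V hardware design, which would additionally verify signatures and run before any mutable code:

```python
import hashlib

def secure_boot_check(image: bytes, trusted_digest_hex: str) -> bool:
    """Measured-boot style integrity check: hash the next-stage image and
    compare against a digest anchored in the Root of Trust's ROM.
    Boot proceeds only if the measurement matches."""
    return hashlib.sha256(image).hexdigest() == trusted_digest_hex
```

In an actual RoT, the trusted digest (or the public key that signs it) lives in boot ROM or fuses, so an attacker who tampers with the image cannot also tamper with the reference value.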
Procedia PDF Downloads 216
11209 An Application of Integrated Multi-Objective Particles Swarm Optimization and Genetic Algorithm Metaheuristic through Fuzzy Logic for Optimization of Vehicle Routing Problems in Sugar Industry
Authors: Mukhtiar Singh, Sumeet Nagar
Abstract:
The vehicle routing problem (VRP) is a combinatorial optimization and nonlinear programming problem aiming to optimize decisions regarding a given set of routes for a fleet of vehicles in order to provide cost-effective and efficient delivery of both services and goods to the intended customers. This paper proposes the application of integrated particle swarm optimization (PSO) and a genetic algorithm (GA) to address the vehicle routing problem in the sugarcane industry in India. The sugar industry is a very prominent agro-based industry in India due to its impact on rural livelihoods, and it is estimated to employ around 5 lakh (500,000) workers directly in sugar mills. Due to the various inadequacies, inefficiencies, and inappropriateness associated with the current vehicle routing model, the industry incurs huge monetary losses, which need to be addressed in the proper context. The proposed algorithm utilizes the crossover operation that originally appears in the genetic algorithm (GA) to improve its flexibility, manipulate solutions more readily, and avoid being trapped in local optima; simultaneously, to improve the convergence speed of the algorithm, level set theory is also added to it. We apply the hybrid approach to an example VRP and compare its results with those generated by the PSO, GA, and parallel PSO algorithms. The experimental comparison results indicate that the performance of the hybrid algorithm is superior to the others, and it will become an effective approach for solving discrete combinatorial problems.
Keywords: fuzzy logic, genetic algorithm, particle swarm optimization, vehicle routing problem
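For the crossover operation borrowed from the GA to be usable inside PSO on route representations, each offspring must remain a valid permutation of the customers. Order crossover (OX) is one common operator with that property; the sketch below is illustrative, as the abstract does not specify which crossover variant the authors use:

```python
import random

def order_crossover(parent_a, parent_b, rng=random):
    """Order crossover (OX) for permutation-encoded routes.

    Copies a random slice from parent_a, then fills the remaining positions
    with the missing customers in the order they appear in parent_b, so the
    child is always a valid permutation."""
    n = len(parent_a)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent_a[i:j + 1]                   # inherited slice
    fill = [g for g in parent_b if g not in child]       # missing genes, B's order
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child
```

In the hybrid scheme, an operator like this replaces (or supplements) the arithmetic velocity update so that particle positions never leave the space of feasible routes.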
Procedia PDF Downloads 394
11208 Catalytic Hydrodesulfurization of Dibenzothiophene Coupled with Ionic Liquids over Low Pd-Incorporated Co-Mo@Al₂O₃ and Ni-Mo@Al₂O₃ Catalysts at Mild Operating Conditions
Authors: Yaseen Muhammad, Zhenxia Zhao, Zhangfa Tong
Abstract:
A key problem with the hydrodesulfurization (HDS) of fuel oils is the severity of its operating conditions. In this study, we propose the catalytic HDS of dibenzothiophene (DBT), integrated with ionic liquids (ILs), at mild temperature and pressure over low-loaded (0.5 wt.%) Pd-promoted Co-Mo@Al₂O₃ and Ni-Mo@Al₂O₃ catalysts. Among the thirteen ILs tested, [BMIM]BF₄, [(CH₃)₄N]Cl, [EMIM]AlCl₄, and [(C₈H₁₇)(C₃H₇)₃P]Br enhanced the catalytic HDS efficiency, with the last of these ranking top of the activity list, as confirmed by DFT studies as well. Experimental results revealed that Pd incorporation greatly enhanced the HDS activity of classical Co- or Ni-based catalysts. At the mild optimized experimental conditions of 1 MPa H₂ pressure, 120 °C, an IL:oil ratio of 1:3, and a 4 h reaction time, the DBT conversion of 21% over Ni-Mo@Al₂O₃ was enhanced to 69% (over Pd-Ni-Mo@Al₂O₃) using [(C₈H₁₇)(C₃H₇)₃P]Br. The fresh and spent catalysts were characterized for textural properties using XPS, SEM, EDX, XRD, and BET surface area techniques. Overall catalytic HDS activity followed the order Pd-Ni-Mo@Al₂O₃ > Pd-Co-Mo@Al₂O₃ > Ni-Mo@Al₂O₃ > Co-Mo@Al₂O₃. [(C₈H₁₇)(C₃H₇)₃P]Br could be recycled four times with minimal decrease in HDS activity. Reaction products were analyzed by GC-MS, which helped in proposing a reaction mechanism for the IL-coupled HDS process. Given its cost-effectiveness, ease of operation with fewer mechanical requirements owing to the mild operating conditions, and high efficiency, the present approach could be deemed an alternative route for the HDS of DBT in industrial-scale applications.
Keywords: DFT simulation, GC-MS and reaction mechanism, ionic liquid coupled HDS of DBT, low Pd loaded catalyst, mild operating conditions
Procedia PDF Downloads 153
11207 A Grey-Box Text Attack Framework Using Explainable AI
Authors: Esther Chiramal, Kelvin Soh Boon Kai
Abstract:
Explainable AI is a strong strategy for understanding complex black-box model predictions in human-interpretable language. It provides the evidence required to deploy trustworthy and reliable AI systems. On the other hand, however, it also opens the door to locating possible vulnerabilities in an AI model. Traditional adversarial text attacks use word substitution, data-augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and impractical, as they can be easily detected by humans, e.g., changing the word "Poor" to "Rich". We propose a simple yet effective grey-box cum black-box approach that does not require knowledge of the target model: a set of surrogate Transformer/BERT models performs the attack using Explainable AI techniques. Since Transformers are the current state-of-the-art models for almost all Natural Language Processing (NLP) tasks, an attack generated from one BERT model is transferable to another. This transferability is made possible by the attention mechanism in the Transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how Transformers learn by attacking a few surrogate Transformer variants, each based on a different architecture. We demonstrate that this approach is highly effective at generating semantically sound sentences, changing as little as one word in a way that is not detectable by humans while still fooling other BERT models.
Keywords: BERT, explainable AI, grey-box text attack, transformer
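No code accompanies the abstract; the attack loop it describes — rank words by their influence on a surrogate ensemble, then substitute the most influential one until the ensemble's decision flips — can be sketched in a model-agnostic way. Leave-one-out importance stands in here for the unspecified Explainable AI technique, the surrogates are plain scoring callables rather than BERT models, and every name is hypothetical.

```python
def word_importance(sentence, predict):
    """Leave-one-out importance: drop each word and measure the score change."""
    words = sentence.split()
    base = predict(" ".join(words))
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((abs(base - predict(reduced)), i))
    return sorted(scores, reverse=True)  # most influential word first


def grey_box_attack(sentence, surrogates, substitutes, threshold=0.5):
    """Try to flip the surrogate ensemble's decision by replacing a single
    influential word with a near-synonym from a substitution table."""
    ensemble = lambda s: sum(m(s) for m in surrogates) / len(surrogates)
    original_label = ensemble(sentence) >= threshold
    for _, idx in word_importance(sentence, ensemble):
        words = sentence.split()
        for sub in substitutes.get(words[idx], []):
            words[idx] = sub
            candidate = " ".join(words)
            if (ensemble(candidate) >= threshold) != original_label:
                return candidate  # a one-word change flips the decision
    return None  # no substitution fooled the ensemble
```

An adversarial sentence found this way is then tested against held-out BERT variants to check transferability, per the abstract's claim.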
Procedia PDF Downloads 137