Search results for: computer application
961 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine, user preferences, and critical diagnostic data for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications. The operating temperature range of these sensors is between -70°C and 850°C, while the temperature range requirement in home appliance applications is between 23°C and 500°C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1-micron thickness is usually applied to the wafer. However, the use of platinum in the coating and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, a 100 nm silicon dioxide layer was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer.
The lift-off process was performed after the e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point probe sheet resistance measurements were done at room temperature. Annealing at 600°C was then performed using a rapid thermal processing machine, and resistivity was measured with a probe station before and after annealing. Temperature dependence between 25 and 300°C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data in the white goods application temperature range. A relatively simplified but optimized production method has also been developed to produce this sensor.
Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
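The link between film geometry and resistance that underlies the design optimization above can be sketched numerically. This is an illustration only: it uses the bulk resistivity of platinum (~1.06e-7 Ω·m), whereas real 280 nm films typically have a somewhat higher effective resistivity, and the function names and the 100-square trace geometry are assumptions, not values from the paper.

```python
def sheet_resistance(resistivity_ohm_m, thickness_m):
    """Sheet resistance Rs = rho / t, in ohms per square."""
    return resistivity_ohm_m / thickness_m

def trace_resistance(resistivity_ohm_m, thickness_m, length_m, width_m):
    """Resistance of a thin-film trace: R = Rs * (L / W),
    i.e. Rs times the number of 'squares' along the trace."""
    return sheet_resistance(resistivity_ohm_m, thickness_m) * (length_m / width_m)

# Bulk platinum resistivity (approximate); thin films run higher.
RHO_PT = 1.06e-7  # ohm·m
rs = sheet_resistance(RHO_PT, 280e-9)                 # ~0.38 ohm/sq for a 280 nm film
r = trace_resistance(RHO_PT, 280e-9, 10e-3, 100e-6)   # a hypothetical 100-square meander
```

Lengthening or narrowing the meander (more squares) raises the resistance, which is why length, width, and trim points appear among the optimized design parameters.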
Procedia PDF Downloads 739
960 A Method and System for Secure Authentication Using One Time QR Code
Authors: Divyans Mahansaria
Abstract:
User authentication is an important security measure for protecting confidential data and systems. However, the vulnerability of the authentication process has significantly increased. Thus, necessary mechanisms must be deployed when authenticating a user to safeguard him/her from such attacks. The proposed solution implements a novel authentication mechanism to counter various forms of security breach attacks, including phishing, Trojan horse, replay, key logging, Asterisk logging, shoulder surfing, brute force search and others. A QR code (Quick Response Code) is a type of matrix barcode or two-dimensional barcode that can be used for storing URLs, text, images and other information. In the proposed solution, during each new authentication request, a QR code is dynamically generated and presented to the user. A piece of generic information is mapped to a plurality of elements and stored within the QR code. The mapping of the generic information to the plurality of elements is randomized at each new login, and thus the QR code generated for each new authentication request is for one-time use only. In order to authenticate into the system, the user needs to decode the QR code using any QR code decoding software. The QR code decoding software needs to be installed on a handheld mobile device such as a smartphone, personal digital assistant (PDA), etc. On decoding the QR code, the user is presented with a mapping between the generic piece of information and the plurality of elements, from which the user derives cipher secret information corresponding to his/her actual password. Now, in place of the actual password, the user uses this cipher secret information to authenticate into the system. The authentication terminal receives the cipher secret information and uses a validation engine that deciphers it. If the entered secret information is correct, the user is provided access to the system.
A usability study has been carried out on the proposed solution, and the new authentication mechanism was found to be easy to learn and adopt. A mathematical analysis of the time taken to carry out a brute force attack on the proposed solution has also been performed; the results showed that the solution is almost completely resistant to brute force attack. Today's standard methods for authentication are subject to a wide variety of software, hardware, and human attacks. The proposed scheme can be very useful in controlling various types of authentication-related attacks, especially in a networked computer environment where the use of username and password for authentication is common.
Keywords: authentication, QR code, cipher / decipher text, one time password, secret information
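The one-time mapping idea can be sketched in a few lines. This is a hypothetical illustration of the scheme described above, not the authors' implementation: the alphabet, function names, and seeding are assumptions, and in practice the mapping would be encoded into the dynamically generated QR code rather than held in memory.

```python
import random
import string

def generate_session_mapping(alphabet=string.ascii_letters + string.digits, seed=None):
    """Create a fresh one-to-one character mapping for a single login attempt
    (hypothetical sketch; the paper stores this mapping inside a QR code)."""
    rng = random.Random(seed)
    symbols = list(alphabet)
    rng.shuffle(symbols)  # a permutation, so the mapping is invertible
    return dict(zip(alphabet, symbols))

def encipher(password, mapping):
    """What the user derives by reading the decoded QR mapping."""
    return "".join(mapping[ch] for ch in password)

def verify(cipher_text, password, mapping):
    """Server side: decipher with the inverse mapping and compare."""
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse[ch] for ch in cipher_text) == password

m = generate_session_mapping(seed=42)   # in practice: fresh randomness per login
c = encipher("Secret123", m)            # typed in place of the real password
# A shoulder-surfer who observes only c learns nothing reusable,
# because the next login uses a different mapping.
```

Because the mapping is discarded after one use, a captured cipher string behaves like a one-time password.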
Procedia PDF Downloads 269
959 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete
Authors: D. Falliano, G. Ricciardi, E. Gugliandolo
Abstract:
Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand and possibly fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized in the following aspects: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous in the scope of refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially in the case of low densities; 3) good resistance against fire as compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness due to the usage of rather simple constituting elements that are easily available locally. Classic foamed concrete cannot be extruded, as dimensional stability is not attained in the green state, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, the viscosity-enhancing agents (VEAs) used to extrude traditional concrete cause the air bubbles in foamed concrete to collapse, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation to patent.
In addition to the general benefits of using the extrusion process, extrudable foamed concrete overcomes further limits: elimination of formworks and an expanded application spectrum, due to the possibility of extrusion in a density range varying between 200 and 2000 kg/m³, which allows the prefabrication of both structural and non-structural constructive elements. This contribution also presents the significant differences between extrudable and classic foamed concrete fresh properties in terms of slump. Plastic air content, plastic density, hardened density and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between extrudable and classic foamed concrete compressive strengths.
Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump
Procedia PDF Downloads 176
958 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the result shows that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. The result indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model.
This study tries to deal with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses; the effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
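The core of GWR, fitting a separate kernel-weighted regression at each location so that a coefficient (here, the hedonic price of flood risk) can vary spatially, can be sketched as follows. This is a minimal single-predictor illustration with a Gaussian kernel; the data layout, bandwidth, and function names are assumptions, not the study's specification.

```python
import math

def gwr_local_slope(points, target, bandwidth):
    """Geographically weighted simple regression at one target location.
    points: list of (x_coord, y_coord, predictor, price) tuples (hypothetical layout).
    Returns the locally varying coefficient of `predictor` at `target`."""
    weights, xs, ys = [], [], []
    for (px, py, x, y) in points:
        d = math.hypot(px - target[0], py - target[1])
        weights.append(math.exp(-0.5 * (d / bandwidth) ** 2))  # Gaussian kernel
        xs.append(x)
        ys.append(y)
    sw = sum(weights)
    xbar = sum(w * x for w, x in zip(weights, xs)) / sw
    ybar = sum(w * y for w, y in zip(weights, ys)) / sw
    num = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(weights, xs, ys))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(weights, xs))
    return num / den
```

Evaluating this at different target locations yields a map of locally varying coefficients, which is exactly what distinguishes GWR from a single global OLS fit.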
Procedia PDF Downloads 290
957 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft in an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push back an aircraft and turn its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the take-off sequence of the aircraft is optimized and has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft.
In this way the quickest path is found for each aircraft while taking into account the movements of previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
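The time-based A-star idea described above can be sketched generically. This is a textbook A* over a graph whose edge weights are travel times, with an optimistic remaining-time estimate as the heuristic; it omits time windows, pushback, and the other airport-specific constraints, and all names are illustrative.

```python
import heapq

def a_star_time(graph, est_time, start, goal):
    """A* over a taxiway-like graph where edge weights are travel times and
    est_time(node) optimistically estimates the remaining time to `goal`
    (a sketch of the time-based heuristic; not the full QPPTW algorithm).
    graph: {node: [(neighbour, travel_time), ...]}
    Returns (total_time, path) or None if the goal is unreachable."""
    open_set = [(est_time(start), 0.0, start, [start])]
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        if best_g.get(node, float("inf")) <= g:
            continue  # already expanded via a quicker route
        best_g[node] = g
        for nxt, t in graph[node]:
            heapq.heappush(open_set, (g + t + est_time(nxt), g + t, nxt, path + [nxt]))
    return None
```

As long as est_time never overestimates the true remaining time (i.e. it is admissible), the first time the goal is popped from the queue the quickest path has been found.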
Procedia PDF Downloads 231
956 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomical and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in its biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the different cultivars, also as a function of the harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, by using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed building models, based on partial least squares regression, that were able to predict the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation could have a strong influence on EVOO composition in terms of both major and minor compounds. This suggests that the Raman spectrum could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed.
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope content ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes possible the assessment of specific samples' content and allows for classifying oils according to their geographical and varietal origin.
Keywords: authentication, chemometrics, olive oil, raman spectroscopy
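The classification step, assigning an oil to a region from its spectral features, can be illustrated with a toy nearest-centroid classifier. This is a deliberately simplified stand-in for the principal component and linear discriminant analysis actually used; the region labels and feature vectors below are invented.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, classes):
    """Assign `sample` to the class whose centroid is nearest (squared
    Euclidean distance). `classes` maps label -> list of feature vectors,
    e.g. selected Raman band intensities per training oil."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(vs) for label, vs in classes.items()}
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))
```

Real chemometric pipelines first project the full spectra onto a few principal components before discriminating, but the geometric intuition, oils cluster by origin in feature space, is the same.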
Procedia PDF Downloads 332
955 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems. The first is that analytic solutions may not be possible. The second is that, in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. Its premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite these strengths, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. Multi-fidelity learning can be used to alleviate this overuse of information: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively in order to find a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, thus demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of the time-step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
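The defining behaviour of a "knows what it knows" learner, answering only when it is certain and otherwise declaring "I don't know", can be sketched for deterministic grid-world transitions. This is a minimal illustrative model, not the framework under development; the class and method names are assumptions.

```python
class KWIKTransitionModel:
    """Minimal KWIK-style model of deterministic transitions: it predicts a
    state-action pair only after observing its outcome; otherwise it returns
    UNKNOWN, signalling that a (possibly expensive) simulation is required."""
    UNKNOWN = None

    def __init__(self):
        self._table = {}

    def predict(self, state, action):
        """Return the known next state, or UNKNOWN if never observed."""
        return self._table.get((state, action), self.UNKNOWN)

    def observe(self, state, action, next_state):
        # Deterministic world: a single observation is enough to "know".
        self._table[(state, action)] = next_state
```

In a multi-fidelity setting, UNKNOWN answers at low fidelity are what trigger escalation to a higher-fidelity simulator, keeping the number of expensive high-fidelity samples small.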
Procedia PDF Downloads 158
954 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Due to its continuous accumulation throughout the year in large quantities, it creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% of which is contributed by polysaccharides) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as the raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, the inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups and crystalline cellulose, contribute to recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis of lignocellulose. A pre-treatment method is generally applied before enzymatic treatment of lignocellulose to remove the recalcitrant components of the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments; a sequential, combinatorial pre-treatment strategy combining two or more pre-treatments was then also applied to attain maximum sugar yield. All the pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. Besides, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was also monitored.
Results showed that ultrasound treatment (31.06 mg/L) proved to be the best pre-treatment method in terms of total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide present in the pre-treated SPW. Finally, the results obtained from the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and a lower content of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatment methods rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.
Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound
Procedia PDF Downloads 366
953 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics
Authors: Maria Arechavaleta, Mark Halpin
Abstract:
In the United States, the costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases the total cost and provides a benefit that is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (probably a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.
Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems
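The two indices can be computed directly from the overlaid curves. The sketch below walks per-minute production and consumption series through an idealized battery and counts shortfall minutes and unserved energy; the lossless battery model, the start-full assumption, and the function name are simplifying assumptions, not the paper's exact formulation.

```python
def reliability_indices(production, consumption, battery_kwh):
    """Compute loss-of-energy probability (fraction of intervals with a
    shortfall) and unserved energy (total kWh curtailed) from per-minute
    production and consumption series (kWh per minute), assuming an ideal
    lossless battery that starts full."""
    soc = battery_kwh          # state of charge
    unserved = 0.0
    short_intervals = 0
    for p, c in zip(production, consumption):
        soc = min(battery_kwh, soc + p - c)  # charge, capped at capacity
        if soc < 0:                          # demand exceeded stored energy
            unserved += -soc                 # energy that had to be curtailed
            short_intervals += 1
            soc = 0.0
    return short_intervals / len(consumption), unserved
```

Re-running this with incrementally larger battery_kwh (or a scaled production series) gives the benefit of each added increment of storage or generation, which can then be weighed against its cost.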
Procedia PDF Downloads 235
952 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts, which are used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts become contaminated with toxic material, and they are dumped as waste, which leads to environmental issues. In this scenario, the recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor, containing Mo (870 ppm), Co (341 ppm), Al (508 ppm) and Fe (42 ppm), was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1.
Around 95.4% extraction of molybdenum was achieved in two-stage counter-current operation at A/O = 1:1, with negligible extraction of Co and Al. However, iron was co-extracted; it was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% of the molybdenum was recovered from the Mo-Co spent catalyst with 99% purity. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO3 correspond to the molybdite (syn-MoO3) structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3. EDX analysis of MoO3 shows a 1:3 atomic percentage of molybdenum to oxygen. The synthesized MoO3 can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants and as a catalyst.
Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery
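The gain from adding counter-current stages can be illustrated with a Kremser-type estimate. This is a generic ideal-stage calculation assuming a constant distribution ratio D, not the authors' McCabe-Thiele construction; the 85% single-stage figure above implies D ≈ 5.67 at a 1:1 phase ratio.

```python
def fraction_extracted(D, phase_ratio, stages):
    """Overall fraction of metal extracted after `stages` ideal
    counter-current stages (Kremser-type estimate, constant D assumed).
    D: distribution ratio; phase_ratio: organic/aqueous volume ratio."""
    E = D * phase_ratio  # extraction factor
    if abs(E - 1.0) < 1e-12:
        raffinate = 1.0 / (stages + 1)       # limiting case E = 1
    else:
        raffinate = (E - 1.0) / (E ** (stages + 1) - 1.0)
    return 1.0 - raffinate

# With D ≈ 5.67 (one stage extracts 85% at O/A = 1:1), a second
# counter-current stage pushes recovery above 95%, consistent with the
# two stages read off the McCabe-Thiele isotherm.
```

The diminishing returns of each further stage are what make the McCabe-Thiele two-stage result an economical stopping point.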
Procedia PDF Downloads 269
951 Inertial Particle Focusing Dynamics in Trapezoid Straight Microchannels: Application to Continuous Particle Filtration
Authors: Reza Moloudi, Steve Oh, Charles Chun Yang, Majid Ebrahimi Warkiani, May Win Naing
Abstract:
Inertial microfluidics has emerged recently as a promising tool for high-throughput manipulation of particles and cells for a wide range of flow cytometric tasks, including cell separation/filtration, cell counting, and mechanical phenotyping. Inertial focusing is profoundly reliant on the cross-sectional shape of the channel, which impacts not only the shear field but also the wall-effect lift force near the wall region. Despite comprehensive experiments and numerical analyses of the lift forces for rectangular and non-rectangular microchannels (half-circular and triangular cross-sections), all of which possess planes of symmetry, less effort has been devoted to the 'flow field structure' of trapezoidal straight microchannels and its effects on inertial focusing; a rectilinear channel with a trapezoidal cross-section breaks all planes of symmetry. In this study, particle focusing dynamics inside trapezoid straight microchannels were first studied systematically for a broad range of channel Reynolds numbers (20 < Re < 800). The altered axial velocity profile, and consequently the new shear force arrangement, led to a cross-lateral movement of the equilibrium positions toward the longer side wall when the rectangular straight channel was changed to a trapezoid; however, as the channel Reynolds number further increased (Re > 50), the main lateral focusing position started to move back toward the middle and the shorter side wall, depending on the particle clogging ratio (K = a/Hmin, where a is the particle size), the channel aspect ratio (AR = W/Hmin, where W is the channel width and Hmin is the smaller channel height), and the slope of the slanted wall. Increasing the channel aspect ratio (AR) from 2 to 4 and the slope of the slanted wall up to Tan(α) ≈ 0.4 (Tan(α) = (Hlonger-sidewall - Hshorter-sidewall)/W) shifted the off-center lateral focusing position from the middle of the channel cross-section by up to ~20 percent of the channel width.
It was found that the focusing point was destabilized near the slanted wall due to the asymmetry; particles mainly focused near the bottom wall or fluctuated between the channel center and the bottom wall, depending on the slanted wall and Re (Re < 100, channel aspect ratio 4:1). Finally, as a proof of principle, a trapezoidal straight microchannel with a bifurcation was designed and used for continuous filtration of a broader range of particle clogging ratios (0.3 < K < 1), exiting through the longer-wall outlet with ~99% efficiency (Re < 100), in comparison to rectangular straight microchannels (W > H, 0.3 ≤ K < 0.5).
Keywords: cell/particle sorting, filtration, inertial microfluidics, straight microchannel, trapezoid
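The focusing behaviour above is governed by three dimensionless groups defined in the abstract (K, AR, and the slanted-wall slope). A minimal sketch of those ratios follows; the channel geometry and particle size used are illustrative values, not taken from the study.

```python
# Dimensionless parameters for a trapezoidal straight microchannel, as defined
# in the abstract: K = a / H_min, AR = W / H_min,
# tan(alpha) = (H_longer_sidewall - H_shorter_sidewall) / W.

def clogging_ratio(a_um: float, h_min_um: float) -> float:
    """Particle clogging ratio K = a / H_min (a is the particle size)."""
    return a_um / h_min_um

def aspect_ratio(w_um: float, h_min_um: float) -> float:
    """Channel aspect ratio AR = W / H_min."""
    return w_um / h_min_um

def slant_slope(h_long_um: float, h_short_um: float, w_um: float) -> float:
    """Slope of the slanted wall, tan(alpha)."""
    return (h_long_um - h_short_um) / w_um

# Hypothetical geometry: 10 um particle, H_min = 40 um, H_long = 80 um, W = 160 um
K = clogging_ratio(10, 40)        # 0.25
AR = aspect_ratio(160, 40)        # 4.0, the aspect ratio studied at Re < 100
slope = slant_slope(80, 40, 160)  # 0.25, below the Tan(alpha) ~ 0.4 maximum
```

These ratios are what the abstract varies (AR from 2 to 4, slope up to ~0.4) to move the lateral focusing position.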
Procedia PDF Downloads 228

950 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives
Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes
Abstract:
The present article describes a regulatory impact analysis (RIA) of the contracting efficiency of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. An inefficient contracting of this energy amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to provide this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for more efficient and rational transmission system contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, applies when the contracted demand exceeds the established regulatory limit; it is applied to consumer units, generators, and distribution companies. The second, IIOC, applies when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying this normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the resolution in 2015.
For this, the indicators used were the efficient contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI). The ECI analysis showed a decrease in contracting efficiency, a behaviour that had been occurring even before the 2015 resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of the transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, signalling to agents that their contracted values are not adequate to sustain service provision for their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.
Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system
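The logic of the two installments described in the abstract can be sketched as a pair of penalty rules; the tariff values, tolerance band, and limit rule below are illustrative placeholders only, not ANEEL's actual formulas from NR 666/2015.

```python
# Hedged sketch of the two economic incentives summarized in the abstract:
# IIE applies when contracted demand exceeds a regulatory limit; IIOC applies
# when a distributor contracts more demand than it actually needs.
# All numbers and the tolerance rule are invented for illustration.

def excess_installment(contracted_mw: float, regulatory_limit_mw: float,
                       tariff: float) -> float:
    """IIE: charge on the portion of contracted demand above the limit."""
    excess = max(0.0, contracted_mw - regulatory_limit_mw)
    return excess * tariff

def overcontract_installment(contracted_mw: float, verified_demand_mw: float,
                             tolerance: float, tariff: float) -> float:
    """IIOC: charge on contracted demand above verified demand plus a tolerance band."""
    allowed = verified_demand_mw * (1.0 + tolerance)
    over = max(0.0, contracted_mw - allowed)
    return over * tariff

# A user contracting 110 MW against a 100 MW limit pays on the 10 MW excess;
# a distributor contracting 120 MW against 100 MW verified (5% tolerance)
# pays on the 15 MW of over-contracting.
iie = excess_installment(110.0, 100.0, 2.0)              # 20.0
iioc = overcontract_installment(120.0, 100.0, 0.05, 2.0) # 30.0
```

The point of both penalties, as the abstract notes, is to make contracted values track real demand in both directions.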
Procedia PDF Downloads 121

949 The Impact of Universal Design for Learning Implementation on Teaching Practices for Students with Intellectual Disabilities in the Kingdom of Saudi Arabia
Authors: Adnan Alhazmi
Abstract:
Background: UDL can be understood as a framework that holds the potential to elaborate alternatives and platforms for students with intellectual disabilities within general education settings, and it aims at offering flexible pathways that can support all students in mastering the learning goals. This approach addresses learner variability by delineating the diverse ways in which individuals understand, conceive, express, and deal with information. Goal: The aim of the proposed research is to examine the impact of the implementation of UDL on teaching practices for students with intellectual disabilities in Saudi Arabian schools. Method: This research used a combination of quantitative and qualitative designs. Under the analytical descriptive method, survey questionnaires were used to gather the data. A qualitative interpretive approach, using semi-structured interviews, was applied to gain a detailed understanding of the research aim. Thus, the primary data were gathered through a survey and interviews to examine the impact of universal design for learning implementation on teaching practices for intellectually disabled students in Saudi Arabian schools. The survey examined the prevailing teaching practices for students with intellectual disabilities in Saudi Arabia and evaluated whether teaching experience influences current practices. The surveys were distributed to 50 teachers who teach students with intellectual disabilities. The interviews, conducted with 10 teachers teaching the same subject, explored barriers to implementing UDL in Saudi Arabia and provided suggested guidelines for its implementation.
Findings: A key finding of this study is that the UDL framework serves as a crucial guide for teachers in inclusive settings to undertake meaningful planning for individuals with intellectual disabilities so that they are able to access, participate in, and grow within the general education curriculum. Other findings highlighted the need to prepare educators and all faculty members to understand the purpose of and need for inclusion and the UDL framework, so that better information about academic and social expectations for individuals with intellectual disabilities can be delivered. Conclusion: On the basis of this preliminary study, it can be suggested that UDL is an effective support for the meaningful inclusion of students with intellectual disability (ID) in general educational settings. It holds potential as an instructional design framework for designing curricula for students with intellectual disabilities.
Keywords: intellectual disability, inclusion, universal design for learning, teaching practice
Procedia PDF Downloads 139

948 Photoswitchable and Polar-Dependent Fluorescence of Diarylethenes
Authors: Sofia Lazareva, Artem Smolentsev
Abstract:
Fluorescent photochromic materials attract strong interest due to their possible applications in organic photonics, such as optical logic systems, optical memory, and visualizing sensors, as well as in the characterization of polymers and biological systems. In photochromic fluorescence-switching systems, the emission of the fluorophore is modulated between 'on' and 'off' via the photoisomerization of photochromic moieties, resulting in effective Förster resonance energy transfer (FRET). In the current work, we have studied both the photochromic and fluorescent properties of several diarylethenes. It was found that the coloured forms of these compounds are not fluorescent because of efficient intramolecular energy transfer. Spectral and photochromic parameters of the investigated substances were measured in five solvents of different polarity. The quantum yields of the photochromic transformation A↔B (ΦA→B and ΦB→A) as well as the extinction coefficients of the B isomer were determined by a kinetic method. It was found that the photocyclization quantum yield of all compounds decreases with increasing solvent polarity. In addition, the solvent polarity was revealed to affect the fluorescence significantly. Increasing the solvent dielectric constant was found to result in a strong shift of the emission band position, from 450 nm (n-hexane) to 550 nm (DMSO and ethanol), for all three compounds. Moreover, the emission, intense in polar solvents, becomes weak and hardly detectable in n-hexane. The only exception to this trend is an abnormally low fluorescence quantum yield in ethanol, presumably caused by loss of the electron-donating properties of the nitrogen atom due to protonation. The protonation effect was also confirmed by adding concentrated HCl to the solution, which resulted in complete disappearance of the fluorescence band. Excited-state dynamics were investigated by ultrafast optical spectroscopy methods.
Kinetic curves of excited-state absorption and fluorescence decays were measured, and the lifetimes of the transient states were calculated from the measured data. The mechanism of the ring-opening reaction was found to be polarity-dependent. Comparative analysis of the kinetics measured in acetonitrile and hexane reveals differences in relaxation dynamics after the laser pulse. Most importantly, two decay processes are present in acetonitrile, whereas only one is present in hexane. This supports the assumption, made on the basis of preliminary steady-state experiments, that a twisted intramolecular charge-transfer (TICT) state is stabilized in polar solvents. The results thus support the hypothesis of a two-channel mechanism of energy relaxation in the compounds studied.
Keywords: diarylethenes, fluorescence switching, FRET, photochromism, TICT state
Procedia PDF Downloads 680

947 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external chest compression, which may damage the sternotomy. Careful use of adrenaline is emphasized due to the potential recurrence of hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Of the 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR for the restoration of cardiac activity was conducted according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of patients' conditions after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and withdrawal of sedation. Results. In all patients, chest compressions of the necessary depth (4-5 cm) at a frequency of 100-120 compressions per minute were initiated immediately upon detection of cardiac arrest.
Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses of 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). When necessary, infusion of inotropes and vasopressors was used, and for the prevention of brain edema and the restoration of adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective CPR. All these measures contributed to improved CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions involves interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.
Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 55

946 Development of Tutorial Courseware on Selected Topics in Mathematics, Science and the English Language
Authors: Alice D. Dioquino, Olivia N. Buzon, Emilio F. Aguinaldo, Ruel Avila, Erwin R. Callo, Cristy Ocampo, Malvin R. Tabajen, Marla C. Papango, Marilou M. Ubina, Josephine Tondo, Cromwell L. Valeriano
Abstract:
The main purpose of this study was to develop, evaluate, and validate courseware on selected topics in Mathematics, Science, and the English Language. Specifically, it aimed to: 1. identify the appropriate Instructional Systems Design (ISD) model for the development of the courseware material; 2. assess the courseware material according to its: a. content characteristics; b. instructional characteristics; and c. technical characteristics; and 3. find out if there is a significant difference in the performance of students before and after using the tutorial CAI. This research employed a developmental as well as a one-group pretest-posttest design. The study had two phases. Phase I included the needs analysis and the writing of lessons and storyboards by the respective experts in each field. Phase II included the digitization, or actual development, of the courseware by the faculty of the ICT department. This phase adopted an instructional systems design (ISD) model, the ADDIE model; ADDIE stands for Analysis, Design, Development, Implementation, and Evaluation. Formative evaluation was conducted simultaneously with the different phases to detect and remedy any bugs in the courseware in the areas of content, instructional, and technical characteristics. The expected outputs were the digitized lessons in Algebra, Biology, Chemistry, Physics, and Communication Arts in English. Students and some IT experts validated the CAI material using the evaluation form by Wong & Wong. They rated the CAI materials as Highly Acceptable, with an overall mean rating of 4.527 and a standard deviation of 0, which means they were unanimous in the ratings they gave the CAI materials. A mean gain was recorded, and a t-test for dependent samples showed significant differences in the mean achievement of the students before and after the treatment (using CAI). The ISD model used in the development of the tutorial courseware was the ADDIE model.
The quantitative analyses of the data, based on the ratings given by the respondents, show that the tutorial courseware possesses the characteristics and qualities of a very good computer-based courseware. The ratings given by the different evaluators with regard to the content, instructional, and technical aspects of the tutorial courseware consistently approached the excellent level. Students performed better in mathematics, biology, chemistry, physics, and English Communication Arts after they were exposed to the tutorial courseware.
Keywords: CAI, tutorial courseware, Instructional Systems Design (ISD) Model, education
Procedia PDF Downloads 347

945 Problems and Solutions in the Application of ICP-MS for Analysis of Trace Elements in Various Samples
Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Áron Soós, Xénia Vágó, Dávid Andrási
Abstract:
In agriculture, for the analysis of elements in food, food raw materials, and environmental samples, flame atomic absorption spectrometers (FAAS), graphite furnace atomic absorption spectrometers (GF-AAS), inductively coupled plasma optical emission spectrometers (ICP-OES), and inductively coupled plasma mass spectrometers (ICP-MS) are routinely applied. An inductively coupled plasma mass spectrometer (ICP-MS) is capable of analysing 70-80 elements in multielemental mode from a sample volume of 1-5 cm³; moreover, the detection limits for the elements are in the µg/kg-ng/kg (ppb-ppt) concentration range. All of these analytical instruments suffer from different physical and chemical interfering effects when analysing the above types of samples: the smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays it is increasingly important to analyse ever smaller concentrations of elements, and of the above instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations. The applied ICP-MS instrument is also equipped with Collision Cell Technology (CCT). In CCT mode, certain elements have better (lower) detection limits, by 1-3 orders of magnitude, compared to a normal ICP-MS analytical method; the improvement applies mainly to the analysis of selenium, arsenic, germanium, vanadium, and chromium. To elaborate an analytical method for trace elements with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) physical interferences; 2) spectral interferences (elemental and molecular isobaric); 3) the effect of easily ionisable elements; and 4) memory interferences.
When analysing food, food raw materials, and environmental samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices with different evaporation and nebulization effectiveness and different carbon contents. In our research work, the effects of different water-soluble compounds and of various carbon contents (as sample matrix) on the changes in intensity of the applied elements were examined. In this way we could find "opportunities" to decrease or eliminate the error in the analyses of the applied elements (Cr, Co, Ni, Cu, Zn, Ge, As, Se, Mo, Cd, Sn, Sb, Te, Hg, Pb, Bi). To analyse these elements in the above samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying a collision cell technique (CCT). The extent of the interfering effect of the carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of the above elements, which can be corrected using different internal standards.
Keywords: elements, environmental and food samples, ICP-MS, interference effects
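The internal-standard correction mentioned at the end of the abstract is, in its simplest form, a ratio correction: the analyte signal is rescaled by the recovery of an internal standard that experiences the same matrix suppression. The sketch below illustrates that principle with invented intensities, not the study's measurements.

```python
# Simplest internal-standard correction: if the internal standard (IS) signal
# is suppressed by the matrix (e.g. high carbon content), scale the analyte
# signal back up by the same factor. All numbers are illustrative.

def internal_standard_correction(analyte_intensity: float,
                                 is_intensity_sample: float,
                                 is_intensity_calibration: float) -> float:
    """Divide the analyte signal by the internal standard's recovery."""
    recovery = is_intensity_sample / is_intensity_calibration
    return analyte_intensity / recovery

# Example: the internal standard reads 20% low in a carbon-rich matrix
# (40000 counts vs. 50000 in the calibration solution), so the analyte
# signal of 8000 counts is corrected to 10000 counts.
corrected = internal_standard_correction(8000.0, 40000.0, 50000.0)
```

In practice the internal standard is chosen to be close in mass and ionization behaviour to the analyte so that its recovery tracks the analyte's suppression as closely as possible.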
Procedia PDF Downloads 504

944 Ballistic Performance of Magnesia Panels and Modular Wall Systems
Authors: Khin Thandar Soe, Mark Stephen Pulham
Abstract:
Ballistic building materials play a crucial role in ensuring the safety of the occupants of protective structures. Traditional options such as Ordinary Portland Cement (OPC)-based walls, including reinforced concrete walls, precast concrete walls, masonry walls, and concrete blocks, are frequently employed for ballistic protection, but they have several drawbacks: they are thick, heavy, costly, and challenging to construct. Glass and composite materials offer lightweight and easier-to-construct alternatives, but they come with a high price tag. No test data on magnesium-based ballistic wall panels or modular wall systems have been reported so far. This paper presents groundbreaking small-arms test data related to the development of the world's first magnesia cement ballistic wall panels and modular wall system. Non-hydraulic magnesia cement exhibits several superior properties, such as lighter weight, flexibility, and acoustic and fire performance, compared to traditional Portland cement. However, magnesia cement is hydrophilic and may degrade in prolonged contact with water. In this research, a modified magnesia cement from UBIQ Technology, formulated for water resistance and durability, is applied. The specimens are made of the modified magnesia cement formula and were prepared in the laboratory of UBIQ Technology Pty Ltd. The specimens vary in thickness, and the tests cover various small-arms threats in compliance with standards AS/NZS 2343 and UL 752, performed up to the maximum threat levels of Classification R2 (NATO) and UL Level 8 (NATO) by the accredited test centre BMT (Ballistic and Mechanical Testing, VIC, Australia). In addition, the results of tests in which the specimens were subjected to the impact of a 12 mm diameter steel-ball projectile launched by a gas gun are also presented and discussed in this paper. The gas-gun tests were performed at UNSW@ADFA, Canberra, Australia.
The test results for the magnesia panels and wall systems are compared with those of concrete and other wall panels documented in the literature. The conclusion drawn is that magnesia panels and wall systems exhibit several advantages over traditional OPC-based wall systems: they are lighter, thinner, and easier to construct, while providing equivalent protection against threats. This makes magnesia cement-based materials a compelling choice for applications where efficiency and performance are critical to creating a protective environment.
Keywords: ballistics, small arms, gas gun, projectile, impact, wall panels, modular, magnesia cement
Procedia PDF Downloads 77

943 The Enlightenment Project in the Arab World: Saudi Arabia as a Case Study in Modern Islamic Thought
Authors: Khawla Almulla
Abstract:
Many Arab intellectuals have called for enlightenment and its application in their communities, including Saudi Arabia. For every Islamic state, the Kingdom of Saudi Arabia represents a strategic cornerstone, since it is considered the cradle of Islam. It is the Land of the Two Holy Mosques: the Holy Mosque in Makkah surrounding the Kaaba, towards which all Muslims around the world turn while performing daily prayers and to which they travel, if possible, in order to perform the Hajj (pilgrimage); and the Prophet's Holy Mosque in Al-Madinah Al-Munawarah, which contains the tomb of the Prophet Muhammad (pbuh). Therefore, Saudi Arabia occupies an eminent position among Arab and Islamic countries on a religious level. Saudi Arabia has also become the most influential country in the Arab world, since it holds one-third of the oil resources outside Central Asia, China, and Russia and is the world's largest producer and exporter of oil. The discovery of oil converted Saudi Arabia from a country important to Muslims only into a country important to the major industrial countries and the developing countries as well. For various reasons, a diversity of intellectual currents can play a significant role in a community through cultural improvement, the development of civilization, and the education of people until they become accustomed to accepting or rejecting opinions or ideas that differ from or oppose their own. Intellectual pluralism and cultural diversity can also promote dialogue and understanding between different groups or schools of thought and develop cognitive skills through the exchange of ideas and views between different schools and intellectual currents. However, in Saudi Arabia there is much opposition to this plurality.
The situation today shows that a variety of ideologies and cultural differences is not considered a reasonable way for an individual or a country to develop intellectually. Rather, the opposite view prevails: the differing ideologies of various groups are seen as enough to provoke intellectual conflict and, in turn, the segregation of society. As a consequence, extremism of thought among the different currents in Saudi Arabia has become apparent. This research is of great importance in its exploration of two significant themes. First, it highlights the Saudi Arabian background, in particular the historical, religious, and social contexts, in order to understand the background of each religious or liberal movement and to find the core of the intellectual differences between them. Second, the research aims to show the importance of moderation in Islamic thought in Saudi Arabia by tracing the thoughts and views of Dr Salman Al-Odah, who is considered one of the most important moderate thinkers in Saudi Arabia.
Keywords: Saudi Arabia, intellectual movements, religious movements, extremism, moderation, Salafism, liberalism, Salman Al-Odah
Procedia PDF Downloads 291

942 Co-Creation of Content with the Students in Entrepreneurship Education to Capture Entrepreneurship Phenomenon in an Innovative Way
Authors: Prema Basargekar
Abstract:
Facilitating the subject ‘Entrepreneurship Education’ in higher education, such as management studies, can be exhilarating as well as challenging. It is a multi-disciplinary and ever-evolving subject. Capturing entrepreneurship as a phenomenon in a holistic manner is a daunting task, as it requires covering various dimensions such as new idea generation, entrepreneurial traits, scanning for business opportunities, the role of policymakers, and value creation, to name a few. Implicit entrepreneurship theory and effectuation are two theories that focus on engaging the participants to create content by using their own experiences, perceptions, and belief systems, which helps in understanding the phenomenon holistically. The assumption here is that all of us are part of the entrepreneurial ecosystem, and effective learning can come through active engagement and peer learning by all the participants together. The present study is an attempt to use these theories in a class assignment given to students at the beginning of the course to build the course content and understand entrepreneurship as a phenomenon in a better way through peer learning. The assignment was given to three batches of MBA postgraduate students doing the program in one of the private business schools in India. The subject ‘Entrepreneurship Management’ is facilitated in the third trimester of the first year. At the beginning of the course, the students were asked to submit a brief write-up, collage, picture, poem, or any other format on “What does entrepreneurship mean to you?” They were asked to give their candid opinions about entrepreneurship as a phenomenon as they perceive it. In total, 156 postgraduate MBA students submitted the assignment. These assignments were then used to answer two research questions: 1) Are students able to use divergent and innovative formats, such as poetry, illustrations, and videos, to express their opinions?
2) What dimensions of entrepreneurship emerge that help in understanding the phenomenon better? The study uses Braun and Clarke's framework of reflexive thematic analysis for the qualitative analysis. The study finds that students responded to this assignment enthusiastically and expressed their thoughts in multiple ways, such as poetry, illustration, personal narrative, and video. The content analysis revealed seven dimensions through which to view entrepreneurship as a phenomenon: 1) entrepreneurial traits, 2) entrepreneurship as a journey, 3) value creation by entrepreneurs in terms of economic and social value, 4) entrepreneurial role models, 5) new business ideas and innovations, 6) personal entrepreneurial experiences and aspirations, and 7) the entrepreneurial ecosystem. The study concludes that an implicit approach to facilitating entrepreneurship education helps in understanding it as a live phenomenon. It also encourages students to apply divergent and convergent thinking, and it can trigger new business ideas or stimulate the entrepreneurial aspirations of the students. The significance of the study lies in the application of implicit theories in the classroom to make higher education more engaging and effective.
Keywords: co-creation of content, divergent thinking, entrepreneurship education, implicit theory
Procedia PDF Downloads 75

941 I, Me and the Bot: Forming a Theory of Symbolic Interactivity with a Chatbot
Authors: Felix Liedel
Abstract:
The rise of artificial intelligence has numerous and far-reaching consequences. In addition to the obvious consequences for entire professions, the increasing interaction with chatbots also has a wide range of social consequences and implications. We are already increasingly used to interacting with digital chatbots, be it in virtual consulting situations, creative development processes, or even in building personal or intimate virtual relationships. A media-theoretical classification of these phenomena has so far been difficult, partly because the interactive element in the exchange with artificial intelligence has undeniable similarities to human-to-human communication but is not identical to it. The proposed study therefore aims to reformulate the concept of symbolic interaction, in the tradition of George Herbert Mead, as symbolic interactivity in communication with chatbots. In particular, Mead's socio-psychological considerations will be brought into dialogue with the specific conditions of digital media, the special dispositive situation of chatbots, and the characteristics of artificial intelligence. One example that illustrates this particular communication situation with chatbots is so-called consensus fiction: in face-to-face communication, we use symbols on the assumption that they will be interpreted in the same or a similar way by the other person. When briefing a chatbot, it quickly becomes clear that this is by no means the case: only the bot's response shows whether the initial request corresponds to the sender's actual intention. This makes it clear that chatbots do not just respond to requests. Rather, they function both as projection surfaces for their communication partners and as distillations of generalized social attitudes. The personalities of the chatbot avatars result, on the one hand, from the way we behave towards them and, on the other, from the content they have learned in advance.
Similarly, we interpret the response behavior of the chatbots and make it the subject of our own actions with them. In conversation with a virtual chatbot, we enter into a dialogue with ourselves but also with the content that the chatbot has previously learned. In our exchanges with chatbots, we therefore interpret socially influenced signs and behave towards them in an individual way, according to the conditions that the medium deems acceptable. This leads to the emergence of situationally determined digital identities that are in exchange with the real self but are not identical to it: in conversation with digital chatbots, we bring our own impulses, which the chatbot brings into permanent negotiation with a generalized social attitude. This also leads to numerous media-ethical follow-up questions. The proposed approach is a continuation of my dissertation on moral decision-making in so-called interactive films, in which I attempted to develop a concept of symbolic interactivity based on Mead. Current developments in artificial intelligence are now opening up new areas of application.
Keywords: artificial intelligence, chatbot, media theory, symbolic interactivity
Procedia PDF Downloads 56
940 The Opinions of Counselor Candidates Regarding Universal Values in Marriage Relationship
Authors: Seval Kizildag, Ozge Can Aran
Abstract:
Counselors' effective intervention in conflicts between spouses can help increase the quality of the marital relationship. To achieve this, counselors must first consider their own value systems and then reflect them appropriately in the counseling process. It is therefore important to first determine counselors' needs. Starting from this point, this study aims to reveal the perspectives of counselor candidates on universal values in the marriage relationship. The study group was formed through criterion sampling, one of the purposive sampling methods. The criteria were being a counselor candidate and having knowledge of the concepts covered in a Marriage and Family Counseling course, since candidates with comprehensive knowledge of the field who have mastered these concepts strengthen the findings of the study. Accordingly, 61 counselor candidates, 32 (52%) female and 29 (48%) male, who were about to graduate from a university in south-east Turkey and had taken a Marriage and Family Counseling course, voluntarily participated in the study. The average age of the candidates was 23. The parents of these candidates had married through arranged marriage (70%), flirting (13%), relative marriage (8%), friend circles (7%), and custom (2%). The data were collected through a Demographic Information Form and a form titled 'Universal Values Form in Marriage', consisting of six questions prepared by the researchers. After the data were transferred to the computer, the necessary statistical evaluations were made. Qualitative data analysis was used on the data obtained in the study.
The six basic universal values of trustworthiness, respect, responsibility, fairness, caring, and citizenship, determined under the name 'Six Pillars of Character', were used as a basis, and frequency values were calculated through content analysis. According to the findings, the value most students find most important in the marriage relationship is trustworthiness, while the value they find least important is citizenship consciousness. The study also found that the counselor candidates most frequently associated trustworthiness with 'loyalty' (33%), respect with 'no violence' (23%), responsibility with 'spouses fulfilling their own gender roles' (35%), fairness with 'impartiality' (25%), caring with 'being helpful' (25%), and citizenship with 'love of country' (14%) and 'respect for the laws' (14%). It is believed that these results will contribute to arrangements for developing counselor candidates' value-related counseling skills in marriage and family counseling curricula.
Keywords: caring, citizenship, counselor candidate, fairness, marriage relationship, respect, responsibility, trustworthiness, value system
Procedia PDF Downloads 273
939 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database: two use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase compositions of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis, together with artificial neural networks (ANN), was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the root mean squared error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions of sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid bias.
The statistical evaluation was performed by analyzing the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used to predict subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich.; Los Humeros, Pue.; and Cerro Prieto, B.C.) as well as in a blind geothermal system (Acoculco, Puebla). The latest results of the gas geothermometers (inferred from the gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
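The log-linear geothermometer form and the RMSE criterion described in this abstract can be made concrete with a small numerical sketch. All data, coefficients, and noise levels below are invented for illustration and are not the study's; a faithful reproduction would use the authors' Levenberg-Marquardt-trained ANN architectures rather than this simple least-squares fit.

```python
import numpy as np

# Toy calibration of a log-linear gas geothermometer, T = a + b * ln(X),
# where X stands for a gas ratio such as CO2/H2. All numbers are synthetic.
rng = np.random.default_rng(0)
ln_x = rng.uniform(-2.0, 4.0, 200)                        # ln(gas ratio), synthetic
bht_m = 180.0 + 25.0 * ln_x + rng.normal(0.0, 3.0, 200)   # "measured" BHT (degC)

# Least-squares fit of the coefficients a and b
A = np.column_stack([np.ones_like(ln_x), ln_x])
coef, *_ = np.linalg.lstsq(A, bht_m, rcond=None)

# RMSE between measured and predicted temperatures, the figure of merit
# the abstract uses to rank geothermometers
bht_pred = A @ coef
rmse = float(np.sqrt(np.mean((bht_m - bht_pred) ** 2)))
print(coef, rmse)
```

With the synthetic noise level of 3 °C, the fitted coefficients closely recover the true values and the RMSE comes out near the noise level, which is the same kind of comparison made between BHTANN and BHTM in the study.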
Procedia PDF Downloads 354
938 Application of the Sufficiency Economy Philosophy to an Integrated Instructional Model for In-Service Teachers of Schools under the Project Initiated by H.R.H. Princess Maha Chakri Sirindhorn, Nakhonnayok Educational Service Area Office
Authors: Kathaleeya Chanda
Abstract:
The schools under the Project Initiated by H.R.H. Princess Maha Chakri Sirindhorn in the Nakhonnayok Educational Service Area Office are small schools situated in remote and undeveloped areas. Thus, school-age youth have had few or no opportunities to study at the higher education level, which can lead to many social and economic problems. This study aims to address the educational issues of these schools through the development of teachers, so that teachers can improve the teaching and learning system, with the ultimate goals of increasing students' academic achievement, increasing educational opportunities for youth in the area, and helping them learn happily. 154 in-service teachers from 22 schools in 4 districts of Nakhonnayok participated in the teacher training. Most teachers were satisfied with the training content and the trainer. Thereafter, the teachers were given a test to assess their skills and knowledge after training, and most earned a score higher than 75%. Accordingly, it can be concluded that after attending the training, the teachers had a clear understanding of the contents. After the training session, each teacher had to write a lesson plan integrated with, or adapted to, the Sufficiency Economy Philosophy. Teachers could adopt either intradisciplinary or interdisciplinary integration according to the actual teaching conditions in their school. Two weeks after the training session, the researchers visited the schools to discuss with the teachers and follow up on the assigned integrated lesson plans.
It was revealed that progress on the integrated lesson plans fell into 3 groups: 1) teachers who had completed the integrated lesson plan but were concerned about its accuracy and consistency; 2) teachers who had almost completed the lesson plan or made great progress but were still concerned or confused about some aspects and had not filled in the details of the plan; and 3) teachers who had made little progress, were uncertain or confused about many aspects, and may have been overloaded with tasks at their school. However, the follow-up procedure led to the teachers' commitment to complete their lesson plans. Regarding student learning assessment, in the experimental teaching most of the students earned a score higher than 50%, a rate higher than that from regular teaching. In addition, the teachers assessed that the students were happy, enjoyed learning, and cooperated well in teaching activities. Interviews with students about the new lesson plans showed that they were happy with them, willing to learn, and able to apply the knowledge in daily life. Integrated lesson plans can thus increase educational opportunities for youth in the area.
Keywords: sufficiency, economy, philosophy, integrated education syllabus
Procedia PDF Downloads 188
937 Reflections of Narrative Architecture in Transformational Representations on the Architectural Design Studio
Authors: M. Mortas, H. Asar, P. Dursun Cebi
Abstract:
Visionary works of architectural representation in the 21st century are practiced through methodologies that try to expose the intellectual and theoretical essences of futurologist positions revealed by this era's interactions. The expansion of the conceptual and contextual inputs of an architectural design representation depends on the depth of its critical attitudes, its interactions with concepts such as experience, meaning, affection, psychology, perception and aura, and its communication with spatial, cultural and environmental factors. The purpose of this study is to offer methodological application areas for the design dimensions of experiential practices in architectural design studios, by focusing on the architectural representative narrations of 'transformation', 'metamorphosis', 'morphogenesis', 'in-betweenness', 'superposition' and 'intertwining', which affect and are affected by today's spatiotemporal hybridizations of architecture. The narrative representations and visual theory paradigms of the designers are examined under the main title of 'transformation' to investigate the dismantlings and decodings of these visionary and critical representations. The case studies of this research are chosen from Neil Spiller, Bryan Cantley, Perry Kulper and Dan Slavinsky's transformative, morphogenetic representations. The theoretical dismantlings and decodings obtained from these artists' contemporary architectural representations are applied in design studios as alternative methodologies for approaching architectural design processes, in order to enrich, differentiate, diversify and 'transform' the design process precedents used so far.
The research aims to show architecture students how they can reproduce, rethink and reimagine their own representative lexicons, and thus the languages of their architectural imaginations, with regard to the newly perceived tectonics of prosthetics, biotechnology, synchronicity, nanotechnology or machinery, in various experiential design workshops. The methodology of this work reveals the technical and theoretical tools, lexicons and meanings of the contemporary visionary architectural representations of our decade, drawing on the essential contents and components of hermeneutics, etymology, existentialism, post-humanism, phenomenology and avant-gardism to reinterpret the transformative representations of today's architectural visual theorists. The value of this study may lie in revealing the superposed and overlapped atmospheres of futurologist architectural representations for students who need to rethink transcultural, deterritorialized and post-humanist critical theories in order to create and use their own representative visual lexicons for their architectural soft machines and beings, criticizing the now so as to be imaginative about the future of architecture.
Keywords: architectural design studio, visionary lexicon, narrative architecture, transformative representation
Procedia PDF Downloads 142
936 Creating Standards to Define the Role of Employment Specialists: A Case Study
Authors: Joseph Ippolito, David Megenhardt
Abstract:
In the United States, displaced workers, the unemployed and those seeking to build additional work skills are provided employment training and job placement services through a system of One-Stop Career Centers that are sponsored by the country’s 593 local Workforce Boards. During the period 2010-2015, these centers served roughly 8 million individuals each year. The quality of services provided at these centers rests upon professional employment specialists who work closely with clients to identify their job interests, to connect them to appropriate training opportunities, to match them with needed supportive social services and to guide them to eventual employment. Despite the crucial role these Employment Specialists play, currently there are no broadly accepted standards that establish what these individuals are expected to do in the workplace, nor are there indicators to assess how well an individual performs these responsibilities. Education Development Center (EDC) and the United Labor Agency (ULA) have partnered to create a foundation upon which curriculum can be developed that addresses the skills, knowledge and behaviors that Employment Specialists must master in order to serve their clients effectively. EDC is a non-profit, education research and development organization that designs, implements, and evaluates programs to improve education, health and economic opportunity worldwide. ULA is the social action arm of organized labor in Greater Cleveland, Ohio. ULA currently operates One-Stop Career Centers in both Cleveland and Pittsburgh, Pennsylvania. This case study outlines efforts taken to create standards that define the work of Employment Specialists and to establish indicators that can guide assessment of work performance. The methodology involved in the study has engaged a panel of expert Employment Specialists in rigorous, structured dialogues that analyze and identify the characteristics that enable them to be effective in their jobs. 
It has also drawn upon and integrated reviews of the panel's work by more than 100 other Employment Specialists across the country. The results of this process are two documents that provide resources for developing training curricula for future Employment Specialists: an occupational profile of an Employment Specialist that offers a detailed articulation of the skills, knowledge and behaviors that enable individuals to be successful at this job, and a collection of performance-based indicators, aligned to the profile, which illustrate what the work responsibilities of an Employment Specialist 'look like' at four levels of effectiveness ranging from novice to expert. The method of occupational analysis used by the study has application across a broad number of fields.
Keywords: assessment, employability, job standards, workforce development
Procedia PDF Downloads 236
935 Selective Extraction of Lithium from Native Geothermal Brines Using Lithium-ion Sieves
Authors: Misagh Ghobadi, Rich Crane, Karen Hudson-Edwards, Clemens Vinzenz Ullmann
Abstract:
Lithium, often termed 'white gold', is recognized as the critical energy metal of the 21st century, comparable in importance to coal in the 19th century and oil in the 20th. Current global demand for lithium, estimated at 0.95-0.98 million metric tons (Mt) of lithium carbonate equivalent (LCE) annually in 2024, is projected to rise to 1.87 Mt by 2027 and 3.06 Mt by 2030. Despite anticipated short-term stability in supply and demand, meeting the forecasted 2030 demand will require the lithium industry to develop an additional capacity of 1.42 Mt of LCE annually, exceeding current planned and ongoing efforts. Brine resources constitute nearly 65% of global lithium reserves, underscoring the importance of exploring lithium recovery from underutilized sources, especially geothermal brines. However, conventional lithium extraction from brine deposits faces challenges due to its time-intensive process, low efficiency (30-50% lithium recovery), unsuitability for low lithium concentrations (<300 mg/l), and notable environmental impacts. Addressing these challenges, direct lithium extraction (DLE) methods have emerged as promising technologies capable of economically extracting lithium even from low-concentration brines (>50 mg/l) with high recovery rates (75-98%). However, roughly 70% of studies have focused on synthetic rather than native (i.e., natural) brines, with limited application of these approaches in real-world case studies or industrial settings. This study aims to bridge this gap by investigating a geothermal brine sample collected from a real case study site in the UK. A Mn-based lithium-ion sieve (LIS) adsorbent was synthesized and employed to selectively extract lithium from the sample brine. Adsorbents with a Li:Mn molar ratio of 1:1 demonstrated superior lithium selectivity and adsorption capacity.
Furthermore, the pristine Mn-based adsorbent was modified by doping with transition metals, resulting in enhanced lithium selectivity and adsorption capacity. The modified adsorbent exhibited a high separation factor for lithium over major co-existing cations such as Ca, Mg, Na, and K, with separation factors exceeding 200. The adsorption behaviour was well described by the Langmuir model, indicating monolayer adsorption, and the kinetics followed a pseudo-second-order mechanism, suggesting chemisorption at the solid surface. Thermodynamically, negative ΔG° values and positive ΔH° and ΔS° values were observed, indicating that the adsorption process is spontaneous and endothermic.
Keywords: adsorption, critical minerals, DLE, geothermal brines, geochemistry, lithium, lithium-ion sieves
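The Langmuir fit and the separation factors quoted above follow standard definitions, which can be sketched briefly. All numbers below are invented for illustration and are not the study's data: the Langmuir isotherm q = qmax·K·C/(1 + K·C) is fitted via its common linearized form C/q = C/qmax + 1/(K·qmax), and a Li/Na separation factor is taken as the ratio of distribution coefficients Kd = q/C.

```python
import numpy as np

# Synthetic equilibrium data generated from an assumed Langmuir isotherm
qmax_true, K_true = 40.0, 0.05                          # mg/g and L/mg (assumed)
C = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])   # equilibrium conc., mg/L
q = qmax_true * K_true * C / (1.0 + K_true * C)         # uptake at equilibrium, mg/g

# Linearized Langmuir fit: C/q = C/qmax + 1/(K*qmax)
slope, intercept = np.polyfit(C, C / q, 1)
qmax_fit = 1.0 / slope
K_fit = slope / intercept

# Separation factor from single-point distribution coefficients Kd = q/C
q_li, c_li = 12.0, 40.0     # Li uptake and residual concentration (assumed)
q_na, c_na = 0.5, 9000.0    # Na uptake and residual concentration (assumed)
alpha_li_na = (q_li / c_li) / (q_na / c_na)
print(qmax_fit, K_fit, alpha_li_na)
```

With the illustrative numbers chosen here the separation factor comes out in the thousands, i.e., well above the >200 threshold the abstract reports for the modified adsorbent.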
Procedia PDF Downloads 48
934 Raman Spectroscopy Analysis of MnTiO₃-TiO₂ Eutectic
Authors: Adrian Niewiadomski, Barbara Surma, Katarzyna Kolodziejak, Dorota A. Pawlak
Abstract:
Oxide-oxide eutectics are attracting increasing interest from the scientific community because of their unique properties and numerous potential applications. Some of the most interesting examples of applications are metamaterials, glucose sensors, photoactive materials, thermoelectric materials, and photocatalysts. Their unique properties result from the fact that composite materials consist of two or more phases; as a result, these materials have both additive and product properties. Additive properties originate from the particular phases, while product properties originate from the interaction between phases. The MnTiO₃-TiO₂ eutectic is one such material. TiO₂ is a well-known semiconductor used as a photocatalyst; moreover, it may be used to produce solar cells, in gas-sensing devices, and in electrochemistry. MnTiO₃ is a semiconductor and an antiferromagnet, and therefore has potential applications in integrated-circuit devices, as a gas and humidity sensor, in non-linear optics, and as a visible-light-activated photocatalyst. The above facts indicate that the MnTiO₃-TiO₂ eutectic is an extremely promising material that should be studied; yet, although Raman spectroscopy is a powerful method for characterizing materials, to our knowledge Raman studies of eutectics are very limited, there are no Raman studies of the MnTiO₃-TiO₂ eutectic, and the papers regarding this material are scarce. The MnTiO₃-TiO₂ eutectic, as well as TiO₂ and MnTiO₃ single crystals, were grown by the micro-pulling-down method at the Institute of Electronic Materials Technology in Warsaw, Poland. A nitrogen atmosphere was maintained during the whole crystal growth process. The as-grown samples of the MnTiO₃-TiO₂ eutectic and of the TiO₂ and MnTiO₃ single crystals are black and opaque. The samples were cut perpendicular to the growth direction, and the cross sections were examined with scanning electron microscopy (SEM) and Raman spectroscopy.
The present studies showed that maintaining a nitrogen atmosphere during crystal growth may result in black TiO₂ crystals. SEM and Raman experiments showed that the studied eutectic consists of three distinct regions: two of them correspond to MnTiO₃, while the third corresponds to a TiO₂₋ₓNₓ phase. Raman studies pointed out that the TiO₂₋ₓNₓ phase crystallizes in the rutile structure. These results show that Raman experiments can be successfully used to characterize eutectic materials. In summary, the MnTiO₃-TiO₂ eutectic was grown by the micro-pulling-down method, and SEM and micro-Raman experiments were used to establish its phase composition. The studies revealed that the TiO₂ phase had been doped with nitrogen and is therefore, in fact, a solid solution with TiO₂₋ₓNₓ composition. The remaining two phases exhibit Raman lines of both rutile TiO₂ and MnTiO₃, which points to some kind of coexistence of these phases in the studied eutectic.
Keywords: compound materials, eutectic growth and characterization, Raman spectroscopy, rutile TiO₂
Procedia PDF Downloads 195
933 Optimization of Mechanical Properties of Alginate Hydrogel for 3D Bio-Printing Self-Standing Scaffold Architecture for Tissue Engineering Applications
Authors: Ibtisam A. Abbas Al-Darkazly
Abstract:
In this study, the mechanical properties of alginate hydrogel for self-standing 3D scaffold architectures with proper shape fidelity are investigated. An in-lab-built extrusion-based 3D bioprinter is used to fabricate the 3D alginate scaffold constructs. The pressure, needle speed, and stage speed are varied using a computer-controlled system. The experimental results indicate that the concentration of the alginate solution, the calcium chloride (CaCl₂) cross-linking concentration, and the cross-linking ratios lead to the formation of alginate hydrogels with various gelation states. Besides, the gelling conditions, such as cross-linking reaction time and temperature, also have a significant effect on the mechanical properties of the alginate hydrogel. Various experimental tests, such as material gelation, material spreading, and printability tests for filament collapse, as well as a swelling test, were conducted to evaluate the fabricated 3D scaffold constructs. The results indicate that the 3D scaffold fabricated from a 3.5% wt alginate solution prepared in DI water, cross-linked with a 1% wt CaCl₂ solution at a cross-linking ratio of 7:3, shows good printability and sustains good shape fidelity for more than 20 days, compared to alginate hydrogel prepared in phosphate-buffered saline (PBS). The fabricated self-standing 3D scaffold constructs measured 30 mm × 30 mm, consisted of 4 layers (n = 4), and showed good pore geometry and a clear grid structure after printing. In addition, the swelling degree shows a high swelling capability that increases with time. The swelling test shows that the geometries of the 3D alginate scaffold construct and of the macro-pores barely changed, which indicates the capability of holding the shape fidelity during the incubation period.
This study demonstrated that the mechanical and physical properties of alginate hydrogel can be tuned for an extrusion-based 3D bioprinting system to fabricate self-standing soft 3D scaffold structures. This bioengineered scaffold provides the natural microenvironment present in the extracellular matrix of the tissue and could be seeded with biological cells to generate the desired 3D live tissue model for in vitro and in vivo tissue engineering applications.
Keywords: biomaterial, calcium chloride, 3D bio-printing, extrusion, scaffold, sodium alginate, tissue engineering
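Swelling degree is commonly reported as the percentage weight gain over the dry scaffold weight; the abstract does not state its exact formula, so the definition and the weights below are illustrative assumptions only, shown to make the calculation concrete.

```python
# Common definition (assumed, not confirmed by the abstract):
# SD(%) = 100 * (W_swollen - W_dry) / W_dry
def swelling_degree(w_swollen: float, w_dry: float) -> float:
    """Percent swelling of a scaffold from wet and dry weights (grams)."""
    return 100.0 * (w_swollen - w_dry) / w_dry

# Hypothetical (wet, dry) weight pairs at successive incubation times
weights = [(0.12, 0.030), (0.19, 0.030), (0.24, 0.030)]
sd = [swelling_degree(w, d) for w, d in weights]
print(sd)  # swelling degree rises with incubation time
```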
Procedia PDF Downloads 113
932 Semiconductor Properties of Natural Phosphate: Application to the Photodegradation of Basic Dyes in Single and Binary Systems
Authors: Y. Roumila, D. Meziani, R. Bagtache, K. Abdmeziem, M. Trari
Abstract:
Heterogeneous photocatalysis over semiconductors has proved its effectiveness in the treatment of wastewaters, since it works under mild conditions. It has emerged as a promising technique, giving rise to less toxic effluents and offering the opportunity of using sunlight as a sustainable and renewable source of energy. Many compounds have been used as photocatalysts. Though synthesized ones are intensively used, they remain expensive, and their synthesis involves special conditions. We therefore chose to implement a natural material, a phosphate ore, due to its low cost and wide availability. Our work is devoted to the removal of hazardous organic pollutants, which cause several environmental problems and health risks. Among them, dye pollutants occupy a large place. This work concerns the photodegradation of methyl violet (MV) and rhodamine B (RhB), in single and binary systems, under UV light and sunlight irradiation. Methyl violet is a triarylmethane dye, while RhB is a heteropolyaromatic dye belonging to the xanthene family. In the first part of this work, the natural compound was characterized using several physicochemical and photo-electrochemical (PEC) techniques: X-ray diffraction, chemical and thermal analyses, scanning electron microscopy, UV-Vis diffuse reflectance measurements, and FTIR spectroscopy. The electrochemical and photo-electrochemical studies were performed with a Voltalab PGZ 301 potentiostat/galvanostat at room temperature, and the structure of the phosphate material was well characterized. The PEC properties are crucial for drawing the energy band diagram, in order to suggest the formation of radicals and the reactions involved in the photo-oxidation mechanism of the dyes. The PEC characterization of the natural phosphate was investigated in a neutral solution (Na₂SO₄, 0.5 M). The study revealed the semiconducting behavior of the phosphate rock.
Indeed, the thermal evolution of the electrical conductivity was well fitted by an exponential-type law, with the electrical conductivity increasing as the temperature is raised. The Mott–Schottky plot and the current-potential J(V) curves recorded in the dark and under illumination clearly indicate n-type behavior. The photocatalysis results in single solutions, namely the change in MV and RhB absorbance as a function of time, show that practically all of the MV was removed after 240 min of irradiation. For RhB, complete degradation was achieved only after 330 min, owing to its complex and resistant structure. In binary systems, it is only after 120 min that RhB begins to be slowly removed, by which point about 60% of the MV is already degraded. Once nearly all of the MV in the solution has disappeared (after about 250 min), the remaining RhB is degraded rapidly. This behaviour differs from that observed in single solutions, where both dyes are degraded from the first minutes of irradiation.
Keywords: environment, organic pollutant, phosphate ore, photodegradation
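Dye removal curves like those described above are often modeled with pseudo-first-order kinetics, C(t) = C0·exp(-kt). The abstract reports removal times rather than rate constants, so the rate constant below is an assumed value, chosen only to illustrate how a removal percentage follows from the model.

```python
import math

def removal_percent(k_per_min: float, t_min: float) -> float:
    """Percent of dye degraded after t minutes, pseudo-first-order model."""
    return 100.0 * (1.0 - math.exp(-k_per_min * t_min))

# Assumed rate constant: chosen so that ~95% of MV is removed at 240 min,
# roughly matching "practically all of the MV was removed after 240 min"
k_mv = math.log(20.0) / 240.0
print(round(removal_percent(k_mv, 240.0), 1))  # -> 95.0
```

Fitting ln(C0/C) against t from the measured absorbance decay would yield the actual rate constants for MV and RhB in single and binary systems.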
Procedia PDF Downloads 132