Search results for: optimal digital signal processing
707 Effects of Mild Heat Treatment on the Physical and Microbial Quality of Salak Apricot Cultivar
Authors: Bengi Hakguder Taze, Sevcan Unluturk
Abstract:
Şalak apricot (Prunus armeniaca L., cv. Şalak) is a specific variety grown in Igdir, Turkey. The fruit has distinctive properties that distinguish it from other cultivars, such as its unique size, color, taste, and higher water content. Drying is the most widely used method for preserving apricots. However, fresh consumption is preferred for Şalak apricot instead of drying due to its low dry matter content. The higher amount of water in its structure and its climacteric nature make the fruit susceptible to rapid quality loss during storage. Hence, alternative processing methods need to be introduced to extend the shelf life of the fresh produce. Mild heat (MH) treatment is of great interest as it can reduce the microbial load and inhibit enzymatic activities. Therefore, the aim of this study was to evaluate the impact of mild heat treatment on the natural microflora found on Şalak apricot surfaces and on some physical quality parameters of the fruit, such as color and firmness. For this purpose, apricot samples were treated at temperatures between 40 and 60 °C for periods ranging from 10 to 60 min using a temperature-controlled water bath. The natural flora on the fruit surfaces was examined using the standard plating technique both before and after the treatment. Moreover, changes in the color and firmness of the fruit samples were also monitored. It was found that control samples initially contained 7.5 ± 0.32 log CFU/g total aerobic plate count (TAPC), 5.8 ± 0.31 log CFU/g yeast and mold count (YMC), and 5.17 ± 0.22 log CFU/g coliforms. The highest log reductions in TAPC and YMC were 3.87-log and 5.8-log after treatments at 60 °C and 50 °C, respectively. Nevertheless, the fruit lost its characteristic aroma at temperatures above 50 °C. Furthermore, large color changes (ΔE > 6) were observed and the firmness of the apricot samples was reduced under these conditions.
On the other hand, MH treatment at 41 °C for 10 min resulted in 1.6-log and 0.91-log reductions in TAPC and YMC, respectively, with only slightly noticeable changes in color (ΔE < 3). In conclusion, application of temperatures higher than 50 °C caused undesirable changes in the physical quality of Şalak apricots. Although higher microbial reductions were achieved at those temperatures, temperatures between 40 and 50 °C should be further investigated considering the fruit quality parameters. Another strategy may be the use of high temperatures for short periods not exceeding 1-5 min. In addition, combining MH treatment with UV-C light irradiation can be considered as a hurdle strategy for better inactivation results.
Keywords: color, firmness, mild heat, natural flora, physical quality, şalak apricot
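The color thresholds quoted above (ΔE > 6 for a pronounced change, ΔE < 3 for a slight one) correspond to a color difference computed from L*a*b* colorimeter readings. A minimal sketch, assuming the CIE76 formulation; the two readings are hypothetical, as the abstract reports only ΔE thresholds, not raw values:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings for an untreated and a heat-treated apricot surface
untreated = (65.0, 18.0, 40.0)
treated = (61.0, 15.0, 36.0)
print(round(delta_e_ab(untreated, treated), 2))  # 6.4, above the ΔE > 6 threshold
```

A value above 6 would indicate the clearly visible color change the study associates with treatments above 50 °C.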
Procedia PDF Downloads 137
706 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City
Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja
Abstract:
This study was undertaken to measure overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers in Mumbai, India. The Hearing Test developed by the American Academy of Otolaryngology, translated from English to Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items, each scored on a scale of 0 to 3. A total score of 6 or above indicated some or definite difficulty in hearing during daily activities, while a lower score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above, or who had tinnitus, underwent hearing evaluation with a pure-tone audiometer. Further, environmental noise levels were measured from morning to evening at the roadside at different locations/hawking zones in Mumbai city using a digital sound level meter (SLM9 Agronic 8928 B&K type) in dB(A). The maximum noise level of 100.0 dB(A) was recorded during evening hours from Chhatrapati Shivaji Terminus to Colaba, with an overall noise level of 79.0 dB(A). However, the minimum noise level in this area was 72.6 dB(A) at any given time. Further, 54.6 dB(A) was recorded as the minimum noise level during 8-9 am at Sion Circle. The commissioning of flyovers with two-tier traffic, skywalks, increasing vehicular road traffic, high-rise buildings, and other commercial and urbanization activities in Mumbai city have most probably increased overall environmental noise levels. Trees, which acted as noise absorbers, have been cut owing to rapid construction. The study involved 100 participants in the age range of 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). The 46 participants who had tinnitus or scored 6 or above underwent pure-tone audiometry, and it was found that the prevalence of hearing loss in hawkers and shopkeepers is 19% (10% hawkers and 9% shopkeepers).
The results indicate that the 29 (42.6%) of 64 hawkers and 17 (47.2%) of 36 shopkeepers who underwent PTA showed no significant difference in the percentage of noise-induced hearing loss. The results also reveal that 19 of the 46 participants (41.3%) who exhibited tinnitus had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure-tone audiograms revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal. Seven hawkers and 8 shopkeepers had a mild notch, while 3 hawkers and 1 shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss, and that a 4/6 kHz notch is a strong marker of road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups, and early intervention, along with sustainable development juxtaposed with social and urban forestry, can help in this regard.
Keywords: NIHL, noise, sound level meter, tinnitus
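The screening arithmetic described above (14 items, each scored 0-3, with a referral cutoff of 6) can be sketched as follows; the respondent's item scores are hypothetical:

```python
def screening_result(item_scores, cutoff=6):
    """Sum 14 self-report items (each scored 0-3); a total at or above
    the cutoff flags the respondent for pure-tone audiometry."""
    assert len(item_scores) == 14 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    return total, total >= cutoff

scores = [1, 0, 2, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # hypothetical respondent
print(screening_result(scores))  # (7, True): refer for audiometry
```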
Procedia PDF Downloads 204
705 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries
Authors: Behzad Babaei, B. Gangadhara Prusty
Abstract:
The objective of this study is to assess the hypothesis that a restored tooth with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimized cavity geometry, as well as by selecting a composite restoration with optimized elastic moduli, when there is a sharp de-bonded edge at the interface of the tooth and restoration. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into finite element (FE) software. The enamel rod orientations were estimated virtually. Fifteen models of the restored tooth with different occlusal cavity depths (1.5, 2, and 2.5 mm) and internal cavity angles were generated. Using a semi-circular stone part, a 400 N load was applied at two contact points of the restored tooth model. The junctions between the enamel, dentine, and restoration were considered perfectly bonded. All parts in the model were considered homogeneous, isotropic, and elastic. Quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element count did not influence the simulation results. According to the criterion of a 5% error in stress, we found that a total of over 14,000 elements resulted in convergence of the stress. A Python script was employed to automatically assign moduli of 2-22 GPa (in increments of 4 GPa) to the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa in the enamel rods' direction and 63 GPa in the perpendicular direction). Linear, homogeneous, elastic material models were considered for the dentine, enamel, and composite restorations. 108 FEA simulations were conducted in succession. Results: The internal cavity angle (α) significantly altered the peak maximum principal stress at the interface of the enamel and restoration.
The strongest structures against the contact loads were observed in the models with α = 100° and 105°. Interestingly, even when the directional mechanical properties of the enamel rods were disregarded, the models with α = 100° and 105° exhibited the highest resistance to the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with deeper cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties, and occlusal cavity depths were determined. Designing the cavities with α ≥ 100° was significantly effective in minimizing peak stress levels. The composite restoration with optimized properties reduced stress concentrations at critical points of the models. Additionally, when more enamel was preserved, the enamel-restoration interface was sturdier against the mechanical loads.
Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress
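The parametric sweep described above can be sketched as a driver script. The solver call is a placeholder, and the exact set of internal angles is an assumption (the abstract gives the three depths and the 2-22 GPa moduli in 4 GPa steps, but not every angle value); the angle set here is chosen so the case count matches the 108 simulations reported:

```python
import itertools

# Hypothetical sweep mirroring the study's design space: composite moduli
# of 2-22 GPa in 4 GPa steps, three cavity depths, and a set of internal
# cavity angles. run_fea() is a placeholder for the actual solver call.
composite_moduli_gpa = range(2, 23, 4)        # 2, 6, 10, 14, 18, 22
cavity_depths_mm = (1.5, 2.0, 2.5)
internal_angles_deg = (90, 95, 100, 105, 110, 115)  # assumed values

def run_fea(modulus_gpa, depth_mm, angle_deg):
    # Placeholder: would launch one simulation and return peak
    # maximum principal stress at the enamel-restoration interface.
    return None

cases = list(itertools.product(composite_moduli_gpa,
                               cavity_depths_mm,
                               internal_angles_deg))
print(len(cases))  # 108 combinations
```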
Procedia PDF Downloads 102
704 Ammonia Bunkering Spill Scenarios: Modelling Plume’s Behaviour and Potential to Trigger Harmful Algal Blooms in the Singapore Straits
Authors: Bryan Low
Abstract:
In the coming decades, the global maritime industry will face a most formidable environmental challenge: achieving net zero carbon emissions by 2050. To meet this target, the Maritime and Port Authority of Singapore (MPA) has worked to establish green shipping and digital corridors with ports of several other countries around the world where ships will use low-carbon alternative fuels such as ammonia for power generation. While this paradigm shift to the bunkering of greener fuels is encouraging, fuels like ammonia will also introduce a new and unique type of environmental risk in the unlikely scenario of a spill. While numerous modelling studies have been conducted for oil spills and their associated environmental impact on coastal and marine ecosystems, ammonia spills are comparatively less well understood. For example, there is a knowledge gap regarding how the complex hydrodynamic conditions of the Singapore Straits may influence the dispersion of a hypothetical ammonia plume, which has different physical and chemical properties compared to an oil slick. Chemically, ammonia can be absorbed by phytoplankton, thus altering the balance of the marine nitrogen cycle. Biologically, ammonia generally serves the role of a nutrient in coastal ecosystems at lower concentrations. However, at higher concentrations, it has been found to be toxic to many local species. It may also have the potential to trigger eutrophication and harmful algal blooms (HABs) in coastal waters, depending on local hydrodynamic conditions. Thus, the key objective of this research paper is to support the development of a model-based forecasting system that can predict ammonia plume behaviour in coastal waters, given prevailing hydrodynamic conditions, and assess the environmental impact. This will be essential as ammonia bunkering becomes more commonplace in Singapore’s ports and around the world.
Specifically, this system must be able to assess the HAB-triggering potential of an ammonia plume, as well as its lethal and sub-lethal toxic effects on local species. This will allow the relevant authorities to better plan risk mitigation measures or choose a time window with ideal hydrodynamic conditions in which to conduct ammonia bunkering operations with minimal risk. In this paper, we present the first part of such a forecasting system: a coupled hydrodynamic-water quality model that captures how advection-diffusion processes driven by ocean currents influence plume behaviour, and how the plume interacts with the marine nitrogen cycle. The model is then applied to various ammonia spill scenarios, and the results are discussed in the context of current ammonia toxicity guidelines, impact on local ecosystems, and mitigation measures for future bunkering operations conducted in the Singapore Straits.
Keywords: ammonia bunkering, forecasting, harmful algal blooms, hydrodynamics, marine nitrogen cycle, oceanography, water quality modeling
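The advection-diffusion transport that such a coupled model resolves can be illustrated with a minimal one-dimensional sketch. All parameters here (current speed, diffusivity, a first-order decay term standing in for nitrogen-cycle uptake) are illustrative assumptions, not calibrated values from the study:

```python
import numpy as np

# Minimal 1-D advection-diffusion-decay sketch of a dissolved ammonia plume,
# illustrating the transport processes a coupled hydrodynamic-water quality
# model resolves. Illustrative, uncalibrated parameters throughout.
nx, dx, dt = 200, 10.0, 1.0        # grid cells, cell size (m), time step (s)
u, D, k = 0.3, 1.0, 1e-5           # current (m/s), diffusivity (m2/s), decay (1/s)
c = np.zeros(nx)
c[90:110] = 5.0                    # initial spill patch (mg/L)

for _ in range(3600):              # one hour of simulated time
    adv = -u * (c - np.roll(c, 1)) / dx                         # upwind advection (u > 0)
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2  # central diffusion
    c = c + dt * (adv + dif - k * c)                            # explicit update with decay

print(round(c.max(), 3))  # peak concentration, lowered by spreading and decay
```

The forecasting system described above would replace this toy scheme with 2-D/3-D currents from the hydrodynamic model and full nitrogen-cycle kinetics.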
Procedia PDF Downloads 83
703 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice in high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of South Italy. More specifically, the importance of using validated models is examined through an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, most commercial tools today provide designers with a library of predefined schedules with which thermal zones can be described. Very often, users do not take care to differentiate thermal zones or to modify and adapt the predefined profiles, and the design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability during energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system and the occupants therefore cannot interact with it.
More specifically, starting from the adopted schedules, which were created from questionnaire responses and allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request. Then the different consumption entries are analyzed, and for the most interesting cases the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
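The abstract compares calibration indexes between scenarios without naming them; two indexes commonly used when calibrating building energy models (for example, in ASHRAE Guideline 14 practice) are NMBE and CV(RMSE). A sketch with hypothetical monthly data, not the study's own figures:

```python
import math

def nmbe(measured, simulated):
    """Normalized Mean Bias Error (%): signed average deviation of the
    simulation from the metered data, relative to the metered mean."""
    n, mean = len(measured), sum(measured) / len(measured)
    return 100 * sum(s - m for m, s in zip(measured, simulated)) / (n * mean)

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE (%): scatter of the simulation
    around the metered data, relative to the metered mean."""
    n, mean = len(measured), sum(measured) / len(measured)
    rmse = math.sqrt(sum((s - m) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100 * rmse / mean

# Hypothetical monthly natural gas use (kWh): metered vs. simulated
metered = [3200, 2900, 2400, 1500, 800, 400]
modeled = [3050, 3000, 2300, 1600, 850, 380]
print(round(nmbe(metered, modeled), 2), round(cv_rmse(metered, modeled), 2))
```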
Procedia PDF Downloads 141
702 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research provides new insights into the risks of digitally embedded infrastructure. We analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research toward more stable, secure, and efficient autonomous systems. To understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive picture of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate, high-quality data are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep running while the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneous generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the weight of the data collected affects data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker overloads the system with bogus requests so that valid requests disappear in the noise.
Malware: malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployment, as it provides separation from the open Internet while remaining accessible via blockchain keys.
Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
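One simple, concrete form of the self-policing idea above is peer outlier detection: a device's reading is compared against the consensus of its peers. The median-absolute-deviation test below is an illustrative stand-in, not the consensus algorithm the study evaluated, and the threshold is an assumed value:

```python
import statistics

def deviant_readings(readings, threshold=3.5):
    """Flag readings that deviate strongly from the peer consensus using a
    median-absolute-deviation test (a simple stand-in for the peer-policing
    behavior described above; the 3.5 threshold is a common heuristic)."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings) or 1e-9
    return [r for r in readings if 0.6745 * abs(r - med) / mad > threshold]

# Six peer temperature sensors, one injecting garbage data
peers = [20.1, 19.8, 20.3, 20.0, 55.0, 19.9]
print(deviant_readings(peers))  # [55.0]
```

In the architecture sketched in the abstract, a flagged reading would then be rejected (or the device quarantined) by the peer-to-peer consensus layer rather than by a single node.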
Procedia PDF Downloads 107
701 Compositional Influence in the Photovoltaic Properties of Dual Ion Beam Sputtered Cu₂ZnSn(S,Se)₄ Thin Films
Authors: Brajendra S. Sengar, Vivek Garg, Gaurav Siddharth, Nisheka Anadkat, Amitesh Kumar, Shaibal Mukherjee
Abstract:
The optimal band gap (~1 to 1.5 eV) and high absorption coefficient of ~10⁴ cm⁻¹ have made Cu₂ZnSn(S,Se)₄ (CZTSSe) films one of the most promising absorber materials in thin-film photovoltaics. Additionally, CZTSSe consists of elements that are abundant and non-toxic, which makes it even more favourable. The CZTSSe thin films were grown at substrate temperatures (Tsub) of 100 to 500 °C on soda lime glass (SLG) substrates in an Elettrorava dual ion beam sputtering (DIBS) system, using a target at a working pressure of 2.43x10⁻⁴ mbar with an RF power of 45 W in argon ambient. The chemical composition, depth profiles, structural properties, and optical properties of these CZTSSe thin films prepared on SLG were examined by energy dispersive X-ray spectroscopy (EDX, Oxford Instruments), a Hiden secondary ion mass spectroscopy (SIMS) workstation with an oxygen ion gun of energy up to 5 keV, X-ray diffraction (XRD) (Rigaku, Cu Kα radiation, λ = 0.154 nm), and spectroscopic ellipsometry (SE, M-2000D from J. A. Woollam Co., Inc). It is observed that the thin films deposited at Tsub = 200 and 300 °C show Cu-poor and Zn-rich states (i.e., Cu/(Zn + Sn) < 1 and Zn/Sn > 1), which is not the case for films grown at other Tsub. It has been reported that the highest-efficiency CZTSSe thin films are typically in Cu-poor and Zn-rich states. The values of the band gap in the fundamental absorption region of CZTSSe are found to be in the range of 1.23-1.70 eV, depending on the Cu/(Zn+Sn) ratio. It is also observed that the optical band gap declines as the Cu/(Zn+Sn) ratio (evaluated from EDX measurements) increases: Cu-poor films have a higher optical band gap than Cu-rich films. The decrease in band gap with increasing Cu content in CZTSSe films may be attributed to changes in the extent of p-d hybridization between Cu d-levels and (S, Se) p-levels. CZTSSe thin films with Cu/(Zn+Sn) ratios in the range 0.86-1.5 have been successfully deposited using DIBS.
The optical band gap of the films varies from 1.23 to 1.70 eV depending on the Cu/(Zn+Sn) ratio. CZTSSe films with a Cu/(Zn+Sn) ratio of 0.86 are found to have an optical band gap close to the ideal band gap (1.49 eV) for the highest theoretical conversion efficiency. Thus, by tailoring the Cu/(Zn+Sn) ratio, CZTSSe thin films with the desired band gap can be obtained. Acknowledgment: We are thankful for the DIBS, EDX, and XRD facilities at the Sophisticated Instrument Centre (SIC) at IIT Indore. The authors B. S. S. and A. K. acknowledge CSIR, and V. G. acknowledges UGC, India, for their fellowships. B. S. S. is thankful to DST and IUSSTF for the BASE Internship Award. Prof. Shaibal Mukherjee is thankful to DST and IUSSTF for the BASE Fellowship and the MEITY YFRF award. This work is partially supported by DAE BRNS, DST CERI, and a DST-RFBR project under the India-Russia Programme of Cooperation in Science and Technology. We are thankful to Mukul Gupta for the SIMS facility at UGC-DAE Indore.
Keywords: CZTSSe, DIBS, EDX, solar cell
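The classification used above (Cu-poor when Cu/(Zn+Sn) < 1, Zn-rich when Zn/Sn > 1) follows directly from the EDX atomic percentages. A minimal sketch with a hypothetical composition, as the abstract does not list raw at.% values:

```python
def composition_ratios(at_pct):
    """Cation ratios used to classify CZTSSe films from EDX atomic
    percentages. at_pct: dict with keys 'Cu', 'Zn', 'Sn' (at.%)."""
    cu_ratio = at_pct['Cu'] / (at_pct['Zn'] + at_pct['Sn'])
    zn_sn = at_pct['Zn'] / at_pct['Sn']
    cu_poor_zn_rich = cu_ratio < 1 and zn_sn > 1
    return cu_ratio, zn_sn, cu_poor_zn_rich

# Hypothetical EDX reading for a film grown at Tsub = 200 C
print(composition_ratios({'Cu': 22.0, 'Zn': 14.0, 'Sn': 12.0}))
```

A film with these numbers would be Cu-poor and Zn-rich, the composition window the abstract associates with the highest-efficiency devices.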
Procedia PDF Downloads 250
700 Robotic Solution for Nuclear Facility Safety and Monitoring System
Authors: Altab Hossain, Shakerul Islam, Golamur R. Khan, Abu Zafar M. Salahuddin
Abstract:
Effective identification of breakdowns is of premier importance for the safe and reliable operation of nuclear power plants (NPP) and their associated facilities. A great number of monitoring and diagnosis methodologies are applied worldwide in areas such as industry, automobiles, hospitals, and power plants to detect failures and reduce the risk of disasters. The potential consequences of several hazardous activities at nuclear and associated facilities may harm society. Hence, one of the most popular and effective ways to ensure safety, monitor the entire nuclear facility, and enable risk-free operation without human intervention during hazardous situations is to use a robot. Therefore, in this study, an advanced autonomous robot has been designed and developed that can monitor several parameters in an NPP to ensure safety and perform risky tasks in the event of a nuclear disaster. The robot consists of an autonomous track-following unit and data processing and transmitting units; it can follow a straight line and take turns with bank angles greater than 90 degrees. The developed robot can analyze various parameters such as temperature, altitude, radiation, obstacles, and humidity; detect fire; measure distance; perform ultrasonic scans; and take the heat of any particular object. It can broadcast a live stream and record footage to its own server memory. A separate control unit, built on a baseboard, processes the recorded data, and a transmitter transmits the processed data. To make the robot user-friendly, the code is developed so that a user can control each robotic arm according to the type of work. For control anywhere, off the track, advanced code has been developed to allow manual override. Through this process, an administrator who has logged in with permission via Dynamic Host Configuration Protocol (DHCP) can hand over control of the robot.
In this way, the robot is afforded maximum security against being hacked. Beyond NPPs, this robot can be used to maximize the real-time monitoring of any nuclear facility, as well as nuclear material transportation and decomposition systems.
Keywords: nuclear power plant, radiation, dynamic host configuration protocol, nuclear security
Procedia PDF Downloads 209
699 Numerical Investigation of Combustion Chamber Geometry on Combustion Performance and Pollutant Emissions in an Ammonia-Diesel Common Rail Dual-Fuel Engine
Authors: Youcef Sehili, Khaled Loubar, Lyes Tarabet, Mahfoudh Cerdoun, Clement Lacroix
Abstract:
As emissions regulations grow more stringent and traditional fuel sources become increasingly scarce, incorporating carbon-free fuels in the transportation sector emerges as a key strategy for mitigating greenhouse gas emissions. While the utilization of hydrogen (H2) presents significant technological challenges, as evident in the engine limitation known as knocking, ammonia (NH3) provides a viable alternative that overcomes this obstacle and offers convenient transportation, storage, and distribution. Moreover, the implementation of a dual-fuel engine using ammonia as the primary gas is promising, delivering both ecological and economic benefits. However, when employing this combustion mode, substituting ammonia at high rates adversely affects combustion performance and leads to elevated emissions of unburnt NH3, especially under high loads, which requires special treatment of this combustion mode. This study aims to simulate combustion in a common rail direct injection (CRDI) dual-fuel engine, considering the baseline geometry of the combustion chamber as well as fifteen (15) proposed alternative geometries, to determine the configuration that exhibits superior engine performance under high-load conditions. The research presented here focuses on improving the understanding of the equations and mechanisms involved in the combustion of finely atomized jets of liquid fuel, and on mastering the CONVERGE™ code, which facilitates the simulation of this combustion process. By analyzing the effect of piston bowl shape on the performance and emissions of a diesel engine operating in dual-fuel mode, this work combines knowledge of combustion phenomena with proficiency in the calculation code. To select the optimal geometry, the Swirl, Tumble, and Squish flow patterns were evaluated for the fifteen (15) studied geometries.
Variations in in-cylinder pressure, heat release rate, turbulence kinetic energy, turbulence dissipation rate, and emission rates were observed, while thermal efficiency and specific fuel consumption were estimated as functions of crankshaft angle. To maximize thermal efficiency, a synergistic approach involving enrichment of the intake air with oxygen (O2) and enrichment of the primary fuel with hydrogen (H2) was implemented. Based on the results obtained, the proposed geometry (T8_b8_d0.6/SW_8.0) outperformed the others in terms of flow quality, reduction of pollutant emissions (with a reduction of more than 90% in unburnt NH3), and an impressive improvement in engine efficiency of more than 11%.
Keywords: ammonia, hydrogen, combustion, dual-fuel engine, emissions
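The heat release rate analyzed above is conventionally obtained from cylinder pressure via a single-zone first-law analysis, dQ/dθ = γ/(γ-1)·p·dV/dθ + 1/(γ-1)·V·dp/dθ. The sketch below uses that standard formulation with purely illustrative values; the study's own rates come from the CONVERGE™ simulations, not this formula:

```python
def heat_release_rate(p, dp_dtheta, v, dv_dtheta, gamma=1.35):
    """Single-zone apparent heat release rate dQ/dtheta (J/deg) from
    cylinder pressure p (Pa), volume v (m3), and their derivatives with
    respect to crank angle. gamma is an assumed ratio of specific heats."""
    return (gamma / (gamma - 1)) * p * dv_dtheta \
         + (1 / (gamma - 1)) * v * dp_dtheta

# Illustrative in-cylinder state near top dead center
dq = heat_release_rate(p=6e6, dp_dtheta=2e5, v=1.2e-4, dv_dtheta=1e-6)
print(round(dq, 1))  # apparent heat release at this crank angle (J/deg)
```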
Procedia PDF Downloads 75
698 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process
Authors: Stehr Melanie
Abstract:
The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g., sales visits, trade shows) have been unavailable, and usage of digitally driven information channels (e.g., videoconferencing, platforms) has skyrocketed. This paper explores the question of the areas in which the pandemic-induced shift in the use of information channels could be sustainable and those in which it is a temporary phenomenon. While information and buying behavior in B2C purchases have been studied regularly over the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior, and uses the economics of information theory as a theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. By focusing on an industry with traditional communication structures, a shift in information behavior induced by an exogenous shock is considered a ripe research setting. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory grid method. Thus, 120 typical buying situations in the woodworking industry, and the information and channels relevant to them, are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process.
In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. In particular, the focus on the dynamic aspects of information behavior after an exogenous shock may contribute new impulses to theoretical debates related to the economics of information theory. For practitioners, especially suppliers' marketing managers and intermediaries such as publishers or trade show organizers in the woodworking industry, the study shows wide-ranging starting points for a future-oriented segmentation of their marketing programs by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of information channels.
Keywords: B2B buying process, crisis, economics of information theory, information channel
Procedia PDF Downloads 184
697 Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions, and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on hue, saturation, and value is performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras: an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances, and were sampled manually at various periods throughout the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras, and thresholding types were performed. Precision, recall, and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon resulted in the best precision and recall results.
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best precision and recall, and the best F1 score. The average precision and recall for all the images when using these values were 74% and 75%, respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
Keywords: agricultural engineering, image processing, computer vision, flower detection
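The two computations at the heart of the evaluation above can be sketched briefly. This is an illustrative sketch, not the authors' code: a hue-band mask over an HSV image whose channels are assumed scaled to [0, 1], and the precision/recall/F1 metrics; the tp/fp/fn counts in the test usage are hypothetical.

```python
import numpy as np

# Hue-band segmentation step: keep pixels whose hue falls in the
# 0.12-0.18 band the abstract reports as best for yellow flowers.
# Assumes an HSV image with all channels scaled to [0, 1].
def hue_mask(hsv, lo=0.12, hi=0.18):
    """Boolean mask of pixels whose hue falls in the flower hue band."""
    return (hsv[..., 0] >= lo) & (hsv[..., 0] <= hi)

# Precision, recall, and their derived F1 score from detection counts,
# as used in the sensitivity analyses.
def detection_metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

A mask like this is cheap enough for the drone's simple processor, since it is a single vectorized comparison per pixel.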
Procedia PDF Downloads 330
696 Evaluation of Natural Frequency of Single and Grouped Helical Piles
Authors: Maryam Shahbazi, Amy B. Cerato
Abstract:
The importance of a system’s natural frequency (fn) emerges when the frequency of the vibration force matches the foundation's fn, causing resonance (amplified response) that may inflict irreversible damage on the structure. Several factors such as pile geometry (e.g., length and diameter), soil density, load magnitude, pile condition, and physical structure affect the fn of a soil-pile system; some of these parameters are evaluated in this study. Although experimental and analytical studies have assessed the fn of a soil-pile system, few have included individual and grouped helical piles. Thus, the current study aims to provide quantitative data on the dynamic characteristics of helical pile-soil systems from full-scale shake table tests that will allow engineers to predict a more realistic dynamic response under motions with variable frequency ranges. To evaluate the fn of single and grouped helical piles in dry dense sand, full-scale shake table tests were conducted in a laminar box (6.7 m x 3.0 m, 4.6 m high). Helical piles of two different diameters (8.8 cm and 14 cm) were embedded in the soil box with corresponding lengths of 3.66 m (except one pile with a length of 3.96 m) and 4.27 m. Different configurations were implemented to evaluate conditions such as fixed and pinned connections. In the group configuration, all four piles with similar geometry were tied together. Simulated real earthquake motions, in addition to white noise, were applied to evaluate a wide range of soil-pile system behavior. The Fast Fourier Transform (FFT) of time history responses measured by installed strain gages and accelerometers was used to evaluate fn. Time-history records from either accelerometers or strain gages were found to be acceptable for calculating fn. In this study, the existence of a pile reduced the fn of the soil slightly. Greater fn occurred for single piles with larger l/d ratios (higher slenderness ratio).
Also, regardless of the connection type, the more slender pile group, which is surrounded by more soil, yielded higher natural frequencies under white noise, which may be due to it exhibiting more passive soil resistance. Within both pile groups, a pinned connection led to a lower fn than a fixed connection (e.g., for the same pile group the fn values are 5.23 Hz and 4.65 Hz for fixed and pinned connections, respectively). Generally speaking, a stronger motion causes nonlinear behavior and degrades stiffness, which reduces a pile’s fn; even greater reduction occurs in soil with a lower density. Moreover, the fn of dense sand under a white noise signal was 5.03 Hz, which was reduced by 44% when an earthquake with an acceleration of 0.5 g was applied. By knowing the factors affecting fn, the designer can effectively match the properties of the soil to a type of pile and structure to attempt to avoid resonance. The quantitative results in this study assist engineers in predicting a probable range of fn for helical pile foundations under potential future earthquake and machine loading forces.
Keywords: helical pile, natural frequency, pile group, shake table, stiffness
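The FFT peak-picking step described above can be sketched on synthetic data. This is an illustration only, not the study's records: a damped free-vibration signal is generated at 5.03 Hz (the dense-sand value reported under white noise), and the spectral peak is read off as the estimated fn. The sampling rate and damping ratio are assumed.

```python
import numpy as np

# Synthetic damped free-vibration acceleration record (assumed parameters).
fs = 200.0                      # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)    # 10 s record -> 0.1 Hz frequency resolution
fn_true = 5.03                  # natural frequency to recover, Hz
accel = np.exp(-0.2 * t) * np.sin(2 * np.pi * fn_true * t)

# FFT of the time history; fn is taken as the frequency of the peak,
# the same idea applied to the measured accelerometer/strain-gage records.
spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(len(accel), d=1 / fs)
fn_est = freqs[np.argmax(spectrum)]   # estimated natural frequency, Hz
```

A longer record gives finer frequency resolution (here 0.1 Hz), which bounds how precisely the peak can be located.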
Procedia PDF Downloads 133
695 Ionophore-Based Materials for Selective Optical Sensing of Iron(III)
Authors: Natalia Lukasik, Ewa Wagner-Wysiecka
Abstract:
Development of selective, fast-responding, and economical sensors for the detection and determination of diverse ions is one of the most extensively studied areas due to its importance in the fields of clinical, environmental and industrial analysis. Among chemical sensors, ionophore-based optical sensors have gained vast popularity; in these, the generated analytical signal is a consequence of the molecular recognition of the ion by the ionophore. The change of color occurring during host-guest interactions allows for quantitative analysis and for 'naked-eye' detection without the need for sophisticated equipment. An example of the application of such sensors is colorimetric detection of iron(III) cations. Iron, as one of the most significant trace elements, plays roles in many biochemical processes. For these reasons, the development of reliable, fast, and selective methods of iron ion determination is in high demand. Taking all of the above into account, a chromogenic amide derivative of 3,4-dihydroxybenzoic acid was synthesized, and its ability to recognize iron(III) was tested. To the best of the authors' knowledge (according to chemical abstracts), the obtained ligand has not been described in the literature so far. The catechol moiety was introduced into the ligand structure in order to mimic the action of naturally occurring siderophores, iron(III)-selective receptors. The ligand-ion interactions were studied using spectroscopic methods: UV-Vis spectrophotometry and infrared spectroscopy. The spectrophotometric measurements revealed that the amide exhibits affinity to iron(III) in dimethyl sulfoxide and fully aqueous solution, which is manifested by a change of color from yellow to green. Incorporation of the tested amide into a polymeric matrix (cellulose triacetate) ensured effective recognition of iron(III) at pH 3 with a detection limit of 1.58×10⁻⁵ M.
For the obtained sensor material, parameters such as linear response range, response time, selectivity, and possibility of regeneration were determined. In order to evaluate the effect of the size of the sensing material on iron(III) detection, nanospheres (in the form of a nanoemulsion) containing the tested amide were also prepared. According to DLS (dynamic light scattering) measurements, the size of the nanospheres is 308.02 ± 0.67 nm. Working parameters of the nanospheres were determined and compared with those of the cellulose triacetate-based material. Additionally, for fast, qualitative experiments, test strips were prepared by adsorption of the amide solution on a glass microfiber material. The visual limit of detection of iron(III) at pH 3 by the test strips was estimated at the 10⁻⁴ M level. In conclusion, the amide derived from 3,4-dihydroxybenzoic acid reported here proved to be an effective candidate for optical sensing of iron(III) in fully aqueous solutions. N. L. kindly acknowledges financial support from National Science Centre Poland, grant no. 2017/01/X/ST4/01680. The authors thank Gdansk University of Technology for financial support under grant no. 032406.
Keywords: ion-selective optode, iron(III) recognition, nanospheres, optical sensor
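A detection limit such as the 1.58×10⁻⁵ M quoted above is commonly derived from a spectrophotometric calibration curve. The sketch below illustrates one widespread criterion, 3·σ(blank)/slope; the calibration points and blank noise are synthetic, not the paper's data, and the exact criterion the authors applied is not stated in the abstract.

```python
import numpy as np

# Synthetic calibration: absorbance vs. iron(III) concentration (assumed).
conc = np.array([0.0, 1e-5, 2e-5, 4e-5, 8e-5])            # mol/L
absorbance = np.array([0.002, 0.052, 0.101, 0.199, 0.402])

# Linear least-squares fit of the calibration curve.
slope, intercept = np.polyfit(conc, absorbance, 1)

# Common limit-of-detection criterion: 3 x blank noise / calibration slope.
sigma_blank = 0.0025            # std. dev. of repeated blank readings (assumed)
lod = 3 * sigma_blank / slope   # limit of detection, mol/L
```

With these synthetic numbers the slope is about 5000 L/mol, giving an LOD on the order of 10⁻⁶ M; the real value depends entirely on the measured blank noise and sensitivity.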
Procedia PDF Downloads 154
694 Time Travel Testing: A Mechanism for Improving Renewal Experience
Authors: Aritra Majumdar
Abstract:
While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and also showcases how successful an organization is in holding on to its customers. It is well established that the lion’s share of profit typically comes from existing customers. Hence, seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step of setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key to successful time travel testing.
Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will discuss the steps necessary for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will discuss the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author's real-world experience with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings that will help other teams implement time travel testing successfully.
Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas
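The core mechanism, moving the system's notion of "today" backward or forward instead of waiting for a real renewal window, can be sketched in a few lines. The renewal rule and its 30-day offer window below are hypothetical, not taken from any specific system discussed above.

```python
from datetime import date, timedelta

# Hypothetical renewal rule: an offer is due in the 30 days before renewal.
# Accepting an injectable "today" is what makes the logic time-travelable.
def renewal_offer_due(renewal_date, today=None):
    """True when 'today' falls inside the 30-day pre-renewal offer window."""
    today = today or date.today()
    return timedelta(0) <= (renewal_date - today) <= timedelta(days=30)

renewal = date(2026, 6, 1)
# Time travel forward to 10 days before renewal: the offer should be due.
assert renewal_offer_due(renewal, today=renewal - timedelta(days=10))
# Time travel back to 90 days before renewal: not yet due.
assert not renewal_offer_due(renewal, today=renewal - timedelta(days=90))
```

In practice the simulated date would be pushed through every impacted system in the enterprise, not just one function, which is exactly why the planning and enterprise automation pillars matter.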
Procedia PDF Downloads 160
693 Waste Management Option for Bioplastics Alongside Conventional Plastics
Authors: Dan Akesson, Gauthaman Kuzhanthaivelu, Martin Bohlen, Sunil K. Ramamoorthy
Abstract:
Bioplastics can be defined as polymers derived partly or completely from biomass. Bioplastics can be biodegradable, such as polylactic acid (PLA) and polyhydroxyalkanoates (PHA), or non-biodegradable (biobased polyethylene (bio-PE), polypropylene (bio-PP), polyethylene terephthalate (bio-PET)). The usage of such bioplastics is expected to increase in the future due to newfound interest in sustainable materials. At the same time, these plastics become a new type of waste in the recycling stream. Most countries do not have separate collection of bioplastics for recycling or composting. After a brief introduction of bioplastics such as PLA in the UK, these plastics were once again replaced by conventional plastics in many establishments due to the lack of commercial composting. Recycling companies fear contamination of conventional plastics in the recycling stream, stating that they would have to invest in expensive new equipment to separate bioplastics and recycle them separately. This project studies what happens when bioplastics contaminate conventional plastics. Three commonly used conventional plastics were selected for this study: polyethylene (PE), polypropylene (PP) and polyethylene terephthalate (PET). In order to simulate contamination, two biopolymers, either polyhydroxyalkanoate (PHA) or thermoplastic starch (TPS), were blended with the conventional polymers. The amount of bioplastic in the conventional plastics was either 1% or 5%. The blended plastics were processed again to see the effect of degradation. The results from contamination showed that the tensile strength and the modulus of PE were almost unaffected, whereas the elongation was clearly reduced, indicating an increase in the brittleness of the plastic. Generally, it can be said that PP is slightly more sensitive to the contamination than PE. This can be explained by the fact that the melting point of PP is higher than that of PE and, as a consequence, the biopolymer will degrade more quickly.
However, the reduction of the tensile properties for PP is relatively modest. Impact strength is generally a more sensitive test method towards contamination. Again, PE is relatively unaffected by the contamination, but for PP there is a relatively large reduction of the impact properties already at 1% contamination. PET is a polyester and is, by its very nature, more sensitive to degradation than PE and PP. PET also has a much higher melting point than PE and PP, and as a consequence, the biopolymer will quickly degrade at the processing temperature of PET. As for the tensile strength, PET can tolerate 1% contamination without any reduction of the tensile strength. However, when the impact strength is examined, it is clear that already at 1% contamination there is a strong reduction of the properties. The thermal properties show the change in crystallinity. The blends were also characterized by SEM. A biphasic morphology can be seen, as the two polymers are not truly miscible, which also contributes to the reduced mechanical properties. The study shows that PE is relatively robust against contamination, while polypropylene (PP) is sensitive and polyethylene terephthalate (PET) can be quite sensitive towards contamination.
Keywords: bioplastics, contamination, recycling, waste management
Procedia PDF Downloads 227
692 Comparative Study of Active Release Technique and Myofascial Release Technique in Patients with Upper Trapezius Spasm
Authors: Harihara Prakash Ramanathan, Daksha Mishra, Ankita Dhaduk
Abstract:
Relevance: This study will educate the clinician in putting into practice an advanced method of movement science in restoring function. Purpose: The purpose of this study is to compare the effectiveness of the Active Release Technique and the myofascial release technique on range of motion, neck function and pain in patients with upper trapezius spasm. Methods/Analysis: The study was approved by the institutional Human Research and Ethics Committee. This study included sixty patients aged between 20 and 55 years with upper trapezius spasm. Patients were randomly divided into two groups receiving the Active Release Technique (Group A) and the Myofascial Release Technique (Group B). The patients were treated for 1 week, and three outcome measures, ROM, pain and functional level, were measured using a goniometer, the visual analog scale (VAS), and the Neck Disability Index (NDI) questionnaire, respectively. A paired-sample t-test was used to compare the differences between the pre- and post-intervention values of cervical range of motion, NDI, and VAS for Group A and Group B. An independent t-test was used to compare the differences between the two groups in terms of improvement in cervical range of motion, decrease in VAS, and decrease in NDI score. Results: Both groups showed statistically significant improvements in cervical ROM and reductions in pain and NDI scores. However, the mean changes in cervical flexion, cervical extension, right side flexion, left side flexion, right side rotation, left side rotation, pain, and neck disability level showed statistically significant improvement (p < 0.05) in the patients who received the Active Release Technique as compared to the myofascial release technique.
Discussion and conclusions: In the present study, the average improvement immediately post-intervention was significantly greater as compared to before treatment, but there was even more improvement after seven sessions as compared to a single session. Hence, several sessions of manual techniques are necessary to produce clinically relevant results. The Active Release Technique helps to reduce pain by removing adhesions and promoting normal tissue extensibility. The act of tensioning and compressing the affected tissue, both with digital contact and through the active movement performed by the patient, is a plausible mechanism for tissue healing in this study. This study concluded that both the Active Release Technique (ART) and the myofascial release technique (MFR) are effective in managing upper trapezius muscle spasm, but more improvement can be achieved with ART. Impact and Implications: The Active Release Technique can be adopted as a mainstay treatment approach for trapezius spasm for faster relief and improved functional status.
Keywords: trapezius spasm, myofascial release, active release technique, pain
Procedia PDF Downloads 274
691 The Effect of Mindfulness-Based Interventions for Individuals with Tourette Syndrome: A Scoping Review
Authors: Ilana Singer, Anastasia Lučić, Julie Leclerc
Abstract:
Introduction: Tics, characterized by repetitive, sudden, non-voluntary motor movements or vocalizations, are prevalent in chronic tic disorder (CT) and Tourette Syndrome (TS). These neurodevelopmental disorders often coexist with various psychiatric conditions, leading to challenges and reduced quality of life. While medication in conjunction with behavioral interventions, such as Habit Reversal Training (HRT), Exposure and Response Prevention (ERP), and the Comprehensive Behavioral Intervention for Tics (CBIT), has shown efficacy, a significant proportion of patients experience persistent tics. Thus, innovative treatment approaches, such as mindfulness-based approaches, are necessary to improve therapeutic outcomes. Nonetheless, the effectiveness of mindfulness-based interventions in the context of CT and TS remains understudied. Objective: The objective of this scoping review is to provide an overview of the current state of research on mindfulness-based interventions for CT and TS, identify knowledge and evidence gaps, compare the effectiveness of mindfulness-based interventions with other treatment options, and discuss implications for clinical practice and policy development. Method: Following the guidelines of Peters (2020) and the PRISMA-ScR, a scoping review was conducted. Multiple electronic databases were searched from inception until June 2023, including MEDLINE, EMBASE, PsycINFO, Global Health, PubMed, Web of Science, and Érudit. Inclusion criteria were applied to select relevant studies, and data extraction was independently performed by two reviewers. Results: Five papers were included in the study. Firstly, mindfulness interventions were found to be effective in reducing anxiety and depression while enhancing overall well-being in individuals with tics. Furthermore, the review highlighted the potential role of mindfulness in enhancing functional connectivity within the Default Mode Network (DMN) as a compensatory function in TS patients.
This suggests that mindfulness interventions may complement and support traditional therapeutic approaches, particularly HRT, by positively influencing brain networks associated with tic regulation and control. Conclusion: This scoping review contributes to the understanding of the effectiveness of mindfulness-based interventions in managing CT and TS. By identifying research gaps, this review can guide future investigations and interventions to improve outcomes for individuals with CT or TS. Overall, these findings emphasize the potential benefits of incorporating mindfulness-based interventions as a component of comprehensive treatment strategies. However, it is essential to acknowledge the limitations of this scoping review, such as the lack of a pre-established protocol and the limited number of studies available for inclusion. Further research and clinical exploration are necessary to better understand the specific mechanisms and optimal integration of mindfulness-based interventions with existing behavioral interventions for this population.
Keywords: scoping reviews, Tourette Syndrome, tics, mindfulness-based, therapy, intervention
Procedia PDF Downloads 84
690 Nurture Early for Optimal Nutrition: A Community-Based Randomized Controlled Trial to Improve Infant Feeding and Care Practices Using Participatory Learning and Actions Approach
Authors: Priyanka Patil, Logan Manikam
Abstract:
Background: The first 1000 days of life are a critical window, and inadequate nutrition during this period can result in adverse health consequences. South Asian (SA) communities face significant health disparities, particularly in maternal and child health. Community-based interventions, often employing Participatory Learning and Action (PLA) approaches, have effectively addressed health inequalities in lower-income nations. The aim of this study was to assess the feasibility of implementing a PLA intervention to improve infant feeding and care practices in SA communities living in London. Methods: Comprehensive analyses were conducted to assess the feasibility and fidelity of this pilot randomized controlled trial. Summary statistics were computed to compare key metrics, including participant consent rates, attendance, retention, intervention support, and perceived effectiveness, against predefined progression rules guiding toward a definitive trial. Secondary outcomes were analyzed, drawing insights from multiple sources, such as the Children’s Eating Behaviour Questionnaire (CEBQ), the Parental Feeding Style Questionnaire (PFSQ), food diaries, and the Equality Impact Assessment (EIA) tool. A video analysis of trends in children's mealtime behavior was conducted. Feedback interviews were collected from study participants. Results: Process-outcome measures met the predefined progression rules for a definitive trial, deeming the intervention feasible and acceptable. The secondary outcomes analysis revealed no significant changes in children's BMI z-scores. This could be attributed to the abbreviated follow-up period of 6 months, reduced from 12 months due to COVID-19-related delays. CEBQ analysis showed increased food responsiveness, along with decreased emotional over- and undereating. A similar trend was observed for the PFSQ. The EIA tool found no potential areas of discrimination, and video analysis revealed a decrease in force-feeding practices.
Participant feedback revealed improved awareness and knowledge sharing. Conclusion: This study demonstrates that a co-adapted PLA intervention is feasible and well received for optimizing infant care practices among South Asian community members in a high-income country. These findings highlight the potential of community-based interventions to enhance health outcomes, promoting health equity.
Keywords: child health, childhood obesity, community-based, infant nutrition
Procedia PDF Downloads 57
689 Progress Towards Optimizing and Standardizing Fiducial Placement Geometry in Prostate, Renal, and Pancreatic Cancer
Authors: Shiva Naidoo, Kristena Yossef, Grimm Jimm, Mirza Wasique, Eric Kemmerer, Joshua Obuch, Anand Mahadevan
Abstract:
Background: Fiducial markers effectively enhance tumor target visibility prior to Stereotactic Body Radiation Therapy or proton therapy. To streamline clinical practice, fiducial placement guidelines from a robotic radiosurgery vendor were examined with the goals of optimizing and standardizing feasible geometries for each treatment indication. Clinical examples of prostate, renal, and pancreatic cases are presented. Methods: Vendor guidelines (Accuray, Sunnyvale, CA) suggest implantation of 4-6 fiducials at least 20 mm apart, with at least a 15-degree angular difference between fiducials, within 50 mm or less of the target centroid, to ensure that any potential fiducial motion (e.g., from respiration or abdominal/pelvic pressures) will mimic target motion. It is also recommended that all fiducials be visible in 45-degree oblique views with no overlap, to coincide with the robotic radiosurgery imaging planes. For the prostate, a standardized geometry that meets all these objectives is a 2 cm-by-2 cm square in the coronal plane. The transperineal implant of two pairs of preloaded tandem fiducials makes the 2 cm-by-2 cm square geometry clinically feasible. This technique may also be applied to renal cancer, with the square repositioned in a sagittal plane and the fiducials placed retroperitoneally into the tumor. Pancreatic fiducial placement via endoscopic ultrasound (EUS) is technically more challenging, as fiducial placement is operator-dependent, and lesion access may be limited by adjacent vasculature, tumor location, or restricted mobility of the EUS probe in the duodenum. Fluoroscopically assisted fiducial placement during EUS can help ensure fiducial markers are deployed with optimal geometry and visualization. Results: Among the first 22 fiducial cases on a newly installed robotic radiosurgery system, live x-ray images for all nine prostate cases showed excellent fiducial visualization at the treatment console.
Renal and pancreatic fiducials were not as clearly visible due to difficult target access and the use of smaller-caliber insertion needles and fiducials. The geometry of the first prostate case was used to ensure accurate geometric marker placement for the remaining 8 cases. Initially, some of the renal and pancreatic fiducials were closer together than the 20 mm recommendation, and interactive feedback with the proceduralists led to subsequent fiducials being placed too far toward the edge of the tumor. Further feedback and discussion of all cases are being used to help guide standardized geometries and achieve ideal fiducial placement. Conclusion: The tradeoff between fiducial visibility and the thinnest possible needle gauge to avoid complications needs to be systematically optimized across patients, particularly with regard to body habitus. Multidisciplinary collaboration among proceduralists and radiation oncologists can lead to improved outcomes.
Keywords: fiducial, prostate cancer, renal cancer, pancreatic cancer, radiotherapy
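The vendor placement criteria quoted above lend themselves to a simple programmatic check. The sketch below is an illustration of the geometry only, not clinical software: pairwise separation of at least 20 mm, distance from the target centroid of at most 50 mm, and at least 15 degrees between fiducial directions as seen from the centroid.

```python
import itertools
import math
import numpy as np

def fiducials_ok(points_mm, min_sep=20.0, max_centroid_dist=50.0,
                 min_angle_deg=15.0):
    """Check a candidate fiducial set (mm coordinates) against the criteria."""
    pts = np.asarray(points_mm, dtype=float)
    centroid = pts.mean(axis=0)
    # Every pair of fiducials must be at least min_sep apart.
    if any(np.linalg.norm(a - b) < min_sep
           for a, b in itertools.combinations(pts, 2)):
        return False
    # Every fiducial must lie within max_centroid_dist of the centroid.
    if any(np.linalg.norm(p - centroid) > max_centroid_dist for p in pts):
        return False
    # Directions from the centroid must differ by at least min_angle_deg.
    vecs = pts - centroid
    for a, b in itertools.combinations(vecs, 2):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if math.degrees(math.acos(np.clip(cos, -1.0, 1.0))) < min_angle_deg:
            return False
    return True

# The standardized prostate geometry: a 2 cm-by-2 cm square (coronal plane).
square = [(0, 0, 0), (20, 0, 0), (20, 20, 0), (0, 20, 0)]
```

The square passes all three criteria, which is why it works as a standardized geometry; a collinear arrangement, by contrast, fails the angular-separation rule even when the spacings are generous.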
Procedia PDF Downloads 93
688 Interdependence of Vocational Skills and Employability Skills: Example of an Industrial Training Centre in Central India
Authors: Mahesh Vishwakarma, Sadhana Vishwakarma
Abstract:
Vocational education includes all kinds of education that help students acquire skills related to a certain profession, art, or activity so that they are able to exercise that profession, art, or activity after acquiring the qualification. However, in the global economy of the modern world, job seekers are expected to have certain soft skills over and above the technical knowledge and skills acquired in their areas of expertise. These soft skills include, but are not limited to, interpersonal communication, understanding, personal attributes, problem-solving, teamwork, and quick adaptability to the workplace environment. Employers now seek not only hands-on, job-related skills and competencies but also a complex of attitudinal dispositions and affective traits in their prospective employees. This study was performed to identify the employability skills of technical students from an Industrial Training Centre (ITC) in central India. It also aimed to convey to currently enrolled students that, to remain relevant in the job market, they need to constantly adapt to changes and evolving requirements in the work environment, including the use of updated technologies. Five hypotheses were formulated and tested on the employability skills of students as a function of gender, trade, work experience, personal attributes, and IT skills. Data were gathered with the help of the center's training officers, who approached 200 recently graduated students from the center and administered the instrument to them. All 200 respondents returned the completed instrument. The instrument used for the study consisted of two sections: demographic details and employability skills. To measure the employability skills of the trainees, the instrument was developed by referring to several instruments developed by past researchers for similar studies.
The first section of the instrument, on demographic details, recorded the age, gender, trade, year of passing, interviews faced, and employment status of the respondents. The second section, on employability skills, was categorized into seven specific skills: basic vocational skills; personal attributes; imagination skills; optimal management of resources; information-technology skills; interpersonal skills; and adapting to new technologies. The reliability and validity of the instrument were checked. The findings revealed valuable information on the relationship and interdependence of vocational education and employability skills of students in the central Indian scenario. They also point to supplementing the existing vocational education programs with soft skills and competencies so as to develop a superior workforce better equipped to face the job market. The findings of the study can be used as an example by the management of government and private industrial training centers operating in other parts of the Asian region. Future research can be undertaken on a greater population base from different geographical regions and backgrounds for an enhanced outcome.
Keywords: employability skills, vocational education, industrial training centers, students
Procedia PDF Downloads 133
687 A Study of the Effect of the Flipped Classroom on Mixed Abilities Classes in Compulsory Secondary Education in Italy
Authors: Giacoma Pace
Abstract:
The research seeks to evaluate whether students with impairments can achieve enhanced academic progress by actively engaging in collaborative problem-solving activities with teachers and peers, overcoming the obstacles rooted in socio-economic disparities. Furthermore, the research underscores the significance of fostering students' self-awareness regarding their learning process and encourages teachers to adopt a more interactive teaching approach. The research also posits that reducing conventional face-to-face lessons can motivate students to explore alternative learning methods, such as collaborative teamwork and peer education within the classroom. To address socio-cultural barriers, it is imperative to assess students' internet access and possession of technological devices, as these factors can contribute to a digital divide. The research features a case study of a flipped classroom learning unit administered to six third-year high school classes, from a Scientific Lyceum, a Technical School, and a Vocational School, within the city of Turin, Italy. Data cover the teachers and students involved in the case study, including impaired students in each class: entry level, students' performance and attitude before using the flipped classroom, level of motivation, the family's level of involvement, teachers' attitude towards the flipped classroom, goals attained, the pros and cons of such activities, and technology availability. The selected schools were contacted, and meetings were held with the English teachers to gather information about their attitude toward and knowledge of the flipped classroom approach. Questionnaires were administered to teachers and IT staff. The information gathered was used to outline the profile of the subjects involved in the study and was further compared with the second step of the study, conducted with the classes of the selected schools. The learning unit is the same for all classes; its structure and content were decided together with the English teachers of the classes involved.
Pacing and content are matched in every lesson: all the classes participate in the same labs and use the same materials, homework, and assessment by summative and formative testing. Each step follows a precise scheme in order to be as reliable as possible. The outcomes of the case study will be statistically analysed. The case study is accompanied by a review of the literature concerning EFL approaches and the Flipped Classroom. The document analysis method was employed, i.e., a qualitative research method in which printed and/or electronic documents containing information about the research subject are reviewed and evaluated with a systematic procedure. Articles in the Web of Science Core Collection, Education Resources Information Center (ERIC), Scopus, and Science Direct databases were searched in order to determine the documents to be examined (years considered: 2000-2022).
Keywords: flipped classroom, impaired, inclusivity, peer instruction
Procedia PDF Downloads 536
686 Inverterless Grid Compatible Micro Turbine Generator
Authors: S. Ozeri, D. Shmilovitz
Abstract:
Micro-Turbine Generators (MTGs) are small-size power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill or digester gas. Typical ratings of MTGs range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for Distributed Generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must additionally contain a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the MTG's full power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers the overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e., the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and the difference components.
The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high-frequency component can be filtered out by a low-pass filter, leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional power stage can be used. The proposed technology is also applicable to other high-rotation generator sets such as aircraft power units.
Keywords: gas turbine, inverter, power multiplier, distributed generation
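The frequency-mixing principle described above can be sketched numerically: multiplying two sinusoids whose frequencies differ by the line frequency produces sum and difference components, and a low-pass filter keeps only the line-frequency term. The frequencies, sampling rate, and filter below are illustrative assumptions, not the actual MTG parameters.

```python
import math

F_ELEC = 1000.0   # hypothetical stator electrical frequency (Hz)
F_LINE = 50.0     # target grid frequency (Hz)
FS = 20_000.0     # sampling rate (Hz)
N = int(FS)       # one second of samples

# The rotor current is programmed to track the electrical frequency with a
# difference exactly equal to the line frequency.
f_rotor = F_ELEC - F_LINE

# Multiplying the two phasors yields sum and difference components:
# cos(a)*cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b)
signal = [math.cos(2 * math.pi * F_ELEC * n / FS) *
          math.cos(2 * math.pi * f_rotor * n / FS) for n in range(N)]

def lowpass(x, fc, fs, stages=3):
    """Cascaded first-order low-pass filter with cutoff fc."""
    dt = 1.0 / fs
    rc = 1.0 / (2 * math.pi * fc)
    alpha = dt / (rc + dt)
    for _ in range(stages):
        acc, out = 0.0, []
        for v in x:
            acc += alpha * (v - acc)
            out.append(acc)
        x = out
    return x

def amplitude(x, f, fs):
    """Correlation-based amplitude estimate at frequency f."""
    c = sum(v * math.cos(2 * math.pi * f * n / fs) for n, v in enumerate(x))
    s = sum(v * math.sin(2 * math.pi * f * n / fs) for n, v in enumerate(x))
    return 2 * math.hypot(c, s) / len(x)

filtered = lowpass(signal, 100.0, FS)
print(amplitude(filtered, F_LINE, FS))            # difference term survives
print(amplitude(filtered, F_ELEC + f_rotor, FS))  # sum term is filtered out
```

With these assumed values, the 50 Hz difference component passes nearly intact while the 1950 Hz sum component is attenuated by orders of magnitude, which is the behavior the fractional power stage relies on.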
Procedia PDF Downloads 240
685 Non-Steroidal Microtubule Disrupting Analogues Induce Programmed Cell Death in Breast and Lung Cancer Cell Lines
Authors: Marcel Verwey, Anna M. Joubert, Elsie M. Nolte, Wolfgang Dohle, Barry V. L. Potter, Anne E. Theron
Abstract:
A tetrahydroisoquinolinone (THIQ) core can be used to mimic the A,B-ring of colchicine site-binding microtubule disruptors such as 2-methoxyestradiol in the design of anti-cancer agents. Steroidomimetic microtubule disruptors were synthesized by introducing C'2 and C'3 of the steroidal A-ring to C'6 and C'7 of the THIQ core and by introducing a decorated hydrogen bond acceptor motif projecting from the steroidal D-ring to N'2. For this in vitro study, four non-steroidal THIQ-based analogues were investigated, and comparative studies were done between the non-sulphamoylated compound STX 3450 and the sulphamoylated compounds STX 2895, STX 3329 and STX 3451. The objective of this study was to investigate the modes of cell death induced by these four THIQ-based analogues in A549 lung carcinoma epithelial cells and metastatic breast adenocarcinoma MDA-MB-231 cells. Cytotoxicity studies to determine the half-maximal growth inhibitory concentrations were done using spectrophotometric quantification via crystal violet staining and lactate dehydrogenase (LDH) assays. Microtubule integrity and morphologic changes of exposed cells were investigated using polarization-optical transmitted light differential interference contrast microscopy, transmission electron microscopy and confocal microscopy. Flow cytometric quantification was used to determine apoptosis induction and the effect that THIQ-based analogues have on cell cycle progression. Signal transduction pathways were elucidated by quantification of mitochondrial membrane integrity, cytochrome c release and caspase 3, -6 and -8 activation. Induction of autophagic cell death by the THIQ-based analogues was investigated by morphological assessment of fluorescent monodansylcadaverine (MDC) staining of acidic vacuoles and by quantifying aggresome formation via flow cytometry. Results revealed that these non-steroidal microtubule disrupting analogues inhibited 50% of cell growth at nanomolar concentrations.
Immunofluorescence microscopy indicated microtubule depolymerization, and the resultant mitotic arrest was further confirmed through cell cycle analysis. Apoptosis induction via the intrinsic pathway was observed due to depolarization of the mitochondrial membrane, induction of cytochrome c release, as well as caspase 3 activation. Potential involvement of programmed cell death type II was observed due to the presence of acidic vacuoles and aggresome formation. Necrotic cell death did not contribute significantly, as indicated by stable LDH levels. This in vitro study revealed the induction of the intrinsic apoptotic pathway as well as possible involvement of autophagy after exposure to these THIQ-based analogues in both MDA-MB-231 and A549 cells. Further investigation of this series of anticancer drugs still needs to be conducted to elucidate the temporal, mechanistic and functional crosstalk mechanisms between the two observed programmed cell death pathways.
Keywords: apoptosis, autophagy, cancer, microtubule disruptor
Procedia PDF Downloads 253
684 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, a selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four material layers (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods.
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
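As a minimal sketch of the genetic-algorithm optimization step, the code below evolves the four layer thicknesses of a hypothetical SiC/W/SiO2/W stack. The fitness function is a toy stand-in: in the actual workflow it would be the spectral selectivity predicted by an electromagnetic solver or by the trained random-forest surrogate, and the target thicknesses, bounds, and GA settings here are illustrative assumptions.

```python
import random

random.seed(42)

# Hypothetical target thicknesses (nm) for the four SiC/W/SiO2/W layers.
TARGET = [60.0, 10.0, 90.0, 200.0]
BOUNDS = (5.0, 300.0)

def fitness(layers):
    # Toy stand-in for spectral selectivity: closer to TARGET is better.
    return -sum((a - b) ** 2 for a, b in zip(layers, TARGET))

def genetic_search(pop_size=40, generations=100, sigma=8.0):
    pop = [[random.uniform(*BOUNDS) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]              # selection: keep best quarter
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 4)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(4):                    # per-gene Gaussian mutation
                if random.random() < 0.5:
                    child[i] = min(max(child[i] + random.gauss(0, sigma),
                                       BOUNDS[0]), BOUNDS[1])
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_search()
print([round(t, 1) for t in best])  # thicknesses near the assumed target
```

Because the elite individuals are carried over unchanged, the best fitness never decreases between generations; swapping the toy fitness for a surrogate-model prediction is the only change needed to reproduce the hybrid RFA/GA loop described above.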
Procedia PDF Downloads 62
683 Network Impact of a Social Innovation Initiative in Rural Areas of Southern Italy
Authors: A. M. Andriano, M. Lombardi, A. Lopolito, M. Prosperi, A. Stasi, E. Iannuzzi
Abstract:
In accordance with the scientific debate on the definition of Social Innovation (SI), the present paper identifies SI as new ideas (products, services, and models) that simultaneously meet social needs and create new social relationships or collaborations. This concept offers important tools to unravel the difficult conditions of the agricultural sector in marginalized areas, characterized by the abandonment of activities, low levels of farmer education, and low generational renewal, hampering new territorial strategies aimed at an integrated and sustainable development. Models of SI in agriculture, starting from a bottom-up approach or from the community, are considered to represent the driving force of an ecological and digital revolution. A system based on SI may be able to grasp and satisfy individual and social needs and to promote new forms of entrepreneurship. In this context, Vazapp ('Go Hoeing') is an emerging SI model in southern Italy that promotes solutions for satisfying farmers' needs and facilitates their relationships (network creation). The Vazapp initiative considered in this study is the Contadinners ('farmers' dinners'), a dinner held at a farmer's house where stakeholders living in the surrounding area get to know each other and are able to build a network for possible future professional collaborations. The aim of the paper is to identify the evolution of farmers' relationships, both quantitatively and qualitatively, as a result of the Contadinners initiative organized by Vazapp. To this end, the study adopts the Social Network Analysis (SNA) methodology, using UCINET (version 6.667) software to analyze the relational structure. Data were collected through a questionnaire distributed to 387 participants in the twenty Contadinners held from February 2016 to June 2018. The response rate to the survey was about 50% of the farmers.
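The relational-structure measures computed in UCINET can be illustrated on a toy directed network, with hypothetical data standing in for the questionnaire responses: density (cohesion) as the share of observed ties out of all possible ties, and reciprocity as the share of ties named by both sides.

```python
# Toy directed network: each farmer names the farmers met at a
# Contadinner (hypothetical data, not the study's survey results).
ties = {
    "A": {"B", "C"},
    "B": {"A"},
    "C": {"A", "D"},
    "D": set(),
}

n = len(ties)

# Cohesion (density): observed arcs over the n*(n-1) possible arcs.
arcs = sum(len(named) for named in ties.values())
density = arcs / (n * (n - 1))

# Symmetrization by reciprocity: keep an arc only if both sides name it.
reciprocal = {(i, j) for i in ties for j in ties[i] if i in ties[j]}
reciprocity = len(reciprocal) / arcs if arcs else 0.0

print(round(density, 2))      # 0.42
print(round(reciprocity, 2))  # 0.8
```

Tracking these two numbers before and after a series of dinners is one simple way to express the quantitative evolution of the network that the study measures.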
Data elaboration focused on different aspects: a) the measurement of relational reciprocity among the farmers, using the symmetrization of answers; b) the measurement of answer reliability, using the dichotomization of answers; c) the description of the evolution of social capital, using cohesion measures; d) the clustering of the Contadinners' participants into followers and non-followers of Vazapp, to evaluate its impact on the local social capital. The results concern the effectiveness of this initiative in generating trustworthy relationships within a rural area of southern Italy typically affected by individualism and mistrust. The number of relationships represents the quantitative indicator defining the dimension of network development, while the typologies of relationships (from simple friendship to formal collaborations for brand-new cooperation initiatives) represent the qualitative indicator offering a diversified perspective on the network impact. From the analysis carried out, the Vazapp initiative certainly represents a virtuous SI model to catalyze relationships within rural areas and to develop entrepreneurship based on the real needs of the community.
Procedia PDF Downloads 112
682 Lactic Acid Solution and Aromatic Vinegar Nebulization to Improve Hunted Wild Boar Carcass Hygiene at Game-Handling Establishment: Preliminary Results
Authors: Rossana Roila, Raffaella Branciari, Lorenzo Cardinali, David Ranucci
Abstract:
The wild boar (Sus scrofa) population has increased strongly across Europe in the last decades, also causing severe fauna management issues. In central Italy, wild boar is the main hunted wild game species, with approximately 40,000 animals killed per year in the Umbria region alone. Game meat is characterized by high nutritional value as well as a peculiar taste and aroma, largely appreciated by consumers. This type of meat and products thereof can meet the current consumer demand for higher-quality foodstuff, not only from a nutritional and sensory point of view but also in relation to environmental sustainability, the non-use of chemicals, and animal welfare. The game meat production chain, however, has some gaps from a hygienic point of view: the harvest process is usually conducted in a wild environment where animals can be more easily contaminated during hunting and subsequent practices. The definition and implementation of a certified and controlled supply chain could ensure quality, traceability and safety for the final consumer and therefore promote game meat products. For some animal species, such as bovine, European legislation envisages the use of weak acid solutions for carcass decontamination in order to ensure the maintenance of optimal hygienic characteristics. A preliminary study was carried out to evaluate the applicability of similar strategies to control the hygienic level of wild boar carcasses. The carcasses, harvested according to the selective method and processed in the game-handling establishment, were treated by nebulization with two different solutions: a 2% food-grade lactic acid solution and an aromatic vinegar. Swab samples were taken from the carcass surfaces before treatment and at different times after treatment, and subsequently tested for Total Aerobic Mesophilic Load, Total Aerobic Psychrophilic Load, Enterobacteriaceae, Staphylococcus spp. and lactic acid bacteria.
The results obtained for the targeted microbial populations showed a positive effect of the application of the lactic acid solution on all the populations investigated, while the aromatic vinegar showed a lower effect on bacterial growth. This study could lay the foundations for the optimization of the use of a lactic acid solution to treat wild boar carcasses, aiming to guarantee a good hygienic level and the safety of the meat.
Keywords: game meat, food safety, process hygiene criteria, microbial population, microbial growth, food control
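Treatment effects on microbial populations such as those monitored here are conventionally expressed as decimal (log10) reductions; a minimal helper, evaluated on hypothetical before/after counts rather than the study's data, might look like:

```python
import math

def log_reduction(before_cfu, after_cfu):
    """Decimal (log10) reduction of a microbial count (e.g. CFU/cm2)."""
    return math.log10(before_cfu) - math.log10(after_cfu)

# Hypothetical counts for one population before and after nebulization
# with the lactic acid solution (illustrative numbers only).
print(round(log_reduction(5.0e5, 2.0e3), 2))  # 2.4
```

Expressing results on the log scale is what makes reductions comparable across populations whose initial loads differ by orders of magnitude.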
Procedia PDF Downloads 160
681 Investigation of Dry-Blanching and Freezing Methods of Fruits
Authors: Epameinondas Xanthakis, Erik Kaunisto, Alain Le-Bail, Lilia Ahrné
Abstract:
Fruits and vegetables are characterized as perishable food matrices due to their short shelf life, as several deterioration mechanisms are involved. Prior to common preservation methods like freezing or canning, fruits and vegetables are blanched in order to inactivate deteriorative enzymes. Both conventional blanching pretreatments and conventional freezing methods hide drawbacks behind their beneficial impacts on the preservation of these matrices. Conventional blanching methods may require long processing times and cause leaching of minerals and nutrients due to the contact with warm water, which in turn leads to effluent production with a large BOD. An important issue of freezing technologies is the size of the formed ice crystals, which is critical for the final quality of the frozen food, as large crystals can cause irreversible damage to the cellular structure and subsequently degrade the texture and colour of the product. Herein, the developed microwave blanching methodology and the results regarding quality aspects and enzyme inactivation will be presented. Moreover, the heat transfer phenomena, mass balance, temperature distribution, and enzyme inactivation (such as Pectin Methyl Esterase and Ascorbic Acid Oxidase) of our microwave blanching approach will be evaluated based on measurements and computer modelling. The present work is part of the COLDμWAVE project, which aims at the development of an innovative, environmentally sustainable process for blanching and freezing of fruits and vegetables with improved textural and nutritional quality. In this context, COLDµWAVE will develop tailored equipment for MW blanching of vegetables that has very high energy efficiency and no water consumption. Furthermore, the next steps of this project regarding the development of innovative pathways in MW-assisted freezing to improve the quality of frozen vegetables, exploring in depth previous results acquired by the authors, will be presented.
The application of the MW-assisted freezing process to fruits and vegetables is expected to lead to improved quality characteristics compared to conventional freezing. Acknowledgments: COLDμWAVE has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 660067.
Keywords: blanching, freezing, fruits, microwave blanching, microwave
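Enzyme inactivation during blanching, as evaluated in the abstract above, is commonly modelled as first-order thermal destruction characterized by a decimal reduction time (D-value). The sketch below uses a hypothetical D-value, not one measured in COLDμWAVE.

```python
def residual_activity(t_min, d_value_min):
    """First-order thermal inactivation: log10(A/A0) = -t/D."""
    return 10 ** (-t_min / d_value_min)

# Hypothetical D-value (decimal reduction time, minutes) for an enzyme
# such as pectin methyl esterase at a given blanching temperature.
D = 2.5
print(round(residual_activity(5.0, D), 4))  # 0.01 -> 99% activity destroyed
```

Under this model, holding for two D-values cuts activity a hundredfold, which is why blanching time at a given temperature is the key process variable to validate against measured enzyme assays.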
Procedia PDF Downloads 267
680 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers' acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, envisioning more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing and pattern of residuals.
Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to a good fat segmentation, making the visual approach for the quantification of the different fat fractions in dry-cured ham slices sufficiently simple, accurate and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. Therefore, the system will be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
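The area-percentage computation at the core of the method can be sketched on a toy grayscale image. The 4×4 pixel matrix and the fixed threshold of 200 are illustrative assumptions: the study derives its segmentation from intensity histograms and Canny edge maps rather than a hard-coded cutoff.

```python
# Toy 8-bit grayscale "slice": bright pixels stand for fat regions
# (hypothetical values, not scanner data from the study).
image = [
    [ 30,  40, 220, 230],
    [ 35, 210, 215,  50],
    [ 25,  45,  55, 240],
    [ 20,  30,  35,  40],
]

THRESHOLD = 200  # assumed intensity cutoff separating fat from muscle

total_pixels = sum(len(row) for row in image)
fat_pixels = sum(1 for row in image for px in row if px > THRESHOLD)

# Fat fraction as a percentage of the total slice area.
fat_fraction = 100.0 * fat_pixels / total_pixels
print(fat_fraction)  # 31.25
```

The same counting step, applied separately to the intermuscular and intramuscular regions delimited by the edge map, yields the three fractions that the linear regression models relate to the Soxhlet reference data.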
Procedia PDF Downloads 177
679 Conversational Assistive Technology of Visually Impaired Person for Social Interaction
Authors: Komal Ghafoor, Tauqir Ahmad, Murtaza Hanif, Hira Zaheer
Abstract:
Assistive technology has been developed to support visually impaired people in their social interactions. Conversation assistive technology is designed to enhance communication skills, facilitate social interaction, and improve the quality of life of visually impaired individuals. This technology includes speech recognition, text-to-speech features, and other communication devices that enable users to communicate with others in real time. The technology uses natural language processing and machine learning algorithms to analyze spoken language and provide appropriate responses. It also includes features such as voice commands and audio feedback to provide users with a more immersive experience. These technologies have been shown to increase the confidence and independence of visually impaired individuals in social situations and have the potential to improve their social skills and relationships with others. Overall, conversation-assistive technology is a promising tool for empowering visually impaired people and improving their social interactions. One of the key benefits of conversation-assistive technology is that it allows visually impaired individuals to overcome communication barriers that they may face in social situations. It can help them to communicate more effectively with friends, family, and colleagues, as well as strangers in public spaces. By providing a more seamless and natural way to communicate, this technology can help to reduce feelings of isolation and improve overall quality of life. The main objective of this research is to give blind users the capability to move around in unfamiliar environments through a user-friendly device with face, object, and activity recognition systems. This model evaluates the accuracy of activity recognition. The device captures the front view of the blind user, detects objects, recognizes activities, and answers the user's queries. It is implemented using the camera's front view.
A local dataset was collected that includes different first-person human activities. The results obtained are the identification of the activities that the VGG-16 model was trained on, such as hugging, shaking hands, talking, walking, and waving.
Keywords: dataset, visually impaired person, natural language processing, human activity recognition
Procedia PDF Downloads 60
678 Low Plastic Deformation Energy to Induce High Superficial Strain on AZ31 Magnesium Alloy Sheet
Authors: Emigdio Mendoza, Patricia Fernandez, Cristian Gomez
Abstract:
Magnesium alloys have generated great interest for several industrial applications because their high specific strength and low density make them a very attractive alternative for the manufacture of various components. However, these alloys present a limitation: their hexagonal crystal structure restricts the deformation mechanisms available at room temperature, and likewise the options for forming components. It is for this reason that severe plastic deformation processes have recently gained relevance, as they allow high deformation rates to be applied that induce microstructural changes in which the deficiency in slip systems is compensated by crystallographic grain reorientation or crystal twinning. The present study reports a statistical analysis of process temperature, number of passes, and shear angle with respect to the shear stress in the severe plastic deformation process denominated 'Equal Channel Angular Sheet Drawing' (ECASD), applied to the magnesium alloy AZ31B, through the Python Statsmodels libraries; additionally, a post-hoc range test is performed using the Tukey statistical test. Statistical results show that each variable has a p-value lower than 0.05, which allows comparing the average values of the shear stresses obtained. These are in the range of 7.37 MPa to 12.23 MPa, lower than the values reported in the literature for other severe plastic deformation processes, considering a value of 157.53 MPa as the average creep stress for the AZ31B alloy. However, a higher stress level is required when the sheets are processed using a shear angle of 150°, due to the higher level of adjustment applied for the 150° shear die. Temperature and shear passes are important variables as well, but they have no significant impact on the level of stress applied during the ECASD process.
In the processing of AZ31B magnesium alloy sheets, the ECASD technique is evidenced as a viable alternative for modifying the elasto-plastic properties of this alloy, promoting the weakening of the basal texture and, therefore, a better response to deformation, whereby the formation of cracks on the surface during the manufacture of parts by drawing or stamping processes can be reduced, while presenting an adequate mechanical performance.
Keywords: plastic deformation, strain, sheet drawing, magnesium
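The kind of test behind the reported p-values can be sketched as a pure-Python one-way ANOVA on hypothetical shear-stress data grouped by die angle; the study itself applies the Statsmodels libraries and Tukey's post-hoc test to the real ECASD measurements, so both the numbers and the grouping below are illustrative.

```python
# Hypothetical shear-stress measurements (MPa) grouped by die angle.
groups = {
    110: [7.4, 7.9, 8.1, 7.6],
    130: [9.0, 9.4, 8.8, 9.1],
    150: [12.0, 12.4, 11.9, 12.2],
}

def one_way_anova_f(groups):
    """F statistic: between-group variance over within-group variance."""
    data = [v for g in groups.values() for v in g]
    grand = sum(data) / len(data)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups.values())
    ss_within = sum((v - sum(g) / len(g)) ** 2
                    for g in groups.values() for v in g)
    df_between = len(groups) - 1
    df_within = len(data) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova_f(groups)
print(round(f_stat, 1))  # large F: group means differ far beyond the scatter
```

A large F corresponds to a small p-value, which is then followed by a pairwise post-hoc comparison (such as Tukey's test) to identify which die angles actually differ, as done in the study.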
Procedia PDF Downloads 110