Search results for: technology and innovation
810 The Effect of Applying the Electronic Supply System on the Performance of the Supply Chain in Health Organizations
Authors: Sameh S. Namnqani, Yaqoob Y. Abobakar, Ahmed M. Alsewehri, Khaled M. AlQethami
Abstract:
The main objective of this research is to determine the impact of applying an electronic supply system on the performance of the supply department of health organizations. To reach this goal, the study adopted independent variables to measure the dependent variable (performance of the supply department), namely: integration with suppliers, integration with intermediaries and distributors, and knowledge of supply volume, inventory, and demand. The study used the descriptive method, aided by a questionnaire distributed to a sample of workers in the Supply Chain Management Department of King Abdullah Medical City. After statistical analysis, the results showed that the 70 sample members strongly agree with the (electronic integration with suppliers) axis, with a p-value of 0.001, especially with regard to the following: opening formal and informal communication channels between management and suppliers (mean 4.59) and exchanging information with suppliers with transparency and clarity (mean 4.50). The sample members also agree on the axis of (electronic integration with brokers and distributors), with a p-value of 0.001, represented in the following elements: exchange of information between management, brokers, and distributors with transparency and clarity (mean 4.18), and a close cooperative relationship between management, brokers, and distributors (mean 4.13). The results also indicated that the respondents agreed to some extent on the axis of (knowledge of the volume of supply, stock, and demand), with a p-value of 0.001, and that they strongly agree that a relationship exists between electronic procurement and the performance of the procurement department in health organizations, with a p-value of 0.001, represented in the following: transparency and clarity in dealing with suppliers and intermediaries to prevent fraud and manipulation (mean 4.50) and reduced costs of supplying the needs of the health organization (mean 4.50). From these results, the study offered several recommendations, the most important of which are: that health organizations increase the level of information sharing with their suppliers in order to implement electronic procurement in supply management; that attention be paid to electronic data interchange methods and to modern software that enables supply management to exchange information with brokers and distributors on the volume of supply, inventory, and demand; and that scientific supply and storage methods be applied to determine the volume of supply, inventory, and demand. Information technology, for example electronic data and document interchange techniques, can help in contacting suppliers, brokers, and distributors and in determining the volume of supply, inventory, and demand, which contributes to improving the performance of the supply department in health organizations.
Keywords: healthcare supply chain, performance, electronic system, ERP
Procedia PDF Downloads 136
809 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE, and 4) dispatchable power sources such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries over the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. The model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand and ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper explicitly accounts for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus for the expected power output from RE technologies, rather than assuming known future power output. The second level is operationalized by introducing a Conditional Value at Risk (CVaR) constraint into the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5%, and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
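As a rough illustration of how a Conditional Value at Risk constraint enters a capacity-planning problem of this kind, the sketch below sets up the standard Rockafellar-Uryasev linearization as a linear program over synthetic weather and demand scenarios. The technologies, costs, scenario profiles, and risk threshold are assumed placeholders, not the paper's model or data.

```python
# Hypothetical sketch of a CVaR-constrained capacity portfolio; all numbers are made up.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, techs = 500, ["wind", "solar", "storage_equiv", "biomass"]   # scenarios, technologies
n = len(techs)
cost = np.array([1.2, 0.9, 1.8, 2.5])          # assumed cost per unit capacity
gen = rng.uniform(0.05, 0.95, size=(S, n))     # per-unit output in each weather scenario
demand = rng.uniform(0.8, 1.2, size=S)         # normalized demand scenarios
beta, eps = 0.95, 0.02                         # CVaR level and allowed expected shortfall

# Decision vector z = [capacities (n), alpha (VaR), u_s (S excess shortfalls)]
c = np.concatenate([cost, [0.0], np.zeros(S)])

# CVaR(shortfall) = alpha + (1/((1-beta)*S)) * sum(u_s) <= eps
cvar_row = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])

# u_s >= demand_s - gen_s @ x - alpha  (Rockafellar-Uryasev linearization)
short_rows = np.hstack([-gen, -np.ones((S, 1)), -np.eye(S)])

A_ub = np.vstack([cvar_row, short_rows])
b_ub = np.concatenate([[eps], -demand])
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(dict(zip(techs, np.round(res.x[:n], 3))), "cost:", round(res.fun, 3))
```

Tightening eps or raising beta makes the shortfall constraint stricter, which is where the complement-or-substitute behaviour of the technologies described in the abstract would show up in the optimal mix.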
Procedia PDF Downloads 171
808 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained with repeatability, and the finishing operations require intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. The presentation is focused on the mechanical characterization and on the analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness based on the cracks that appear in the indentation, while the mechanical impulse excitation test estimates the Young's modulus, shear modulus, and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. The tests were designed according to the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (better roughness using cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
Keywords: CNC machining, crystal glass, cutting forces, hardness
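For readers unfamiliar with the Taguchi approach mentioned above, the sketch below shows the usual "smaller is better" signal-to-noise calculation over an L9 orthogonal array; the factor levels and roughness values are invented for illustration and are not the measured data.

```python
# Hypothetical Taguchi-style analysis of a grinding experiment (illustrative data only).
import numpy as np

# Orthogonal L9 array: columns = feed rate, spindle speed, depth of cut (levels 1-3)
L9 = np.array([[1,1,1],[1,2,2],[1,3,3],[2,1,2],[2,2,3],[2,3,1],[3,1,3],[3,2,1],[3,3,2]])
Ra = np.array([0.82, 0.74, 0.91, 0.65, 0.70, 0.88, 0.60, 0.79, 0.95])  # surface roughness, um

# "Smaller is better" signal-to-noise ratio, as used for roughness or cutting force
sn = -10.0 * np.log10(Ra ** 2)

# Mean S/N response per factor level; the level with the highest mean S/N is preferred
for j, factor in enumerate(["feed rate", "spindle speed", "depth of cut"]):
    means = [sn[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    print(factor, [round(m, 2) for m in means], "-> best level:", int(np.argmax(means)) + 1)
```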
Procedia PDF Downloads 155
807 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method
Authors: Emmanuel Ophel Gilbert, Williams Speret
Abstract:
The control of the flow behaviour of viscous fluids and of heat transfer within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique, and the simulation is validated by comparing the accessible practical values with the predicted local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this had a proportional effect on the minor and major frictional losses, mostly at very small Reynolds numbers of about 60-80. At these low Reynolds numbers, the viscosity and the minute frictional losses decrease as the temperature of the viscous liquids increases. Three equations and models were identified which, supported by the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, give a reliable basis for engineering and technology calculations of turbulence-impacting jets in the near future. Among the governing equations that could support this control of the fluid flow, the Navier-Stokes equations were found to be tangential to this finding, though other physical factors related to these equations must be checked to avoid uncertain turbulence of the fluid flow. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations at the cost of dropping certain assumptions in the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids
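As a small companion to the friction-factor discussion above, the sketch below applies the standard laminar-flow relations (Reynolds number, Darcy friction factor f = 64/Re, and the Darcy-Weisbach pressure drop); the fluid properties and channel dimensions are assumed placeholders rather than the simulated cases.

```python
# Illustrative laminar mini-channel calculation; property values are assumptions.
def laminar_minichannel(rho, mu, velocity, d_h, length):
    """Return (Reynolds number, Darcy friction factor, pressure drop in Pa)."""
    re = rho * velocity * d_h / mu
    if re >= 2300:
        raise ValueError("flow is not laminar; f = 64/Re no longer applies")
    f = 64.0 / re                                   # Darcy friction factor, laminar flow
    dp = f * (length / d_h) * 0.5 * rho * velocity ** 2
    return re, f, dp

# Example: oil-like fluid at a low Reynolds number (placeholder values)
re, f, dp = laminar_minichannel(rho=870.0, mu=0.012, velocity=0.5, d_h=2e-3, length=0.1)
print(f"Re = {re:.1f}, f = {f:.3f}, pressure drop = {dp:.1f} Pa")
```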
Procedia PDF Downloads 177
806 The Importance of Efficient and Sustainable Water Resources Management and the Role of Artificial Intelligence in Preventing Forced Migration
Authors: Fateme Aysin Anka, Farzad Kiani
Abstract:
Forced migration is a situation in which people are forced to leave their homes against their will due to political conflicts and wars, natural disasters, climate change, economic crises, or other emergencies. This type of migration takes place under conditions in which people cannot lead a sustainable life for reasons such as security, shelter, and meeting their basic needs, and it may occur in connection with different factors that affect people's living conditions. In addition to these general and widespread causes, water security and water resources are a cause that is emerging now and will be encountered more and more in the future. Forced migration may occur due to insufficient or depleted water resources in the areas where people live. In this case, people's living conditions become unsustainable, and they may have to move elsewhere, as they cannot obtain basic needs such as drinking water and the water used for agriculture and industry. To cope with these situations, it is important to minimize the causes, and international organizations and societies must provide assistance (for example, humanitarian aid, shelter, medical support, and education) and protection to address, or at least mitigate, this problem. From the international perspective, plans such as the Green New Deal (GND) and the European Green Deal (EGD) draw attention to the need for people to live equally in a cleaner and greener world. Recently, with the advancement of technology, scientific methods have become more efficient. In this regard, this article presents a multidisciplinary case model that addresses the water problem with an engineering approach within the framework of its social dimension. It is worth emphasizing that this problem is largely linked to climate change and the lack of a sustainable water management perspective; indeed, the United Nations Development Agency (UNDA) draws attention to this problem in its universally accepted sustainable development goals. Therefore, an artificial intelligence-based approach has been applied to solve this problem by focusing on the water management problem. The most general but also most important aspect of the management of water resources is their correct consumption. In this context, the artificial intelligence-based system undertakes tasks such as water demand forecasting and distribution management, emergency and crisis management, water pollution detection and prevention, and maintenance and repair control and forecasting.
Keywords: water resource management, forced migration, multidisciplinary studies, artificial intelligence
Procedia PDF Downloads 87
805 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes
Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini
Abstract:
Background: There is growing evidence that cerebral vascular accident (CVA) can be a consequence of COVID-19 infection, so understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of virtual reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed an ischemic stroke of the right globus pallidus, thalamus, and internal capsule. Conventional rehabilitation was started two weeks later, with VR included; the game-based VR technology used was developed for stroke patients and is based on upper extremity exercises and functions. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency, and mild lower lip dynamic asymmetry was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed from upper extremity physiotherapy exercises for post-stroke patients to increase the active, voluntary movement of the upper extremity joints and improve function. The conventional program was initiated with active exercises: shoulder sanding for joint range of motion, shoulder wall-walking, the shoulder wheel, and combined movements of the shoulder, elbow, and wrist joints with alternating flexion-extension and pronation-supination, as well as pegboard and Purdue pegboard exercises. Fine-movement training included smart gloves, biofeedback, the finger ladder, and writing. The difficulty of the game increased at each stage of practice as the patient's performance progressed. Outcome: After 6 weeks of treatment, gait and speech were normal and upper extremity strength had improved to near-normal status. No adverse effects were noted. Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases provides new approaches to improve therapeutic outcomes and prognosis as well as increased satisfaction among patients.
Keywords: covid-19, stroke, virtual reality, rehabilitation
Procedia PDF Downloads 143
804 The Ideal Memory Substitute for Computer Memory Hierarchy
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage or memory is an essential part of this team apart from the processor. The memory and storage system, including processor caches, main memory, and storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in their manufacture. These memory characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance, and the effective and efficient use of this memory hierarchy is therefore the single most important aspect of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. With the dramatic development of computer systems, computer storage has had a difficult time keeping up with processor speed. Computer architects therefore face constant challenges in developing high-speed, high-performance computer storage that is energy-efficient, cost-effective, and reliable enough to intercept processor requests. It is very clear that substantial advancements in redesigning the existing physical and logical memory structures to match the latest processor potential are crucial. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of today's hierarchy were investigated, looking at the design technologies and at how these technologies affect the memory characteristics: speed, density, stability, and cost. The investigation considered how these characteristics could best be harnessed for the overall efficiency of the computer system. The research revealed that the best single type of storage, which we refer to as ideal memory, is a logically single physical memory that would combine the best attributes of each memory type making up the memory hierarchy: a single memory with access speed as high as that found in CPU registers, combined with the highest storage capacity, offering the excellent stability in the presence or absence of power found in magnetic and optical disks (as against volatile DRAM), and yet with a cost-effectiveness far from that of expensive SRAM. The research work suggests that to overcome these barriers, memory manufacturing may have to deviate totally from present technologies and adopt one that overcomes the challenges associated with traditional memory technologies.
Keywords: cache, memory-hierarchy, memory, registers, storage
Procedia PDF Downloads 167
803 Development of a Test Plant for Parabolic Trough Solar Collectors Characterization
Authors: Nelson Ponce Jr., Jonas R. Gazoli, Alessandro Sete, Roberto M. G. Velásquez, Valério L. Borges, Moacir A. S. de Andrade
Abstract:
The search for increased efficiency in generation systems has been of great importance in recent years to reduce the impact of greenhouse gas emissions and global warming. For clean energy sources, such as generation systems that use concentrated solar power technology, this efficiency improvement translates into a lower investment per kW, improving the project's viability. For the specific case of parabolic trough solar concentrators, performance is strongly linked to the geometric precision of assembly and to the individual efficiencies of the main components, such as the parabolic mirrors and receiver tubes. Thus, for an accurate efficiency analysis, the assessment should be conducted empirically, under mounting and operating conditions like those observed in the field. The Brazilian power generation and distribution company Eletrobras Furnas, through the R&D program of the National Agency of Electrical Energy, has developed a plant for testing parabolic trough concentrators located in Aparecida de Goiânia, in the state of Goiás, Brazil. The main objective of this test plant is the characterization of the prototype concentrator being developed by the company itself in partnership with Eudora Energia, seeking to optimize it to obtain the same or better efficiency than the concentrators of this type already known commercially. The test plant is a closed pipe system in which a pump circulates a heat transfer fluid, also called HTF, through the concentrator being characterized. A flow meter and two temperature transmitters, installed at the inlet and outlet of the concentrator, record the parameters necessary to determine the power absorbed by the system and then calculate its efficiency based on the direct solar irradiation available during the test period. After the HTF gains heat in the concentrator, it flows through heat exchangers that allow the acquired energy to be dissipated to the ambient air. The goal is to keep the concentrator inlet temperature constant throughout the desired test period. The developed plant performs the tests autonomously: the operator must enter into the control system the HTF flow rate, the desired concentrator inlet temperature, and the test time. This paper presents the methodology employed for design and operation, as well as the instrumentation needed for the development of a parabolic trough test plant, serving as a guideline for standardizing such facilities.
Keywords: parabolic trough, concentrated solar power, CSP, solar power, test plant, energy efficiency, performance characterization, renewable energy
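The efficiency calculation implied above (thermal power absorbed by the HTF divided by the direct irradiance on the collector aperture) can be sketched as follows; the flow rate, fluid properties, aperture area, and measured temperatures are assumed values, not the plant's instrumentation readings.

```python
# Rough illustration of an instantaneous collector efficiency; all inputs are placeholders.
def collector_efficiency(m_dot, cp, t_in, t_out, dni, aperture_area):
    """Instantaneous thermal efficiency of a parabolic trough collector."""
    q_absorbed = m_dot * cp * (t_out - t_in)      # W, from flow meter and temperature transmitters
    q_incident = dni * aperture_area              # W, direct normal irradiance times aperture area
    return q_absorbed / q_incident

eta = collector_efficiency(m_dot=1.5,             # kg/s, assumed HTF mass flow
                           cp=2300.0,             # J/(kg K), assumed HTF specific heat
                           t_in=180.0, t_out=190.0,   # degC at inlet and outlet
                           dni=850.0,             # W/m2
                           aperture_area=60.0)    # m2
print(f"instantaneous efficiency ~ {eta:.2f}")
```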
Procedia PDF Downloads 119
802 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23
Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov
Abstract:
We have analyzed the time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona from 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations of the Solar X-ray Spectrometer (SOXS) mission in four energy bands: 4-5.5 and 5.5-7.5 keV (SXR), and 15-20 and 20-25 keV (HXR). The application of the Lomb-Scargle periodogram technique to the DXI time series observed by the Silicium detector in these energy bands reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years, and 1.54 years in the SXR as well as the HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission than in SXR emission. These periods are distinctly clear in three bands but somewhat ambiguous in the 5.5-7.5 keV band. This might be due to the presence of iron (Fe) and iron/nickel (Fe/Ni) line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation period and the 13.5-day period of sunspots on the invisible side of the Sun are found to be stronger in the HXR band than in the SXR band. The flare-activity Rieger periods (150 and 180 days) and the near-Rieger period of 220 days are very strong in HXR emission, which is very much expected. On the other hand, our study reveals a strong 270-day periodicity in SXR emission, which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities found in this work are well observable in both the SXR and HXR channels. These long-term periodicities must also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also of great significance for the formation of life and its evolution on Earth, and therefore have astro-biological importance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP, affiliated to the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper contains material from the pilot project and research part of the M.Tech program carried out during the Space and Atmospheric Science Course.
Keywords: corona, flares, solar activity, X-ray emission
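A minimal sketch of the Lomb-Scargle step described above is shown below on a synthetic, unevenly sampled daily index; the real DXI series and the confidence-level estimation are not reproduced here.

```python
# Illustrative Lomb-Scargle periodogram on a synthetic stand-in for the DXI data.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.choice(np.arange(0.0, 2190.0), size=1500, replace=False))  # ~6 yr with gaps
dxi = (np.sin(2 * np.pi * t / 27.0)           # solar-rotation-like 27-day signal
       + 0.5 * np.sin(2 * np.pi * t / 151.0)  # Rieger-like 151-day signal
       + 0.3 * rng.standard_normal(t.size))   # noise

periods = np.linspace(5.0, 400.0, 2000)               # days
omega = 2 * np.pi / periods                           # angular frequencies for lombscargle
power = lombscargle(t, dxi - dxi.mean(), omega)

for p in (27.0, 151.0):
    idx = np.argmin(abs(periods - p))
    print(f"power near {p:.0f} d: {power[idx]:.1f}")
```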
Procedia PDF Downloads 345
801 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results; the challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in the training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel; it plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable, and the different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence, and the performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done in the cloud environment available at the University of Technology and Management, India, and the results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves the effects of outliers, imbalance, and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
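A simplified, single-machine sketch of the fuzzy-membership weighting described above is given below, using a distance-to-class-center membership and a hyperbolic tangent (sigmoid) kernel SVM; it is not the parallel MapReduce implementation, and the data and parameters are illustrative assumptions.

```python
# Samples far from their class center get smaller weights before SVM training.
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y, delta=1e-3):
    """Membership of each sample based on its distance to the class center."""
    m = np.empty(len(y), dtype=float)
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + delta)      # closer to the center -> weight near 1
    return m

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)

weights = fuzzy_memberships(X, y)
clf = SVC(kernel="sigmoid", C=10.0, gamma="scale")    # hyperbolic tangent kernel
clf.fit(X, y, sample_weight=weights)
print("support vectors:", clf.n_support_, "train accuracy:", clf.score(X, y))
```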
Procedia PDF Downloads 491
800 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. Protein regression is the problem to solve in the first dataset, while variety classification is the problem to solve in the second. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
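Two of the preprocessing steps listed above, SNV and Savitzky-Golay filtering, can be sketched as follows on synthetic spectra; the data and filter settings are assumptions, not the study's pipeline.

```python
# SNV and Savitzky-Golay smoothing applied to a batch of synthetic NIR spectra.
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

rng = np.random.default_rng(3)
wavelengths = np.linspace(900, 1700, 200)                        # nm
spectra = (np.exp(-((wavelengths - 1200) / 150) ** 2)[None, :]   # absorption-like band
           + 0.02 * rng.standard_normal((64, 200))               # measurement noise
           + rng.uniform(0.0, 0.3, (64, 1)))                     # additive scatter offset

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
preprocessed = snv(smoothed)
print(preprocessed.shape, preprocessed.mean(axis=1)[:3])          # per-spectrum mean ~ 0
```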
Procedia PDF Downloads 100
799 Electronic Waste Analysis And Characterization Study: Management Input For Highly Urbanized Cities
Authors: Jilbert Novelero, Oliver Mariano
Abstract:
In a world where technological evolution and competition to create innovative products are at their peak, problems with electronic waste (E-waste) are becoming a global concern. E-waste is any electrical or electronic device that has reached the end of its useful life. The major issues are the volume of E-waste and the raw materials used in crafting it, which are non-biodegradable and contain hazardous substances that are toxic to human health and the environment. The objective of this study is to gather baseline data on the composition of E-waste in the solid waste stream and to determine the top five E-waste categories in a highly urbanized city. Recommendations for managing and reducing these wastes are provided, which may serve as a guide for acceptance and implementation in the locality. Pasig City was the chosen beneficiary of the research output, and through the collaboration of the City Government of Pasig and its Solid Waste Management Office (SWMO), the researcher successfully conducted the Electronic Waste Analysis and Characterization Study (E-WACS) to achieve these objectives. The E-WACS conducted in April 2019 showed that E-waste ranked fourth, comprising 10.39% of the overall solid waste volume: of the 345,127.24 kg of total daily domestic waste generated in the city, E-waste accounts for 35,858.72 kg. Moreover, an average of 40 grams of E-waste was determined to be generated per person per day. The top five E-waste categories were then classified after the analysis. The category that ranked first was office and telecommunications equipment, which contained 63.18% of the total generated E-waste. Second was the household appliances category, with 21.13%; third was the lighting devices category, with 8.17%; fourth was the consumer electronics and batteries category, with 5.97%; and fifth was the wires and cables category, which comprised 1.41% of the average generated E-waste samples. One of the recommendations provided in this research is the implementation of the Pasig City Waste Advantage Card. The card can be used as a privilege card, and earned points can be converted to avail of services such as haircuts, massages, dental services, medical check-ups, and so on. Another recommendation is for the LGU to encourage dialogue with technology and electronics manufacturers and distributors and with international and local companies to plan the retrieval and disposal of E-waste in accordance with the Extended Producer Responsibility (EPR) policy, in which producers are given significant responsibilities for the treatment and disposal of post-consumer products.
Keywords: E-waste, E-WACS, E-waste characterization, electronic waste, electronic waste analysis
Procedia PDF Downloads 118
798 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution, and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of this array forms a dot-shape beam with more concentrated energy, and its resolution and sidelobe-level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, and the resulting estimation error in the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shape beam; main-lobe deviation and high sidelobe levels also appear in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference-subspace and noise-subspace eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, correcting the divergence of the small eigenvalues and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shape beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better performance.
Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
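One plausible reading of the eigenvalue-correction step described above is sketched below for a conventional uniform linear array: the small (noise-subspace) eigenvalues of the snapshot-limited sample covariance are flattened with a correction index before forming the MVDR (single-constraint LCMV) weights. The array geometry, signals, and correction rule are illustrative assumptions, not the paper's multi-carrier FDA algorithm.

```python
# Snapshot-limited MVDR/LCMV weighting with an assumed exponential eigenvalue correction.
import numpy as np

rng = np.random.default_rng(4)
n_el, n_snap, q = 16, 20, 0.3            # elements, limited snapshots, correction index

def steer(theta_deg):
    """Narrowband steering vector of a half-wavelength-spaced ULA."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_el))

a_des = steer(0.0)                        # desired direction
snaps = (np.outer(a_des, rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
         + 0.5 * np.outer(steer(35.0), rng.standard_normal(n_snap))        # interferer
         + 0.1 * (rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap))))

R = snaps @ snaps.conj().T / n_snap       # sample covariance (poorly estimated)

# Eigen-correction: keep the dominant eigenvalues, flatten the small (noise) ones
eigval, V = np.linalg.eigh(R)             # ascending order
n_keep = 2                                # assumed signal + interference subspace size
noise = eigval[:-n_keep]
noise_corr = noise.mean() * (noise / noise.mean()) ** q
R_corr = (V * np.concatenate([noise_corr, eigval[-n_keep:]])) @ V.conj().T

# MVDR (single-constraint LCMV) weights with the corrected covariance
Rinv_a = np.linalg.solve(R_corr, a_des)
w = Rinv_a / (a_des.conj() @ Rinv_a)
print("response toward 0 deg:", abs(w.conj() @ a_des),
      "toward 35 deg:", abs(w.conj() @ steer(35.0)))
```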
Procedia PDF Downloads 131
797 The Impact of Technology and Artificial Intelligence on Children in Autism
Authors: Dina Moheb Rashid Michael
Abstract:
A descriptive statistical analysis of the data showed that the most important factor evoking negative attitudes among teachers is student behavior. Syndromic conditions have been presented as useful models for understanding the risk factors and protective factors associated with the emergence of autistic traits. Although these "syndromic" forms of autism reach clinical thresholds, they appear to be distinctly different from the idiopathic or "non-syndromic" autism phenotype. Most teachers reported that kindergartens did not prepare them for the educational needs of children with autism, particularly in relation to non-verbal skills. The study is important and points the way toward improving inclusive teacher education in Thailand. Inclusive education for students with autism is still in its infancy in Thailand; although the number of autistic children in schools has increased significantly since the Thai government introduced the Education Regulations for Persons with Disabilities Act in 2008, there is a general lack of services for autistic students and their families. This quantitative study used the Teaching Skills and Readiness Scale for Students with Autism (APTSAS) to test the attitudes and readiness of 110 elementary school teachers when teaching students with autism in general education classrooms. To uncover the true nature of these comorbidities, it is necessary to expand the definition of autism to include the cognitive features of the disorder and then apply this expanded conceptualization to examine patterns of autistic syndromes. This study used several established eye-tracking paradigms to assess the visual and attentional performance of children with DS and FXS who meet the autism thresholds defined in the Social Communication Questionnaire, and to study whether the autistic profiles of these children are associated with visual orientation difficulties ("sticky attention"), decreased social attention, and increased visual search performance, all of which are hallmarks of the idiopathic autism phenotype. Data will be collected from children with DS and FXS, aged 6 to 10 years, and from two control groups matched for age and intellectual ability (i.e., children with idiopathic autism). To enable a comparison of visual attention profiles, cross-sectional analyses of developmental trajectories are carried out. Significant differences in the visual-attentional processes underlying the presentation of autism in children with FXS and DS are suggested, supporting the concept of syndrome specificity. The study provides insights into the complex heterogeneity associated with syndromic autism symptoms and autism itself, with clinical implications for the utility of autism intervention programs in DS and FXS populations.
Keywords: attitude, autism, teachers, sports activities, movement skills, motor skills
Procedia PDF Downloads 58
796 Rapid Plasmonic Colorimetric Glucose Biosensor via Biocatalytic Enlargement of Gold Nanostars
Authors: Masauso Moses Phiri
Abstract:
Frequent glucose monitoring is essential to the management of diabetes, and plasmonic enzyme-based glucose biosensors have the advantages of greater specificity, simplicity, and rapidity. The aim of this study was to develop a rapid plasmonic colorimetric glucose biosensor based on biocatalytic enlargement of gold nanostars (AuNS) guided by glucose oxidase (GOx). Gold nanoparticles of 18 nm in diameter were synthesized using the citrate method and, using these as seeds, a modified seeded method for the synthesis of monodispersed gold nanostars was followed. Both the spherical and star-shaped nanoparticles were characterized using ultraviolet-visible spectroscopy, agarose gel electrophoresis, dynamic light scattering, high-resolution transmission electron microscopy, and energy-dispersive X-ray spectroscopy. The feasibility of a plasmonic colorimetric assay through growth of AuNS by silver coating in the presence of hydrogen peroxide was investigated in several control and optimization experiments. Conditions for excellent sensing, such as the concentrations of the detection solution in the presence of 20 µL AuNS, 10 mM 2-(N-morpholino)ethanesulfonic acid (MES), ammonia, and hydrogen peroxide, were optimized. Using the optimized conditions, the glucose assay was developed by adding 5 mM GOx to the solution together with varying concentrations of glucose, and kinetic readings as well as color changes were observed. The results showed that the absorbance of the AuNS increased and blue-shifted as the concentration of glucose was elevated. Control experiments indicated no growth of AuNS in the absence of GOx, glucose, or molecular O₂, while increased glucose concentration led to enhanced growth of AuNS. The detection of glucose was also done by the naked eye, with color development near complete in about 10 minutes. The kinetic readings, monitored at 450 and 560 nm, showed that the assay could discriminate between different concentrations of glucose within about 50 seconds and was near complete at about 120 seconds. A calibration curve for the quantitative measurement of glucose was derived. The magnitude of the wavelength shifts and the absorbance values increased concomitantly with glucose concentration up to 90 µg/mL, beyond which the response leveled off. The lowest amount of glucose that could produce a blue shift in the localized surface plasmon resonance (LSPR) absorption maximum was in the range of 10 - 90 µg/mL, and the limit of detection was 0.12 µg/mL. This enabled the construction of a direct, sensitive plasmonic colorimetric detection of glucose using AuNS that is rapid, cost-effective, and readable by the naked eye, with great potential for technology transfer to point-of-care devices.
Keywords: colorimetric, gold nanostars, glucose, glucose oxidase, plasmonic
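As an illustration of the calibration-curve and detection-limit reasoning above, the sketch below fits a straight line to synthetic LSPR-shift data and applies the common 3.3*sigma/slope convention for the limit of detection; the numbers and the LOD rule are assumptions, not the study's measured values or procedure.

```python
# Illustrative calibration fit and LOD estimate on synthetic blue-shift data.
import numpy as np

conc = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)    # ug/mL glucose
shift = np.array([1.8, 3.9, 6.2, 8.1, 10.3, 11.9, 14.2, 16.0, 18.1])  # nm LSPR blue shift
blank_sd = 0.007                                                       # nm, std of blank replicates (assumed)

slope, intercept = np.polyfit(conc, shift, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((shift - pred) ** 2) / np.sum((shift - shift.mean()) ** 2)
lod = 3.3 * blank_sd / slope            # common convention for the limit of detection

print(f"shift = {slope:.3f}*C + {intercept:.3f}  (R^2 = {r2:.4f}),  LOD ~ {lod:.2f} ug/mL")
```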
Procedia PDF Downloads 153
795 Useful Lessons from the Success of Physics Outreach in Jamaica
Authors: M. J. Ponnambalam
Abstract:
Physics outreach in Jamaica has nearly tripled the number of students taking introductory calculus-based physics at the University of the West Indies (UWI, Mona) within 5 years, and has thus shown the importance of physics teaching and learning in informal settings. In 1899, the first president of the American Physical Society called physics "the science above all sciences." Sure enough, exactly one hundred years later, Time magazine proclaimed Albert Einstein "Person of the Century." Unfortunately, physics seems to be losing that glow in this century: many countries, big and small, are finding it difficult to attract bright young minds to pursue physics. At UWI, Mona, the number of students in first-year physics dropped to an all-time low of 81 in 2006, from more than 200 in the nineteen eighties, spelling disaster for the Physics Department. The author of this paper launched an aggressive physics outreach programme that same year, aimed at conveying to students and the general public the following messages: i) physics is an exciting intellectual enterprise, full of fun and delight; ii) physics is very helpful in understanding how things like TVs, CD players, cars, computers, X-rays, CT scans, and MRI work; iii) the critical and analytical thinking developed in the study of physics is of inestimable value in almost any field; and iv) physics is the core subject for science and technology, and hence for national development, with science literacy being a must for any nation in the 21st century. Hence, the outreach aims at reaching every person, through every possible means. The outreach work is split into the following target groups: i) universities, ii) high schools, iii) middle schools, iv) primary schools, v) the general public, and vi) physics teachers in high schools. The programmes, tools, and best practices are adjusted to suit each target group, and the feedback from each group is highly positive. For example, in February 2014 the author conducted in 3 primary schools an interactive show on "Science Is Fun" to stimulate 290 students' interest in science, with lively and interesting demonstrations and experiments presented in a highly interactive way using dramatization, story-telling, and dancing. The feedback: 47% found the show "exciting" and 51% found it "interesting", totaling an impressive 98%. When asked to describe the show in their own words, the leading four responses were "fun" (26%), "interesting" (20%), "exciting" (14%), and "educational" (10%), confirming that fun and education can go together. The success of physics outreach in Jamaica verifies the following words of Chodos, Associate Executive Officer of the American Physical Society: "If we could get members to go to K-12 schools and levitate a magnet or something, we really think these efforts would bring great rewards."
Keywords: physics education, physics popularization, UWI, Jamaica
Procedia PDF Downloads 410
794 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments, which means integrating data from various different sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the quantity of applications based on open data is increasing. However, each government has its own procedures for publishing its data, which results in a variety of data set formats, because there are no international standards specifying the formats of data sets in open data bases. Due to this variety of formats, we must build a data integration process that is able to put all kinds of formats together. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require data scientist interaction in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For two years, the government of Madrid has been publishing its open data bases of environmental indicators in real time. In the same way, other governments have published open data sets relative to the environment (such as Andalucia or Bilbao). All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. the integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a really powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and there are also R libraries, such as Shiny, for building graphic interfaces. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer so that they can build their own applications.
Keywords: open data, R language, data integration, environmental data
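The per-source adapter idea described above can be sketched as follows (the paper's implementations use Java/Oracle and R; this language-agnostic illustration uses Python). The portal file names, column mappings, and common schema are invented assumptions, not the actual portals' formats.

```python
# Each government portal gets a small adapter that maps its columns onto one common schema.
import pandas as pd

COMMON_COLUMNS = ["timestamp", "city", "station", "pollutant", "value", "unit"]

def adapt_madrid(df: pd.DataFrame) -> pd.DataFrame:
    out = df.rename(columns={"FECHA_HORA": "timestamp", "ESTACION": "station",
                             "MAGNITUD": "pollutant", "VALOR": "value"})
    out["city"], out["unit"] = "Madrid", "ug/m3"
    return out[COMMON_COLUMNS]

def adapt_bilbao(df: pd.DataFrame) -> pd.DataFrame:
    out = df.rename(columns={"date": "timestamp", "sensor_id": "station",
                             "parameter": "pollutant", "measurement": "value",
                             "units": "unit"})
    out["city"] = "Bilbao"
    return out[COMMON_COLUMNS]

ADAPTERS = {"madrid": adapt_madrid, "bilbao": adapt_bilbao}

def integrate(sources: dict) -> pd.DataFrame:
    """sources maps an adapter name to a CSV path; returns one harmonized table."""
    frames = [ADAPTERS[name](pd.read_csv(path)) for name, path in sources.items()]
    merged = pd.concat(frames, ignore_index=True)
    merged["timestamp"] = pd.to_datetime(merged["timestamp"], errors="coerce")
    return merged.dropna(subset=["timestamp", "value"])

# integrated = integrate({"madrid": "data/madrid.csv", "bilbao": "data/bilbao.csv"})
```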
Procedia PDF Downloads 315
793 Assessment of Rural Youth Adoption of Cassava Production Technologies in Southwestern Nigeria
Authors: J. O. Ayinde, S. O. Olatunji
Abstract:
This study assessed rural youth adoption of cassava production technologies in Southwestern Nigeria. Specifically, it examined the level of awareness and adoption of cassava production technologies by rural youth, determined the extent of usage of the cassava production technologies available to them, examined constraints to the adoption of these technologies by youth, and suggested possible solutions. A multistage sampling procedure was adopted for the study. In the first stage, two states, Osun and Oyo, were purposively selected in Southwest Nigeria due to the high level of cassava production and access to cassava production technology in these areas. In the second stage, a purposive sampling technique was used to select two local governments from each of the selected states: Ibarapa Central (Igbo-Ora) and Ibarapa East (Eruwa) Local Government Areas (LGAs) in Oyo State, and Ife North (Ipetumodu) and Ede South (Oke Ireesi) LGAs in Osun State. In the third stage, a proportionate sampling technique was used to randomly select five, four, six, and four communities from the selected LGAs respectively, representing 20 percent of the rural communities in them; in all, 19 communities were selected. In the fourth stage, a snowball sampling technique was used to select about 7 rural youths in each selected community, making a total of 133 respondents. A validated structured interview schedule was used to elicit information from the respondents. The data collected were analyzed using both descriptive and inferential statistics to summarize the data and test the hypotheses of the study. The results show that the average age of rural youths participating in cassava production in the study area is 29 ± 2.6 years, with 60 percent aged between 30 and 35 years, and that more males (67.4%) than females (32.6%) are involved in cassava production. The results also reveal that the average farm size of the respondents is 2.5 ± 0.3 hectares. The extent of usage of the technologies (r = 0.363, p ≤ 0.01) shows a significant relationship with the level of adoption of the technologies. Household size (b = 0.183; p ≤ 0.01) and membership of social organizations (b = 0.331; p ≤ 0.01) were significant at 0.01, while age (b = 0.097; p ≤ 0.05) was significant at 0.10. On the other hand, years of residence (b = -0.063; p ≤ 0.01) and income (b = -0.204; p ≤ 0.01) had negative coefficients, implying that a unit increase in each of these variables would decrease the extent of usage of the cassava production technologies. It was concluded that the extent of usage of the technologies in the communities will affect the rate of adoption positively, and this will change the negative perception of youths about cassava production, thereby ensuring food security in the study area.
Keywords: assessment, rural youths', Cassava production technologies, agricultural production, food security
Procedia PDF Downloads 208
792 Risks of Investment in the Development of Its Personnel
Authors: Oksana Domkina
Abstract:
According to modern economic theory, human capital has become one of the main production factors and one of the most promising directions of investment, as such investment provides the opportunity to obtain high and long-term economic and social effects. The information technology (IT) sector is the representative of this new economy that is most dependent on human capital as the main competitive factor. So the question for this sector is not whether investment in the development of personnel should be made, but what the most effective ways of executing it are and who has to pay for the education: the worker, the company, or the government. In this paper, we examine the IT sector, describe the labor market of IT workers and its development, and analyze the risks that IT companies may face if they invest in the development of their workers, along with the factors that influence these risks. The main problem and difficulty in quantitatively estimating and forecasting the risk of a company's investment in human capital is the human factor. Human behavior is often unpredictable and complex, so it requires specific approaches and methods of assessment. To build a comprehensive method of estimating the risk of investment in the human capital of a company that takes the human factor into consideration, we decided to use the analytic hierarchy process (AHP) method. We separated three main groups of factors: risks related to the worker, risks related to the company, and external factors. To collect data for our research, we conducted a survey among the HR departments of Ukrainian IT companies and used them as experts for the AHP method. The results showed that IT companies mostly invest in the development of their workers, although several hire only already qualified personnel. According to the results, the most significant risks are the risk of ineffective training and the risk of non-investment, both of which are related to the firm. The analysis of risk factors related to the employee showed that the factors of personal reasons, motivation, and work performance have almost the same weights of importance. Regarding the internal factors of the company, the factor of compensation and benefits plays a large role, together with the factors of interesting projects, the team, and career opportunities. As for the external environment, one of the most dangerous risk factors is competitor activity, while the political and economic situation factor also has a relatively high weight, which is easily explained by the influence of the severe crisis in Ukraine during 2014-2015. The presented method allows all the main factors that affect the risk of investment in the human capital of a company to be taken into consideration. This gives a basis for further research in this field and allows for the creation of a practical framework for making decisions regarding personnel development strategy and specific employees' development plans in HR departments.
Keywords: risks, personnel development, investment in development, factors of risk, risk of investment in development, IT, analytic hierarchy process, AHP
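A minimal sketch of the AHP step described above is given below: priority weights for the three factor groups are derived from a pairwise comparison matrix via its principal eigenvector, with a consistency check. The comparison values are invented for illustration and are not the survey results.

```python
# Priority weights and consistency ratio from an assumed pairwise comparison matrix.
import numpy as np

groups = ["worker-related", "company-related", "external"]
# A[i, j] = how much more important group i is than group j (Saaty 1-9 scale; illustrative)
A = np.array([[1.0, 1/2, 3.0],
              [2.0, 1.0, 4.0],
              [1/3, 1/4, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # Saaty's random index for n = 3 is 0.58
print(dict(zip(groups, np.round(weights, 3))), f"CR = {cr:.3f} (acceptable if < 0.1)")
```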
Procedia PDF Downloads 301
791 Challenging Weak Central Coherence: An Exploration of Neurological Evidence from Visual Processing and Linguistic Studies in Autism Spectrum Disorder
Authors: Jessica Scher Lisa, Eric Shyman
Abstract:
Autism spectrum disorder (ASD) is a neuro-developmental disorder characterized by persistent deficits in social communication and social interaction (i.e., deficits in social-emotional reciprocity, nonverbal communicative behaviors, and establishing/maintaining social relationships), as well as by the presence of repetitive behaviors and perseverative areas of interest (i.e., stereotyped or repetitive motor movements, use of objects, or speech; rigidity; restricted interests; and hypo- or hyper-reactivity to sensory input or unusual interest in sensory aspects of the environment). Additionally, diagnoses of ASD require the presentation of symptoms in the early developmental period, marked impairments in adaptive functioning, and a lack of explanation by general intellectual impairment or global developmental delay (although these conditions may be co-occurring). Over the past several decades, many theories have been developed in an effort to explain the root cause of ASD in terms of atypical central cognitive processes. The field of neuroscience is increasingly finding structural and functional differences between autistic and neurotypical individuals using neuro-imaging technology. One main area this research has focused upon is visuospatial processing, with specific attention to the notion of "weak central coherence" (WCC). This paper offers an analysis of findings from selected studies in order to explore research that challenges the "deficit" characterization of weak central coherence theory in favor of a "superiority" characterization of strong local coherence. The weak central coherence theory has long been both supported and refuted in the ASD literature and has most recently been increasingly challenged by advances in neuroscience. The selected studies lend evidence to the notion of amplified localized perception rather than deficient global perception. In other words, WCC may represent superiority in "local processing" rather than a deficit in global processing. Additionally, the right hemisphere and, specifically, the extrastriate area appear to be key in both the visual and the lexicosemantic process. Overactivity in the striate region seems to suggest inaccuracy in semantic language, which lends support to the link between the striate region and the atypical organization of the lexicosemantic system in ASD.
Keywords: autism spectrum disorder, neurology, visual processing, weak coherence
Procedia PDF Downloads 130790 Enhancing Photocatalytic Hydrogen Production: Modification of TiO₂ by Coupling with Semiconductor Nanoparticles
Authors: Saud Hamdan Alshammari
Abstract:
Photocatalytic water splitting to produce hydrogen (H₂) has attracted significant attention as an environmentally friendly technology. This process, which produces hydrogen from water and sunlight, represents a renewable energy source. Titanium dioxide (TiO₂) plays a critical role in photocatalytic hydrogen production due to its chemical stability, availability, and low cost. Nevertheless, TiO₂'s wide band gap (3.2 eV) limits its visible light absorption and reduces its photocatalytic effectiveness. Coupling TiO₂ with other semiconductors is a strategy that can enhance TiO₂ by narrowing its band gap and improving visible light absorption. This paper studies the modification of TiO₂ by coupling it with another semiconductor, CdS nanoparticles, using reflux and autoclave reactors to form a core-shell structure. Characterization techniques, including TEM and UV-Vis spectroscopy, confirmed successful coating of TiO₂ on the CdS core, reduction of the band gap from 3.28 eV to 3.1 eV, and enhanced light absorption in the visible region. These modifications are attributed to the heterojunction structure between TiO₂ and CdS. The essential goal of this study is to improve TiO₂ for use in photocatalytic water splitting to enhance hydrogen production. The core-shell TiO₂@CdS nanoparticles exhibited promising results due to band gap narrowing and improved light absorption. Future work will involve adding Pt as a co-catalyst, which is known to increase surface reaction activity by enhancing proton adsorption. Evaluation of the TiO₂@CdS@Pt catalyst will include performance assessments and hydrogen productivity tests, considering factors such as effective shapes and material ratios. Moreover, further modifications to the catalyst and additional performance evaluations could strengthen the study. For instance, doping TiO₂ with metals such as nickel (Ni), iron (Fe), and cobalt (Co) and with non-metals such as nitrogen (N), carbon (C), and sulfur (S) could positively influence the catalyst by reducing the band gap, enhancing the separation of photogenerated electron-hole pairs, and increasing the surface area. Additionally, to further improve catalytic performance, examining different catalyst morphologies, such as nanorods, nanowires, and nanosheets, in hydrogen production could be highly beneficial. Optimizing photoreactor design for efficient photon delivery and illumination will further enhance the photocatalytic process. These strategies collectively aim to overcome current challenges and improve the efficiency of hydrogen production via photocatalysis.Keywords: hydrogen production, photocatalytic, water splitting, semiconductor, nanoparticles
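Band gap values such as those quoted above are typically read off UV-Vis data with a Tauc plot. The sketch below is a minimal, hypothetical illustration of that procedure in Python; the toy spectrum, the direct-gap exponent, and the fitting window are assumptions, not the study's measurements.

```python
# Minimal Tauc-plot band-gap estimate from UV-Vis absorbance (illustrative only).
import numpy as np

def tauc_band_gap(wavelength_nm, absorbance, fit_window_ev=(3.0, 3.3), n=0.5):
    """Estimate a direct band gap (n=0.5) by extrapolating (alpha*h*nu)^(1/n) to zero."""
    h_nu = 1239.84 / np.asarray(wavelength_nm)            # photon energy in eV
    tauc = (np.asarray(absorbance) * h_nu) ** (1.0 / n)   # absorbance used as a proxy for alpha
    mask = (h_nu >= fit_window_ev[0]) & (h_nu <= fit_window_ev[1])
    slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
    return -intercept / slope                              # x-intercept = band gap estimate (eV)

# Usage with a toy absorption edge; the fit window must lie on the linear rise
# of the experimental Tauc curve for the extrapolation to be meaningful.
wl = np.linspace(300, 500, 400)
abs_toy = np.clip((420.0 - wl) / 80.0, 0.0, None)
print(round(tauc_band_gap(wl, abs_toy), 2))                # estimate for the toy spectrum
```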
Procedia PDF Downloads 23789 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait
Authors: Obaid AlOtaibi, Salman Hussain
Abstract:
Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography, and environmental impacts, as well as several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and least expensive method of generating electricity. Wind energy generation is directly related to the spatial characteristics of wind. Therefore, the feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects far from conventional electricity services, in order to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied a suitability model in the ArcGIS program to carry out the analysis. It is a spatial analysis model that compares multiple locations based on grading weights to choose the most suitable one. The study criteria are: average annual wind speed, land use, topography, distance from the main road network, and distance from urban areas. According to these criteria, four proposed locations for establishing wind farm projects were selected based on the weights of the degree of suitability (excellent, good, average, and poor). The areas that represent the most suitable locations, with an excellent rank (4), make up 8% of Kuwait’s area, distributed as follows: Al-Shqaya, Al-Dabdeba, Al-Salmi (5.22%), Al-Abdali (1.22%), Umm al-Hayman (0.70%), North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider the proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for the future development of wind farms in Kuwait, as this location is economically feasible.Keywords: Kuwait, renewable energy, spatial analysis, wind energy
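The suitability model described above is, at its core, a weighted overlay of reclassified criterion layers. The following Python sketch illustrates that scoring step on tiny stand-in arrays; the weights and class scores are illustrative assumptions, not the values used in the study.

```python
# Weighted-overlay suitability sketch using NumPy arrays as stand-ins for the
# reclassified ArcGIS criterion rasters (illustrative weights and scores).
import numpy as np

criteria = {                        # each raster already reclassified to scores 1 (poor) .. 4 (excellent)
    "wind_speed":     np.array([[4, 3], [2, 4]]),
    "land_use":       np.array([[4, 4], [1, 3]]),
    "topography":     np.array([[3, 4], [2, 4]]),
    "road_distance":  np.array([[4, 2], [3, 4]]),
    "urban_distance": np.array([[3, 3], [4, 4]]),
}
weights = {"wind_speed": 0.35, "land_use": 0.20, "topography": 0.15,
           "road_distance": 0.15, "urban_distance": 0.15}

suitability = sum(weights[name] * raster for name, raster in criteria.items())
rank = np.digitize(suitability, bins=[1.75, 2.5, 3.25]) + 1   # 1 = poor .. 4 = excellent
print(suitability)
print(rank)
```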
Procedia PDF Downloads 151788 Development of a Process Method to Manufacture Spreads from Powder Hardstock
Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien
Abstract:
It has been more than 150 years since margarine was discovered and first manufactured using liquid oil, liquefied hardstock oils, and other oil-phase and aqueous-phase ingredients. Henry W. Bradley first used vegetable oils in the liquid state around 1871, and since then spreads have traditionally been manufactured using liquefied oils. The main objective of this study was to develop a process method to produce spreads using spray-dried hardstock fat powders as structuring fats in place of the liquid structuring fats currently used. A high shear mixing system was used to condition the fat phase, and the aqueous phase was prepared separately. Using a single scraped surface heat exchanger and a pin stirrer, margarine was produced. The process method was developed to produce spreads with 40%, 50%, and 60% fat. The developed method was divided into three steps. In the first step, fat powders were conditioned by melting and dissolving them into liquid oils. The liquefied portion of the oils was at 65 °C, whilst the spray-dried fat powder was at 25 °C. The two were mixed in a mixing vessel at 900 rpm for 4 minutes. The rest of the oil-phase ingredients, i.e., lecithin, colorant, vitamins, and flavours, were added at ambient conditions to complete the fat/oil phase. The water phase was prepared separately by mixing salt, water, preservative, and acidifier in the mixing tank. Milk was also prepared separately by pasteurizing it at 79 °C prior to feeding it into the aqueous phase. All the water phase contents were chilled to 8 °C. The oil phase and water phase were mixed in a tank, then fed into a single scraped surface heat exchanger. After the scraped surface heat exchanger, the emulsion was fed into a pin stirrer to work the formed crystals and produce margarine. The margarine produced using the developed process had fat levels of 40%, 50%, and 60%. The margarine passed all the qualitative, stability, and taste assessments. The scores were 6/10, 7/10, and 7.5/10 for the 40%, 50%, and 60% fat spreads, respectively. The success of the trials brought about differentiated knowledge on how to manufacture spreads using non-micronized spray-dried fat powders as hardstock. Manufacturers do not need to store structuring fats at 80-90 °C, even in winter; instead, they can adapt their processes to use fat powders, which need to be stored at only 25 °C. The developed process method used one scraped surface heat exchanger instead of the four to five currently used in votator-based plants. The use of a single scraped surface heat exchanger translated into about 61% energy savings, i.e., 23 kW per ton of product. Furthermore, the energy saved by implementing separate pasteurization was calculated to be 6.5 kW per ton of product.Keywords: margarine emulsion, votator technology, margarine processing, scraped surface heat exchanger, fat powders
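As a rough illustration of what the quoted per-ton savings imply at plant scale, the short calculation below combines the two figures from the abstract with an assumed annual throughput and electricity tariff; both assumptions are hypothetical.

```python
# Back-of-the-envelope plant-scale estimate (throughput and tariff are assumed).
sshe_saving_per_ton = 23.0           # quoted saving from using one scraped surface heat exchanger
pasteurization_saving_per_ton = 6.5  # quoted saving from separate pasteurization
total_saving_per_ton = sshe_saving_per_ton + pasteurization_saving_per_ton   # 29.5 per ton

annual_output_tons = 20_000          # assumed plant throughput
tariff_per_kwh = 0.10                # assumed electricity price, USD/kWh

# The abstract quotes the savings as kW per ton; they are treated here as kWh per ton
# of product for the annual energy and cost estimate.
annual_energy_saved = total_saving_per_ton * annual_output_tons
print(f"Energy saved: {annual_energy_saved:,.0f} kWh/year, "
      f"~USD {annual_energy_saved * tariff_per_kwh:,.0f}/year")
```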
Procedia PDF Downloads 90787 Kinematic Modelling and Task-Based Synthesis of a Passive Architecture for an Upper Limb Rehabilitation Exoskeleton
Authors: Sakshi Gupta, Anupam Agrawal, Ekta Singla
Abstract:
An exoskeleton design for rehabilitation purposes encounters many challenges, including ergonomically acceptable wearing technology, compatibility of the architectural design with human motion, actuation type, human-robot interaction, etc. In this paper, a passive architecture for an upper limb exoskeleton is proposed for assisting in rehabilitation tasks. Kinematic modelling is detailed for task-based kinematic synthesis of the wearable exoskeleton for self-feeding tasks. The exoskeleton architecture possesses expansion and torsional springs which are able to store and redistribute energy over the human arm joints. The elastic characteristics of the springs have been optimized to minimize the mechanical work of the human arm joints. A hybrid combination of a 4-bar parallelogram linkage and a serial linkage was chosen, where the 4-bar parallelogram linkage with an expansion spring acts as a rigid structure providing the rotational degree of freedom (DOF) required for lowering and raising the arm. The single linkage with a torsional spring allows for the rotational DOF required for elbow movement. The focus of the paper is the kinematic modelling, analysis, and task-based synthesis framework for the proposed architecture, keeping in consideration the essential tasks of self-feeding and self-exercising during the rehabilitation of a partially healthy person. Rehabilitation targets primary functional movements (activities of daily living, ADL), the routine activities that people perform every day, such as cleaning, dressing, and feeding. We focus on the feeding process to make people independent with respect to feeding tasks. The tasks are targeted at post-surgery patients under rehabilitation with less than 40% weakness. The main challenge addressed in this work is emulating the natural movement of the human arm. Human motion data are extracted through motion sensors for the targeted tasks of feeding and specific exercises. A task-based synthesis procedure framework is discussed for the proposed architecture. The results include the simulation of the architectural concept for tracking human-arm movements while displaying the kinematic and static study parameters for a standard human weight. D-H parameters are used for kinematic modelling of the hybrid mechanism, and the model is used in task-based optimal synthesis utilizing an evolutionary algorithm.Keywords: passive mechanism, task-based synthesis, emulating human-motion, exoskeleton
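As an illustration of the D-H-based kinematic modelling mentioned above, the sketch below chains standard D-H transforms for a simplified two-joint arm (raising/lowering plus elbow flexion). The link lengths and joint angles are hypothetical and do not come from the exoskeleton itself.

```python
# Minimal forward-kinematics sketch with Denavit-Hartenberg parameters
# (two illustrative revolute joints; not the actual exoskeleton model).
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard D-H homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-joint transforms; returns the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Assumed upper arm 0.30 m and forearm 0.25 m, both joints about parallel axes.
dh_table = [(0.0, 0.30, 0.0),   # (d, a, alpha) for the shoulder link
            (0.0, 0.25, 0.0)]   # (d, a, alpha) for the elbow link
pose = forward_kinematics(np.radians([30.0, 45.0]), dh_table)
print(pose[:3, 3])               # wrist position for a feeding-like posture
```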
Procedia PDF Downloads 138786 Virtual Team Performance: A Transactive Memory System Perspective
Authors: Belbaly Nassim
Abstract:
Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the globalization and decentralization pressures companies face when monitoring VT performance. Hence, organizations are regularly limited by the misalignment between the behavioral capabilities of the team’s dispersed competences and knowledge capabilities and by how trust issues interplay with and influence these VT dimensions and the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs manage their dispersed expertise, skills, and knowledge efficiently to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. TMS can be understood as the combination of a structural component residing in individual knowledge and a set of communication processes among individuals. Individual knowledge is shared as it is retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. TMS draws on the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. In fact, VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between TMS dimensions and VT creativity. We wish to use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity. This study also provides researchers with insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination
Procedia PDF Downloads 174785 Power Recovery from Waste Air of Mine Ventilation Fans Using Wind Turbines
Authors: Soumyadip Banerjee, Tanmoy Maity
Abstract:
The recovery of power from waste air generated by mine ventilation fans presents a promising avenue for enhancing energy efficiency in mining operations. This abstract explores the feasibility and benefits of utilizing turbine generators to capture the kinetic energy present in waste air and convert it into electrical power. By integrating turbine generator systems into mine ventilation infrastructures, the potential to harness and utilize the previously untapped energy within the waste air stream is realized. This study examines the principles underlying turbine generator technology and its application within the context of mine ventilation systems. The process involves directing waste air from ventilation fans through specially designed turbines, where the kinetic energy of the moving air is converted into rotational motion. This mechanical energy is then transferred to connected generators, which convert it into electrical power. The recovered electricity can be employed for various on-site applications, including powering mining equipment, lighting, and control systems. The benefits of power recovery from waste air using turbine generators are manifold. Improved energy efficiency within the mining environment results in reduced dependence on external power sources and associated cost savings. Additionally, this approach contributes to environmental sustainability by utilizing a previously wasted resource for power generation. Resource conservation is further enhanced, aligning with modern principles of sustainable mining practices. However, successful implementation requires careful consideration of factors such as waste air characteristics, turbine design, generator efficiency, and integration into existing mine infrastructure. Maintenance and monitoring protocols are necessary to ensure consistent performance and longevity of the turbine generator systems. While there is an initial investment associated with equipment procurement, installation, and integration, the long-term benefits of reduced energy costs and environmental impact make this approach economically viable. In conclusion, the recovery of power from waste air from mine ventilation fans using turbine generators offers a tangible solution to enhance energy efficiency and sustainability within mining operations. By capturing and converting the kinetic energy of waste air into usable electrical power, mines can optimize resource utilization, reduce operational costs, and contribute to a greener future for the mining industry.Keywords: waste to energy, wind power generation, exhaust air, power recovery
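The recoverable power scales with the cube of the exhaust-air velocity, which is what makes fan outlets attractive for recovery. The sketch below applies the standard wind-power relation P = 1/2 * rho * A * v^3 with a turbine power coefficient; the duct size, velocity, and coefficient are illustrative assumptions, not measured mine data.

```python
# Minimal kinetic-power estimate for a turbine placed in an exhaust-air stream
# (duct diameter, velocity, density and power coefficient are assumed values).
import math

def recoverable_power(diameter_m, velocity_ms, air_density=1.2, power_coeff=0.35):
    """Power (W) a turbine with the given power coefficient could extract from the stream."""
    area = math.pi * (diameter_m / 2) ** 2
    kinetic_power = 0.5 * air_density * area * velocity_ms ** 3   # P = 1/2 * rho * A * v^3
    return power_coeff * kinetic_power

# Example: 3 m diameter exhaust duct with a 15 m/s outlet velocity.
print(f"{recoverable_power(3.0, 15.0) / 1000:.1f} kW")
```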
Procedia PDF Downloads 37784 Development and Structural Characterization of a Snack Food with Added Type 4 Extruded Resistant Starch
Authors: Alberto A. Escobar Puentes, G. Adriana García, Luis F. Cuevas G., Alejandro P. Zepeda, Fernando B. Martínez, Susana A. Rincón
Abstract:
Snack foods are usually classified as ‘junk food’ because they have little nutritional value. However, given the growing demand in the third generation (3G) snack market, their low price, and their ease of preparation, they can be considered as carriers of compounds with nutritional value. Resistant starch (RS) is classified as a prebiotic fiber; it helps to control metabolic problems and has anti-colon-cancer properties. The active compound can be developed by chemical cross-linking of starch with phosphate salts to obtain a type 4 resistant starch (RS4). The chemical reaction can be achieved by extrusion, a process widely used to produce snack foods, since it is a versatile and low-cost procedure. Starch is the major ingredient in 3G snack manufacture, and the seeds of sorghum, the most drought-tolerant gluten-free cereal, contain high levels of starch (70%). For this reason, the aim of this research was to develop a 3G snack with RS4 from sorghum starch under previously determined optimal extrusion conditions and to carry out its sensory, chemical, and structural characterization. A sample (200 g) of sorghum starch was conditioned with 4% sodium trimetaphosphate/sodium tripolyphosphate (99:1) and adjusted to 28.5% moisture content. Then, the sample was processed in a single screw extruder equipped with a rectangular die. The inlet, transport, and output temperatures were 60°C, 134°C, and 70°C, respectively. The resulting pellets were expanded in a microwave oven. The expansion index (EI), penetration force (PF), and sensory properties were evaluated in the expanded pellets. The pellets were milled to obtain flour, and the RS content, degree of substitution (DS), and percentage of phosphorus (%P) were measured. Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction, differential scanning calorimetry (DSC), and scanning electron microscopy (SEM) analyses were performed in order to determine structural changes after the process. The results for the 3G snack were as follows: RS, 17.14 ± 0.29%; EI, 5.66 ± 0.35; and PF, 5.73 ± 0.15 N. Phosphate groups were identified in the starch molecule by FTIR: DS, 0.024 ± 0.003 and %P, 0.35 ± 0.15 [values permitted for food additives (<4% P)]. In this work, an increase in the gelatinization temperature after the cross-linking of starch was detected; the loss of granular structure and the presence of vapor bubbles after expansion were observed by SEM; and a loss of crystallinity after the extrusion process was observed by X-ray diffraction. Finally, a 3G snack containing RS4 was obtained by extrusion technology. Sorghum starch proved suitable for 3G snack production.Keywords: extrusion, resistant starch, snack (3G), sorghum
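Two of the simpler quantities reported above can be illustrated with a short calculation: the expansion index of the microwave-expanded pellets and the check of phosphorus content against the food-additive limit. The diameters below are hypothetical, and EI is computed as the ratio of expanded to unexpanded pellet diameter, which is one common definition and not necessarily the exact one used in the study.

```python
# Illustrative quality checks: expansion index and phosphorus limit.
expanded_diam_mm = 17.0     # assumed measurement of an expanded pellet
pellet_diam_mm = 3.0        # assumed measurement before expansion
expansion_index = expanded_diam_mm / pellet_diam_mm
print(f"EI = {expansion_index:.2f} (reported: 5.66 +/- 0.35)")

phosphorus_pct = 0.35       # %P reported for the extruded RS4 flour
print("within food-additive limit" if phosphorus_pct < 4.0 else "exceeds limit")
```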
Procedia PDF Downloads 312783 The Effects of Lighting Environments on the Perception and Psychology of Consumers of Different Genders in a 3C Retail Store
Authors: Yu-Fong Lin
Abstract:
The main purpose of this study is to explore the impact of different lighting arrangements, which create different visual environments in a 3C retail store, on the perception, psychology, and shopping tendencies of consumers of different genders. In recent years, the ‘emotional shopping’ model has been widely accepted in the consumer market; in addition to the emotional meaning and value of a product, the in-store ‘shopping atmosphere’ has also been increasingly regarded as significant. Lighting serves as an important environmental stimulus that influences the atmosphere of a store. Altering the lighting can change the color, the shape, and the atmosphere of a space. A successful retail lighting design can not only attract consumers’ attention and generate their interest in various goods, but it can also affect consumers’ shopping approach, behavior, and desires. 3C electronic products have become mainstream in the current consumer market. Consumers of different genders may demonstrate different behaviors and preferences within a 3C store environment. This study tests the impact of combinations of lighting contrast and color temperature in a 3C retail store on the visual perception and psychological reactions of consumers of different genders. The research design employs an experimental method to collect data from subjects and then uses statistical analysis based on a 2 x 2 x 2 factorial design to identify the influences of the different lighting environments. This study utilizes virtual reality technology as the primary method by which to create four virtual store lighting environments. The four lighting conditions are as follows: high contrast/cool tone, high contrast/warm tone, low contrast/cool tone, and low contrast/warm tone. Differences in the virtual lighting and the environment are used to test subjects’ visual perceptions, emotional reactions, store satisfaction, approach-avoidance intentions, and spatial atmosphere preferences. The findings of our preliminary test indicate that female subjects have a higher pleasure response than male subjects in a 3C retail store. Based on these findings, the researchers modified the contents of the questionnaires and the virtual 3C retail environments with different lighting conditions in order to conduct the final experiment. The results will provide information about the effects of retail lighting on the psychological reactions of consumers of different genders in a 3C retail store. These results will enable practical guidelines to be established for retailers and interior designers on creating 3C retail store lighting and atmosphere.Keywords: 3C retail store, environmental stimuli, lighting, virtual reality
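The 2 x 2 x 2 factorial analysis mentioned above would typically be evaluated with a three-way ANOVA. The sketch below shows one way to set that up in Python with statsmodels; the data frame is randomly generated placeholder data, not the study's responses.

```python
# Three-way ANOVA sketch for a 2 x 2 x 2 design: contrast x colour tone x gender
# (placeholder data; factor levels mirror the four lighting conditions plus gender).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n_per_cell = 20
cells = [(c, t, g) for c in ("high", "low") for t in ("cool", "warm") for g in ("female", "male")]
df = pd.DataFrame(
    [(c, t, g, rng.normal(5.0, 1.0)) for c, t, g in cells for _ in range(n_per_cell)],
    columns=["contrast", "tone", "gender", "pleasure"],
)

model = ols("pleasure ~ C(contrast) * C(tone) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and all interaction terms
```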
Procedia PDF Downloads 391782 Envisioning The Future of Language Learning: Virtual Reality, Mobile Learning and Computer-Assisted Language Learning
Authors: Jasmin Cowin, Amany Alkhayat
Abstract:
This paper will concentrate on a comparative analysis of the advantages and limitations of using digital learning resources (DLRs). The DLRs covered will be Virtual Reality (VR), Mobile Learning (M-learning), and Computer-Assisted Language Learning (CALL), together with its subset, Mobile-Assisted Language Learning (MALL), in language education. In addition, best practices for language teaching and the application of established language teaching methodologies such as Communicative Language Teaching (CLT), the audio-lingual method, or community language learning will be explored. Education has changed dramatically since the outbreak of the pandemic. Traditional face-to-face education was disrupted on a global scale. The rise of distance learning brought new digital tools to the forefront, especially web conferencing tools, digital storytelling apps, test authoring tools, and VR platforms. Language educators raced to vet, learn, and implement multiple technology resources suited for language acquisition. Yet questions remain on how to harness new technologies, digital tools, and their ubiquitous availability while applying established language learning methods and methodologies paired with best teaching practices. In M-learning, language learners employ portable computing devices such as smartphones or tablets. CALL is a language teaching approach that uses computers and other technologies to present, reinforce, and assess language material, or to create environments where teachers and learners can meaningfully interact. In VR, a computer-generated simulation enables learner interaction with a 3D environment via a screen, smartphone, or head-mounted display. Research supports that VR for language learning is effective in terms of exploration, communication, engagement, and motivation. Students are able to engage through role-play activities, interact with 3D objects, and take part in activities such as field trips. VR lends itself to group language exercises in the classroom with target language practice in an immersive, virtual environment. Students, teachers, schools, language institutes, and institutions benefit from specialized support to help them acquire second language proficiency and content knowledge that builds on their cultural and linguistic assets. Through the purposeful application of different language methodologies and teaching approaches, language learners can not only make cultural and linguistic connections in DLRs but also practice grammar drills, play memory games, or flourish in authentic settings.Keywords: language teaching methodologies, computer-assisted language learning, mobile learning, virtual reality
Procedia PDF Downloads 241781 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting
Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas
Abstract:
The paper presents results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the applying industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work each year, which means that every five years almost the entire product portfolio is replaced in their low series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. The first group contains product information such as the product name and the number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner’s estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time, the deviation of the prognosis was also improved by 50%. Furthermore, an investigation of the mentioned parameter groups with respect to the manufacturing order was also carried out. The paper also highlights the experiences of introducing the method into manufacturing and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events take place, and the related data are collected.Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation
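The estimation model described above can be prototyped as a small neural-network regressor over the four parameter groups. The sketch below is a minimal illustration in Python with scikit-learn; the feature names, encodings, and synthetic training data are assumptions standing in for the MES and CAD data.

```python
# Minimal neural-network setup-period estimator (synthetic data, assumed features).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "material_type":   rng.choice(["PTFE", "POM", "PEEK"], n),
    "machine_type":    rng.choice(["cutter_A", "cutter_B"], n),
    "work_shift":      rng.choice(["day", "night"], n),
    "n_items":         rng.integers(1, 200, n),
    "tightest_tol_mm": rng.uniform(0.01, 0.5, n),
    "n_tools":         rng.integers(1, 8, n),
})
# Synthetic target: setup minutes loosely driven by tool count and tolerance.
y = 15 + 8 * df["n_tools"] + 40 * (0.5 - df["tightest_tol_mm"]) + rng.normal(0, 5, n)

categorical = ["material_type", "machine_type", "work_shift"]
numeric = ["n_items", "tightest_tol_mm", "n_tools"]
model = Pipeline([
    ("prep", ColumnTransformer([("cat", OneHotEncoder(), categorical),
                                ("num", StandardScaler(), numeric)])),
    ("mlp", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
])
model.fit(df, y)
print(model.predict(df.head(3)))   # estimated setup periods (minutes) for three orders
```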
Procedia PDF Downloads 245