Search results for: vision sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2445


285 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance.
InvSim-based fault detection is then employed, similarly yielding input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they can allow faults to be more clearly discriminated from environmental disturbances.
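The InvSim residual idea described above can be sketched in a few lines. The following is a minimal illustration assuming a hypothetical first-order actuator model, not the authors' rover dynamics; the fault size, time step, and trajectories are invented for demonstration:

```python
# Sketch of InvSim-based input-fault detection on a hypothetical
# first-order actuator model v[k+1] = v[k] + dt*(u[k] - v[k]).
# All model details are illustrative, not the paper's rover model.

DT = 0.01

def simulate(u, dt=DT):
    """Forward-simulate the output v from the input sequence u."""
    v = [0.0]
    for uk in u:
        v.append(v[-1] + dt * (uk - v[-1]))
    return v

def inverse_simulate(v, dt=DT):
    """Recover the input that must have produced the measured output v."""
    return [(v[k + 1] - v[k]) / dt + v[k] for k in range(len(v) - 1)]

# Commanded input; an additive step fault of +0.5 hits the actuator at k=100.
commanded = [1.0] * 200
actual = [u + (0.5 if k >= 100 else 0.0) for k, u in enumerate(commanded)]

v = simulate(actual)           # measured system output
u_hat = inverse_simulate(v)    # InvSim estimate of the applied input
residual = [ui - ci for ui, ci in zip(u_hat, commanded)]

# The input residual stays near 0 before the fault and steps to ~0.5
# after it, exposing both the fault's onset time and its magnitude.
```

Because the residual is formed on the input side, the step fault appears directly in one signal rather than being smeared across multiple output residuals.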

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 308
284 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (e.g., car engines or thermal power stations). We can see the same process for industrial plants as well. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking into account NO₂) in urban areas. To evaluate if there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole to install an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to have an input for both week and weekend days; in this way, it will be possible to see how the situation changes during the week.
The novelty is that the data will be compared to check if there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to apply in the analysed area, because it will make it possible to solve both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solution adopted for the realization of the sensors, the data collection, noise and pollution mapping, and analysis.
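The comparison step described above can be sketched as follows. This is an illustrative outline, not the authors' pipeline: the sensor readings are invented, and the percentage conversion simply rescales each series to its observed range before computing a Pearson correlation:

```python
# Illustrative sketch of the comparison step (not the authors' pipeline):
# rescale noise and NO2 readings to a common 0-100% scale, then check
# their correlation. All sensor values below are invented.

def to_percent(values):
    """Rescale raw readings to percent of the observed min-max range."""
    lo, hi = min(values), max(values)
    return [100.0 * (v - lo) / (hi - lo) for v in values]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

noise_db = [55, 62, 70, 68, 58, 52, 66]   # hypothetical dB(A) readings
no2_ppb = [18, 25, 34, 31, 21, 16, 28]    # hypothetical NO2 readings

noise_pct = to_percent(noise_db)
no2_pct = to_percent(no2_ppb)
r = pearson(noise_pct, no2_pct)
```

Note that Pearson's r is invariant under linear rescaling, so the percent conversion matters for the common mapping scale and visual comparison rather than for the correlation value itself.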

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 212
283 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of wave pressure created by increasing wind speed, obtained with the 'Quattuor 45' in the investigated area. The study begins with theoretical considerations and numerous up-to-date investigations related to waves approaching the coast. The detailed schematic view of the instrument is enriched with pictures in ground plan and side view. Horizontal stability of the instrument is achieved by mooring which relies on two concrete blocks. Vertical wave peak monitoring is ensured by one float above the instrument. The synthesis of horizontal stability and vertical wave peak monitoring allows the creation of a representative database for wave pressure measurement. The instrument 'Quattuor 45' is named according to the way the database is received. Namely, the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and a wind speed sensor (anemometer). The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card each second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, and after that detailed processing is carried out in MS Excel. The result of the wave pressure measurement is expressed in the unit of measurement kN/m². This paper also suggests a graphical presentation of the results by a multi-line graph. The wave pressure is presented on the left vertical axis, while the wind speed is shown on the right vertical axis. The time of measurement is displayed on the horizontal axis. The paper proposes an algorithm for wind speed measurements showing the results for two characteristic winds in the Adriatic Sea, called 'Bura' and 'Jugo'.
The first of them is the northern wind that reaches high speeds, causing low and extremely steep waves, where the pressure of the wave is relatively weak. On the other hand, the southern wind 'Jugo' has a lower speed than the northern wind, but due to its constant duration and constant speed maintenance, it causes extremely long and high waves that cause extremely high wave pressure.
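The conversion from load-cell readings to the stated unit kN/m² can be sketched as below. The sensing-plate area and the sample forces are assumed values for illustration; the actual 'Quattuor 45' geometry is not given in the abstract:

```python
# Hedged sketch of the post-processing step: turning the four load-cell
# forces into wave pressure in kN/m^2. The sensing-plate area and the
# sample readings are assumed values, not the instrument's real geometry.

PLATE_AREA_M2 = 0.45 * 0.45   # hypothetical pressure-sensing plate area

def wave_pressure_kn_m2(load_cell_newtons):
    """Sum the load-cell forces (in N) and express them in kN/m^2."""
    total_force_kn = sum(load_cell_newtons) / 1000.0
    return total_force_kn / PLATE_AREA_M2

# One sample instant: one reading (N) from each of the four load cells.
sample = [310.0, 295.0, 305.0, 290.0]
pressure = wave_pressure_kn_m2(sample)
```

With two readings per cell per second, the same conversion would simply be applied to each stored sample before plotting against the anemometer series.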

Keywords: instrument, measuring unit, wave pressure metering, wind speed measurement

Procedia PDF Downloads 197
282 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy

Authors: May Fadheel Estephan, Richard Perks

Abstract:

Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres in suspension concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters. The spectra were collected using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres. The software program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. 
The results of the study showed that ELSS could be used to differentiate between different sizes of cells, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample. This information can be employed to identify cancer cells and determine the disease's stage. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
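The fitting step described above, matching a measured spectrum against theoretical models to estimate sphere size, can be sketched with a toy least-squares match. The "model spectra" here are invented stand-ins, not actual Mie-theory curves:

```python
# Toy sketch of the analysis step: matching a measured ELSS spectrum to
# the closest theoretical spectrum by least squares to estimate sphere
# diameter. The model spectra below are invented stand-ins, not Mie curves.

def sse(a, b):
    """Sum of squared errors between two sampled spectra."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical library of theoretical spectra keyed by diameter (um),
# each sampled at five wavelengths.
model_spectra = {
    2.0:   [1.00, 0.80, 0.55, 0.40, 0.30],
    0.8:   [0.90, 0.85, 0.75, 0.60, 0.50],
    0.413: [0.70, 0.68, 0.66, 0.64, 0.62],
}

def estimate_diameter(measured):
    """Return the library diameter whose model spectrum fits best."""
    return min(model_spectra, key=lambda d: sse(measured, model_spectra[d]))

measured = [0.92, 0.83, 0.74, 0.61, 0.52]   # noisy 0.8-um-like spectrum
best = estimate_diameter(measured)
```

A real implementation would fit a continuous scattering model rather than pick from a discrete library, but the residual-minimization principle is the same.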

Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics

Procedia PDF Downloads 80
281 Internet of Things in Higher Education: Implications for Students with Disabilities

Authors: Scott Hollier, Ruchi Permvattana

Abstract:

The purpose of this abstract is to share the findings of a recently completed disability-related Internet of Things (IoT) project undertaken at Curtin University in Australia. The project focused on identifying how IoT could support people with disabilities with their educational outcomes. To achieve this, the research consisted of an analysis of current literature and interviews conducted with students with vision, hearing, mobility and print disabilities. While the research acknowledged that the ability to collect data with IoT is now a fairly common occurrence, its benefits and applicability still need to be grounded back into real-world applications. Furthermore, it is important to consider if there are sections of our society that may benefit from these developments and if those benefits are being fully realised in a rush by large companies to achieve IoT dominance for their particular product or digital ecosystem. In this context, it is important to consider a group which, to our knowledge, has had little specific mainstream focus in the IoT area: people with disabilities. For people with disabilities, the ability for every device to interact with us and with each other has the potential to yield significant benefits. In terms of engagement, the arrival of smart appliances is already offering benefits such as the ability for a person in a wheelchair to give verbal commands to an IoT-enabled washing machine if the buttons are out of reach, or for a blind person to receive a notification on a smartphone when dinner has finished cooking in an IoT-enabled microwave. With clear benefits of IoT being identified for people with disabilities, it is important to also identify what implications there are for education.
With higher education being a critical pathway for many people with disabilities in finding employment, the question as to whether such technologies can support the educational outcomes of people with disabilities was what ultimately led to this research project. This research will discuss several significant findings that have emerged from the research in relation to how consumer-based IoT can be used in the classroom to support the learning needs of students with disabilities, how industrial-based IoT sensors and actuators can be used to monitor and improve the real-time learning outcomes for the delivery of lectures and student engagement, and a proposed method for students to gain more control over their learning environment. The findings shared in this presentation are likely to have significant implications for the use of IoT in the classroom through the implementation of affordable and accessible IoT solutions and will provide guidance as to how policies can be developed as the implications of both benefits and risks continue to be considered by educators.

Keywords: disability, higher education, internet of things, students

Procedia PDF Downloads 119
280 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the surface of Mars has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images for generating a more accurate and trustworthy representation of the surface of Mars. The MOLA data was interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. In this project, we employed three different 3D to 2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters.
Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters. Further increasing the number of GCPs did not improve the results significantly. Using the 3D to 2D transformation parameters provided two to three meters accuracy. The best results were reported using the DLT transformation model; however, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model as it provides the required accuracy for ASPRS large-scale mapping standards. However, well-distributed sets of GCPs are key to providing such accuracy. The model is simple to apply and does not need substantial computations.
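One of the tested models, the parallel-projection (3D affine) transform, can be sketched as a least-squares fit from GCPs. The GCP coordinates below are invented and generated from a known transform as a sanity check; the DLT model would add a projective denominator to the same machinery:

```python
# Hedged sketch of one tested model: the parallel-projection (3D affine)
# transform x = a0 + a1*X + a2*Y + a3*Z (and likewise for y), fitted to
# GCPs by least squares. All coordinates below are invented.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_affine(gcps):
    """gcps: list of ((X, Y, Z), (x, y)); returns (x-coeffs, y-coeffs)."""
    rows = [[1.0, X, Y, Z] for (X, Y, Z), _ in gcps]
    # Normal equations A^T A p = A^T b, solved separately per image axis.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    out = []
    for axis in (0, 1):
        atb = [sum(r[i] * img[axis] for r, (_, img) in zip(rows, gcps))
               for i in range(4)]
        out.append(solve(ata, atb))
    return out

# Invented GCPs generated from a known affine transform as a sanity check.
true_x = [10.0, 2.0, 0.5, 0.1]
true_y = [-5.0, 0.3, 1.8, 0.2]
ground = [(1.0, 2.0, 0.5), (3.0, 1.0, 2.0), (0.0, 4.0, 1.0),
          (2.0, 2.0, 3.0), (5.0, 0.0, 1.0)]
gcps = [((X, Y, Z),
         (true_x[0] + true_x[1] * X + true_x[2] * Y + true_x[3] * Z,
          true_y[0] + true_y[1] * X + true_y[2] * Y + true_y[3] * Z))
        for X, Y, Z in ground]
cx, cy = fit_affine(gcps)
```

With noise-free synthetic GCPs, the least-squares fit recovers the generating coefficients exactly; with real digitized points, the residuals at the check points would yield the RMSE figures reported above.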

Keywords: Mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 57
279 Integrated Performance Management System: A Conceptual Design for PT. XYZ

Authors: Henrie Yunianto, Dermawan Wibisono

Abstract:

PT. XYZ is a family business (private company) in Indonesia that provides educational programs and consultation services. Since its establishment in 2011, the company has run without any strategic management system implemented, though it has survived until now. The management of PT. XYZ sees that the business opportunity for such a product is huge: even though the targeted market is very specific (niche), the volume is large (due to the large population of Indonesia) and the number of competitors is low (for now). It can be said that the product life cycle is between the 'introduction' and 'growth' stages. It is observed that nowadays the new entrants (competitors) are increasing; thus, PT. XYZ considers reacting to the intense business rivalry by conducting the business in an appropriate manner. A performance management system is important to implement to support business sustainability and growth. The framework chosen is the Integrated Performance Management System (IPMS). The IPMS framework has the advantages of simplicity and linkage between its business variables and indicators, where the company can see the connections between all factors measured. The IPMS framework consists of three perspectives: (1) Business Result, (2) Internal Processes, (3) Resource Availability. Variables and indicators were examined through deep analysis of the business's external and internal environments, Strength-Weakness-Opportunity-Threat (SWOT) analysis, and Porter's five forces analysis. Analytical Hierarchy Process (AHP) analysis was then used to quantify the weight of each variable/indicator. AHP is needed since, in this study of PT. XYZ, data on existing performance indicators were not available. Later, when the IPMS is implemented, the real measured data can be examined to determine the weight factor of each indicator using correlation analysis (or other methods). In this study of IPMS design for PT. XYZ, the analysis shows that with current company goals, along with the AHP methodology, the critical indicators for each perspective are: (1) Business Result: customer satisfaction and employee satisfaction; (2) Internal Processes: marketing performance, supplier quality, production quality, and continuous improvement; (3) Resource Availability: leadership, company culture and values, personal competences, and productivity. Companies and organizations require a performance management system to help them achieve their vision and mission. Company strategy will be effectively defined and addressed by using a performance management system. The Integrated Performance Management System (IPMS) framework and AHP analysis help in quantifying the factors which influence the expected business output.
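The AHP weighting step can be sketched as follows. The pairwise judgments below are invented for illustration, and the geometric-mean method is used as a common approximation of the principal eigenvector:

```python
# Illustrative AHP weighting step (the judgments are invented, not the
# paper's): derive priority weights for the three IPMS perspectives from
# a pairwise comparison matrix via the geometric-mean approximation of
# the principal eigenvector.
from math import prod

def ahp_weights(matrix):
    """Normalized geometric means of the rows of a pairwise matrix."""
    n = len(matrix)
    gmeans = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Rows/columns: Business Result, Internal Processes, Resource Availability.
# Entry [i][j] > 1 means perspective i is judged more important than j.
pairwise = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
weights = ahp_weights(pairwise)
```

A full AHP study would also compute the consistency ratio of the judgments, and, as the abstract notes, the weights could later be replaced by correlation-derived factors once real indicator data exist.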

Keywords: analytical hierarchy process, business strategy, differentiation strategy, integrated performance management system

Procedia PDF Downloads 307
278 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms characterizing the variability of human heart structures (chest geometry, sizes, and positions). Moreover, the ECG has a real-time vitality characteristic that signifies live signs, which ensures that a legitimate individual is identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instances of time. These variations may occur due to muscle flexure, changes of mental or emotional state, and changes of sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, which is based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to produce the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods.
Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects, taken from the PTB database of the National Metrology Institute of Germany (Physikalisch-Technische Bundesanstalt), showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
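The local roughness feature described above resembles a one-dimensional local binary pattern. A minimal sketch, with an invented toy signal and an assumed window radius of two samples on each side:

```python
# Minimal sketch of the "local roughness" feature: a one-dimensional
# local-binary-pattern code at each sample, accumulated into a histogram.
# The window radius and the toy heartbeat samples are assumptions.

def lbp_codes(signal, radius=2):
    """Binary code per sample: neighbours >= centre set their weight bit."""
    codes = []
    for i in range(radius, len(signal) - radius):
        centre = signal[i]
        neighbours = signal[i - radius:i] + signal[i + 1:i + 1 + radius]
        code = 0
        for bit, value in enumerate(neighbours):
            if value >= centre:
                code |= 1 << bit
        codes.append(code)
    return codes

def roughness_histogram(signal, radius=2):
    """Normalized histogram of LBP codes: the subject's feature vector."""
    codes = lbp_codes(signal, radius)
    bins = [0] * (1 << (2 * radius))
    for c in codes:
        bins[c] += 1
    return [b / len(codes) for b in bins]

ecg = [0.0, 0.1, 0.05, 0.9, 0.2, 0.0, -0.1, 0.05, 0.1, 0.0]  # toy beat
hist = roughness_histogram(ecg)
```

Because the histogram is built from every sample rather than from detected fiducial points, no QRS detection is needed, which is the advantage the abstract claims for the feature.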

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 404
277 Wind Energy Harvester Based on Triboelectricity: Large-Scale Energy Nanogenerator

Authors: Aravind Ravichandran, Marc Ramuz, Sylvain Blayac

Abstract:

With the rapid development of wearable electronics and sensor networks, batteries cannot meet the sustainable energy requirement due to their limited lifetime, size and degradation. Ambient energies such as wind have been considered an attractive energy source due to their abundance, ubiquity, and feasibility in nature. With miniaturization leading to high power and robustness, the triboelectric nanogenerator (TENG) has been conceived as a promising technology for powering small electronics by harvesting mechanical energy. TENG integration in large-scale applications is still unexplored despite its attractive properties. In this work, a state-of-the-art TENG design based on a wind venturi system is demonstrated for use in any complex environment. When wind is introduced into the air gap of the homemade TENG venturi system, a thin flexible polymer repeatedly contacts with and separates from the electrodes. This device structure makes the TENG suitable for large-scale harvesting without massive volume. Multiple stacking not only amplifies the output power but also enables multi-directional wind utilization. The system converts ambient mechanical energy to electricity with a 400 V peak voltage, charging a 1000 mF supercapacitor very rapidly. Its future implementation in an array of applications aids environmentally friendly clean energy production on a large scale, and the proposed design is validated with exhaustive material testing. The relation between the interfacial micro- and nanostructures and the electrical performance enhancement is comparatively studied. Nanostructures are more beneficial for the effective contact area, but they are not suitable for the anti-adhesion property due to the smaller restoring force. Considering these issues, nano-patterning is proposed for further enhancement of the effective contact area.
Considering the merits of simple fabrication, outstanding performance, robust characteristics, and low-cost technology, we believe that the TENG can open up great opportunities not only for powering small electronics but can also contribute to large-scale energy harvesting through engineering design, being complementary to solar energy in remote areas.

Keywords: triboelectric nanogenerator, wind energy, vortex design, large scale energy

Procedia PDF Downloads 213
276 Urban River as Living Infrastructure: Tidal Flooding and Sea Level Rise in a Working Waterway in Hampton Roads, Virginia

Authors: William Luke Hamel

Abstract:

Existing conceptions of urban flooding caused by tidal fluctuations and sea-level rise have been inadequately conceptualized by metrics of resilience and methods of flow modeling. While a great deal of research has been devoted to the effects of urbanization on pluvial flooding, the kind of tidal flooding experienced by locations like Hampton Roads, Virginia, has not been adequately conceptualized as being a result of human factors such as urbanization and gray infrastructure. Resilience from sea level rise and its associated flooding has been pioneered in the region with the 2015 Norfolk Resilience Plan from 100 Resilient Cities as well as the 2016 Norfolk Vision 2100 plan, which envisions different patterns of land use for the city. Urban resilience still conceptualizes the city as having the ability to maintain an equilibrium in the face of disruptions. This economic and social equilibrium relies on the Elizabeth River, narrowly conceptualized. Intentionally or accidentally, the river was made to be a piece of infrastructure. Its development was meant to serve the docks, shipyards, naval yards, and port infrastructure that gives the region so much of its economic life. Inasmuch as it functions to permit the movement of cargo; the raising and lowering of ships to be repaired, commissioned, or decommissioned; or the provisioning of military vessels, the river as infrastructure is functioning properly. The idea that the infrastructure is malfunctioning when high tides and sea-level rise create flooding is predicated on the idea that the infrastructure is truly a human creation and can be controlled. The natural flooding cycles of an urban river, combined with the action of climate change and sea-level rise, are only abnormal so much as they encroach on the development that first encroached on the river. 
The urban political ecology of water provides the ability to view the river as an infrastructural extension of urban networks while also calling for its emancipation from stationarity and human control. Understanding the river and city as a hydrosocial territory or as a socio-natural system liberates both actors from the duality of the natural and the social while repositioning river flooding as a normal part of coexistence on a floodplain. This paper argues for the adoption of an urban political ecology lens in the analysis and governance of urban rivers like the Elizabeth River as a departure from the equilibrium-seeking and stability metrics of urban resilience.

Keywords: urban flooding, political ecology, Elizabeth River, Hampton Roads

Procedia PDF Downloads 168
275 Muhammad's Vision of Interaction with Supernatural Beings According to the Hadith in Comparison to Parallels of Other Cultures

Authors: Vladimir A. Rozov

Abstract:

Comparative studies of religion and ritual could contribute to a better understanding of human cultural universals. Belief in supernatural beings seems to be a common feature of religion. A significant part of the Islamic concepts that concern supernatural beings is based on the tradition of the Hadiths. They reflect, among other things, the Prophet Muhammad's ideas about a proper way to interact with supernatural beings. These ideas to a large extent follow from the pre-Islamic religious experience of the Arabs and were reflected in a number of ritual actions. Some of those beliefs concern a particular function of clothing. For example, it is known that Muhammad was wrapped in clothes during the revelation of the Quran. The same thing was performed by pre-Islamic soothsayers (kāhin) and by rival opponents of Muhammad during their trances. Muhammad also turned his clothes inside out during religious rituals (prayer for rain). Besides these specific uses of clothing, which demonstrate the external similarity of Muhammad to the soothsayers and other people who claimed a connection with supernatural forces, the pre-Islamic soothsayers had another characteristic feature: physical flaws. In this regard, it is worth noting Muhammad's so-called 'Seal of Prophecy' (h̠ātam an-nubūwwa), a protrusion or outgrowth on his back. Another interesting feature of Muhammad's behavior was his attitude to eating onion and garlic. In particular, the Prophet did not eat them and forbade people who had eaten these vegetables from entering mosques until the smell ceased to be felt. The reason for this ban is the belief that the smell of these products prevents communication with otherworldly forces. The materials of the Hadith also suggest that Muhammad shared faith in the apotropaic properties of water. Both of these ideas have parallels in other cultures of the world.
Muhammad's actions, which were supposed to provide interaction with supernatural beings, are not accidental. They have parallels in the culture of pre-Islamic Arabia as well as in many past and present world cultures. The latter fact can be explained by the similarity of universal human beliefs about supernatural beings and how they should be interacted with. Later, a number of similar ideas shared by the Prophet Muhammad were legitimized by the Islamic tradition and formed the basis of popular Islamic rituals. Thus, these parallels emphasize the commonality of human notions of supernatural beings and also demonstrate the significance of the pre-Islamic cultural context in analyzing the genesis of Islamic religious beliefs.

Keywords: hadith, Prophet Muhammad, ritual, supernatural beings

Procedia PDF Downloads 389
274 Ecological Crisis: A Buddhist Approach

Authors: Jaharlal Debbarma

Abstract:

The ecological crisis has become a threat to the earth's well-being. Man's ambitious desire for wealth, pleasure, fame, longevity and happiness has extracted natural resources so vastly that the earth is unable to sustain a healthy life. Man's greed for wealth and power has caused the setting up of vast factories, which has further created the problems of air, water and noise pollution, adversely affecting both fauna and flora. It is no secret that man uses his inherent powers of reason, intelligence and creativity to change his environment to his advantage. But man is not aware that the moral force he himself creates brings about corresponding changes in his environment, to his weal or woe, whether he likes it or not. As we face global warming, and as nature's gifts such as air and water have been so drastically polluted with disastrous consequences, man seeks ways and means to overcome these pollution problems, since his health and the sustainability of life have been threatened; that is where man begins to question moral ethics and values. This is where Buddhist philosophy, deeply emphasized, gives us hope for overcoming all these problems, as the Buddha himself emphasized the eradication of human suffering, and Buddhism is the strongest form of humanism we have. It helps us learn to live with responsibility, compassion, and loving kindness. It teaches us to be mindful in our actions and thoughts, as the environment unites every human being. If we fail to save it, we will perish. If we can rise to meet the need of all to which ecology binds us - humans, other species, everything - we will survive together. My paper will look into the theory of Dependent Origination (Pratītyasamutpāda), the Buddhist understanding of suffering (collective suffering), and non-violence (Ahimsa), and an effort will be made to provide a new vision of the Buddhist ecological perspective. The above Buddhist philosophy will be applied to the ethical values and belief systems of modern society.
The challenge will be substantially to transform modern individualistic and consumeristic values. Stress will be placed on the interconnectedness of nature and the relation between human and planetary sustainability. In this way, the environmental crisis will be referred to as a “spiritual crisis”, as A. Gore (1992) has pointed out. The paper will also give importance to global consciousness, as well as to self-actualization and self-fulfillment. In the words of Melvin McLeod, “Only when we combine environmentalism with spiritual practice, will we find the tools to make the profound personal transformations needed to address the planetary crisis.”

Keywords: dependent arising, collective ecological suffering, remediation, Buddhist approach

Procedia PDF Downloads 266
273 Sound Selection for Gesture Sonification and Manipulation of Virtual Objects

Authors: Benjamin Bressolette, Sébastien Denjean, Vincent Roussarie, Mitsuko Aramaki, Sølvi Ystad, Richard Kronland-Martinet

Abstract:

New sensors and technologies – such as microphones, touchscreens or infrared sensors – are currently making their appearance in the automotive sector, introducing new kinds of Human-Machine Interfaces (HMIs). The interactions with such tools might be cognitively expensive, and thus unsuitable for driving tasks. It could, for instance, be dangerous to use touchscreens with visual feedback while driving, as this distracts the driver’s visual attention away from the road. Furthermore, new technologies in car cockpits modify the interactions of users with the central system. In particular, touchscreens are preferred to arrays of buttons for space and design purposes. However, the buttons’ tactile feedback is no longer available to the driver, which makes such interfaces more difficult to manipulate while driving. Gestures combined with auditory feedback might therefore constitute an interesting alternative for interacting with the HMI. Indeed, gestures can be performed without vision, which means that the driver’s visual attention can be totally dedicated to the driving task. The auditory feedback can inform the driver both about the task performed on the interface and about the performed gesture, which might constitute a possible solution to the lack of tactile information. As audition is a relatively unused sense in automotive contexts, gesture sonification can contribute to reducing the cognitive load thanks to the proposed multisensory exploitation. Our approach consists of using a virtual object (VO) to sonify the consequences of the gesture rather than the gesture itself. This approach is motivated by an ecological point of view: gestures do not make sound, but their consequences do. In this experiment, the aim was to identify efficient sound strategies to transmit dynamic information about VOs to users through sound. The swipe gesture was chosen for this purpose, as it is commonly used in current and new interfaces. 
We chose two VO parameters to sonify: the hand-VO distance and the VO velocity. Two kinds of sound parameters can be chosen to sonify the VO behavior: spectral or temporal parameters. Pitch and brightness were tested as spectral parameters, and amplitude modulation as a temporal parameter. Performances showed a positive effect of sound compared to a no-sound situation, revealing the usefulness of sounds in accomplishing the task.
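As a concrete illustration of such a mapping, the sketch below links the two sonified VO parameters to candidate sound parameters: hand-VO distance to pitch (spectral) and VO velocity to an amplitude-modulation rate (temporal). The specific ranges and the linear mappings are assumptions chosen for illustration, not the strategies evaluated in the study.

```python
def sonify(distance_m, velocity_ms,
           f_min=220.0, f_max=880.0, d_max=0.5, am_max_hz=8.0, v_max=2.0):
    """Map virtual-object state to sound parameters (illustrative mapping,
    not the authors' actual sonification strategy)."""
    # Clamp inputs to the assumed working ranges.
    d = min(max(distance_m, 0.0), d_max)
    v = min(max(velocity_ms, 0.0), v_max)
    # Hand-VO distance -> pitch: the closer the hand, the higher the pitch.
    pitch_hz = f_max - (f_max - f_min) * (d / d_max)
    # VO velocity -> amplitude-modulation rate: faster object, faster tremolo.
    am_rate_hz = am_max_hz * (v / v_max)
    return pitch_hz, am_rate_hz

pitch, am = sonify(distance_m=0.1, velocity_ms=1.0)
```

In a real interface these two parameters would drive a synthesizer in real time as the swipe gesture moves the virtual object.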

Keywords: auditory feedback, gesture sonification, sound perception, virtual object

Procedia PDF Downloads 302
272 Fundamental Study on Reconstruction of 3D Image Using Camera and Ultrasound

Authors: Takaaki Miyabe, Hideharu Takahashi, Hiroshige Kikura

Abstract:

The Government of Japan and Tokyo Electric Power Company Holdings, Incorporated (TEPCO) are struggling with the decommissioning of the Fukushima Daiichi Nuclear Power Plants, especially fuel debris retrieval. In fuel debris retrieval, information on the amount, location, characteristics, and distribution of the fuel debris is important. Recently, a survey was conducted using a robot with a small camera. Progress reports on the remote robot and camera research speculate that fuel debris is present both at the bottom of the Primary Containment Vessel (PCV) and inside the Reactor Pressure Vessel (RPV). The investigation found a 'tie plate' at the bottom of the containment; this is a handle on the fuel assembly. It is therefore assumed that a hole large enough to allow the tie plate to fall has opened at the bottom of the reactor pressure vessel, so exploring the existence of holes leading into the RPV is also an issue. Investigations of the lower part of the RPV are currently underway, but no investigations have been made inside or above the PCV. Therefore, a survey must be conducted for future fuel debris retrieval. The environment inside the RPV cannot be imagined due to the effect of the melted fuel, so a way to accurately check the internal situation is needed. What we propose here is the adaptation of a technique called 'Structure from Motion' that reconstructs a 3D image from multiple photos taken by a single camera. The plan is to mount a monocular camera on the tip of a long-arm robot, extend it to the upper part of the PCV, and take video. We are currently building a long-arm robot for use in high-radiation environments. However, the environment above the pressure vessel is not known exactly. Fog may also be generated by the cooling water of the fuel debris, and the radiation level in the environment may be high. 
Since a camera alone cannot provide sufficient sensing in these environments, we further propose using ultrasonic measurement technology in addition to cameras. Ultrasonic sensors are resistant to environmental changes such as fog and to environments with a high radiation dose, so these systems can be used for a long time. The purpose is to develop a system adapted to the inside of the containment vessel by combining a camera and ultrasound. Therefore, in this research, we performed a basic experiment on 3D image reconstruction using a camera and ultrasound. In this report, we identify the favorable and unfavorable conditions for each sensing modality and propose a reconstruction and detection method. The results revealed the strengths and weaknesses of each approach.
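Structure from Motion builds on multi-view geometry primitives such as two-view triangulation. The sketch below shows the standard linear (DLT) triangulation step with synthetic cameras and a known 3D point; it is a generic textbook element of an SfM pipeline, not the authors' implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: normalized image coords (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]            # homogeneous -> Euclidean

# Two synthetic cameras: identity pose and a 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 3.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]   # project into view 1
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]   # project into view 2
X_est = triangulate(P1, P2, x1, x2)
```

A full SfM pipeline additionally estimates the camera poses themselves from feature matches across many frames; fog and radiation noise degrade those matches, which is where the proposed ultrasonic complement comes in.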

Keywords: camera, image processing, reconstruction, ultrasound

Procedia PDF Downloads 104
271 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)

Authors: Ali Pourkazemi

Abstract:

The transient radar method (TRM) is one of the non-destructive methods introduced by the authors a few years ago. TRM can be classified as a wave-based non-destructive testing (NDT) method applicable over a wide frequency range; it nevertheless operates on a narrow band, at a carrier ranging from a few GHz to a few THz depending on the application. As a time-of-flight and real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately, but also blindly, meaning that it requires no prior knowledge of the sample under test. For multi-layer structures, TRM is not only able to detect changes related to any parameter within the multi-layer structure but can also measure the electromagnetic properties and thickness of each layer individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties as well as the thickness of individual building insulation materials - as single-layer structures - are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density is investigated. The samples studied measure 30 cm x 50 cm, with thicknesses varying from a few millimeters to 6 centimeters. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since TRM is a narrow-band system offering high-speed analysis, free-space operation, and real-time sensing, it has a wide range of potential applications, e.g., in the construction industry, rubber industry, piping industry, wind energy industry, automotive industry, biotechnology, food industry, pharmaceuticals, etc. Detection of metallic and plastic pipes, wires, etc., through or behind walls is a specific application for the construction industry.
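At its core, a time-of-flight measurement of a dielectric layer relates the delay between the front- and back-surface reflections to the layer thickness and relative permittivity. The snippet below illustrates only this textbook relation; the blind TRM algorithm itself extracts permittivity and thickness jointly and is not reproduced here.

```python
# Illustrative time-of-flight relation behind radar-based thickness
# measurement; textbook physics, not the authors' blind TRM algorithm.
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness(delta_t_s, eps_r):
    """Thickness of a dielectric layer from the round-trip delay between
    its front- and back-surface reflections."""
    v = C0 / eps_r ** 0.5          # wave velocity inside the layer
    return v * delta_t_s / 2.0     # divide by 2: the pulse travels there and back

# Example: a low-permittivity insulation layer (eps_r ~ 1.1, assumed value)
# producing a 0.21 ns inter-echo delay corresponds to roughly 30 mm.
d = layer_thickness(0.21e-9, 1.1)
```

Resolving echoes this close together is what demands the sub-nanosecond (hence GHz-bandwidth) hardware described in the abstract.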

Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies

Procedia PDF Downloads 69
270 Virtual Metrology for Copper Clad Laminate Manufacturing

Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho

Abstract:

In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict properties of a wafer based on machine parameters and sensor data from the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting process quality, which allows improving the process in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose applying VM to copper clad laminate (CCL) manufacturing. CCL is a core element of the printed circuit boards (PCBs) used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, applies resin to glass cloth, heats it in a drying oven, and produces the prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy costs in time and money, which makes the process a good candidate for VM application. We developed prediction models for the three quality factors T/W, M/V, and G/T, respectively, using process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with sufficiently high accuracy. They also provided information on “important” predictor variables, some of which the process engineers were already aware of and the rest of which they were not. The engineers were excited to find the new insights the models revealed and set out to analyze them further to derive process control implications. 
T/W, however, did not turn out to be predictable with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; thus, an effort must be made to find other, currently unmonitored factors in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction model allows one to reduce the cost associated with actual metrology, as well as revealing insights into the factors affecting the important quality factors and into the limits of our less-than-perfect understanding of the treating process.
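A minimal form of such a virtual-metrology model is a regression from monitored process variables to a quality factor. The sketch below fits ordinary least squares on synthetic data standing in for the confidential treating-process measurements; the number of variables, the coefficients, and the noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for treating-process data: 200 runs, 3 monitored
# process variables (e.g. oven temperature, line speed, resin ratio --
# hypothetical names, not the manufacturer's actual factors).
X = rng.normal(size=(200, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + 5.0 + rng.normal(scale=0.1, size=200)

# Ordinary least squares as a minimal virtual-metrology model:
# predict the quality factor instead of physically measuring it.
A = np.hstack([X, np.ones((200, 1))])         # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

The fitted coefficients play the role of the "important predictor variables" discussed in the abstract: a near-zero coefficient on a variable suggests it does not drive that quality factor, which mirrors the T/W finding.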

Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology

Procedia PDF Downloads 350
269 Hydrogen Sulfide Releasing Ibuprofen Derivative Can Protect Heart After Ischemia-Reperfusion

Authors: Virag Vass, Ilona Bereczki, Erzsebet Szabo, Nora Debreczeni, Aniko Borbas, Pal Herczegh, Arpad Tosaki

Abstract:

Hydrogen sulfide (H₂S) is a toxic gas, but it is produced by certain tissues in small quantities. According to earlier studies, ibuprofen and H₂S have a protective effect against heart tissue damage caused by ischemia-reperfusion. Recently, we have been investigating the effect of a new water-soluble H₂S-releasing ibuprofen molecule, administered after artificially generated ischemia-reperfusion, on isolated rat hearts. The H₂S-releasing property of the new ibuprofen derivative was investigated in vitro, in medium derived from heart endothelial cell isolation, at two concentrations. The ex vivo examinations were carried out on rat hearts. Rats were anesthetized with an intraperitoneal injection of ketamine, xylazine, and heparin. After thoracotomy, hearts were excised and placed into ice-cold perfusion buffer. Perfusion of the hearts was conducted in Langendorff mode via the cannulated aorta. In our experiments, we studied the dose-effect of the H₂S-releasing molecule in Langendorff-perfused hearts by applying gradually increasing concentrations of the compound (0-20 µM). The H₂S-releasing ibuprofen derivative was applied before the ischemia for 10 minutes. H₂S concentration was measured in the coronary effluent solution with an H₂S-detecting electrochemical sensor. The 10 µM concentration was chosen for further experiments, in which treatment with this solution occurred after the ischemia. The release of H₂S is driven by the hydrolyzing enzymes present in the heart endothelial cells. The protective effect of the new H₂S-releasing ibuprofen molecule can be confirmed from the infarct sizes of hearts using the triphenyl-tetrazolium chloride (TTC) staining method. Furthermore, we aim to define the effect of the H₂S-releasing ibuprofen derivative on autophagic and apoptotic processes in damaged hearts by investigating the molecular markers of these events with western blotting and immunohistochemistry techniques. 
Our further studies will include the examination of LC3-I/II, p62, Beclin-1, caspase-3, and other apoptotic molecules. We hope that confirming the protective effect of the new H₂S-releasing ibuprofen molecule will open a new possibility for the development of more effective cardioprotective agents with fewer side effects. Acknowledgment: This study was supported by grant NKFIH-K-124719 and by the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of GINOP-2.3.2-15-2016-00043.

Keywords: autophagy, hydrogen sulfide, ibuprofen, ischemia, reperfusion

Procedia PDF Downloads 140
268 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions

Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini

Abstract:

This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks such as approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, and the proposed closed-loop approach demonstrates robustness to these challenges. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. In this work, G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested on a robotic test bench whose onboard sensors estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and the guidance profile provided by the industrial partner. 
The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is implementable in real time, as convex programming offers deterministic convergence properties and guarantees a finite-time solution; 2) critical physical and output constraints are respected; 3) robustness to sensor errors and uncertainties in the system is proven; and 4) translational motion is coupled with rotational motion.
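The dual-quaternion kinematic description mentioned above encodes rotation and translation in one algebraic object. The sketch below shows the basic operations (Hamilton product, construction from a rotation and a translation, composition, and translation recovery); this is generic dual-quaternion algebra, not the authors' G&C formulation.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dualquat(q_rot, t):
    """Unit dual quaternion (real, dual) for rotation q_rot plus translation t."""
    t_q = np.array([0.0, *t])
    return q_rot, 0.5 * qmul(t_q, q_rot)

def dq_mul(dq1, dq2):
    """Composition of two rigid motions as a dual-quaternion product."""
    r1, d1 = dq1
    r2, d2 = dq2
    return qmul(r1, r2), qmul(r1, d2) + qmul(d1, r2)

# Pose: 90-degree rotation about z, then translation by (1, 0, 0).
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
dq = dualquat(q, [1.0, 0.0, 0.0])
# Recover the translation: t = 2 * q_dual * conj(q_real).
conj = lambda p: p * np.array([1.0, -1.0, -1.0, -1.0])
t = 2.0 * qmul(dq[1], conj(dq[0]))[1:]
```

In an MPC setting, the pose error between chaser and target can be expressed as one dual-quaternion product, which is what lets the translational and rotational dynamics be handled in a single coupled formulation.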

Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing

Procedia PDF Downloads 146
267 Interactive Garments: Flexible Technologies for Textile Integration

Authors: Anupam Bhatia

Abstract:

Upon reviewing the literature and the pragmatic work done in the field of e-textiles, it is observed that applications of wearable technologies have found steady growth in the military, medical, industrial, and sports fields, whereas fashion is at a loss as to how to treat this technology and bring it to market. The purpose of this paper is to understand the practical issues of integrating electronics in garments - cutting patterns for mass production, maintaining the basic properties of textiles, and the daily maintenance of garments - that hinder the wide adoption of interactive fabric technology within fashion and leisure wear. To understand these practical hindrances, an experimental and laboratory approach is taken. “Techno Meets Fashion” has been an interactive fashion project in which sensor technologies have been embedded in textiles, resulting in a set of ensembles: light-emitting garments, sound-sensing garments, proximity garments, shape-memory garments, etc. Smart textiles, especially in the form of textile interfaces, are drastically underused in fashion and other lifestyle product design. Clothing and some other textile products must be washable, which subjects the interactive elements to water and chemical immersion, physical stress, and extreme temperatures. The current state of the art tends to be too fragile for this treatment. The process for mass-producing traditional textiles becomes difficult with interactive textiles, as cutting patterns from larger rolls of cloth and sewing them together to make garments breaks and re-forms electronic connections in an uncontrolled manner. Because of this, interactive fabric elements are integrated by hand into textiles produced by standard methods. The Arduino has surely made embedding electronics into textiles much easier than before; even so, electronics are not yet integral to daily-wear garments. 
Soft and flexible interfaces of MEMS (micro-sensors and micro-actuators) can be an option for making this possible by blending electronics within e-textiles in a way that is seamless and still retains the functions of the circuits as well as of the garment. Smart clothes, which simultaneously offer challenging design and utility value, can only be mass-produced if the demands of the body are taken care of, i.e., protection, anthropometry, the ergonomics of human movement, and thermo-physiological regulation.

Keywords: ambient intelligence, proximity sensors, shape memory materials, sound sensing garments, wearable technology

Procedia PDF Downloads 393
266 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) and enable better human social interaction with smart technologies. An FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into various categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such a system requires intensive research to address issues with human diversity, various unique human expressions, and the variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANNs). These conventional FER systems suffer from low accuracy due to their inefficiency in extracting the significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNNs), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively. 
To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to apply advanced optimization techniques to the Xception model to improve the inference speed of the FER system. In comparison with VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
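The two network optimization methods named above can be illustrated at the weight-tensor level: magnitude pruning zeroes the smallest weights, and symmetric linear quantization maps the survivors to int8. The numpy sketch below shows both operations on a random weight matrix; real pipelines apply them per layer with fine-tuning afterwards, which is omitted here.

```python
import numpy as np

def prune_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w).ravel())[k]
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric linear quantization of weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale                      # dequantize with q * scale

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64)).astype(np.float32)   # stand-in for one layer
w_pruned = prune_magnitude(w, sparsity=0.5)        # halves nonzero weights
q, scale = quantize_int8(w_pruned)                 # 4x smaller storage
w_deq = q.astype(np.float32) * scale               # approximate reconstruction
```

Pruning reduces multiply-accumulate work (with sparse kernels), while int8 storage cuts memory to a quarter of float32, which is what enables the reported speed and memory gains on the Xception backbone.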

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 118
265 Challenges of School Leadership

Authors: Stefan Ninković

Abstract:

The main purpose of this paper is to examine different theoretical approaches and relevant empirical evidence and thus recognize some of the most pressing challenges faced by school leaders. This paper starts from the fact that the new mission of the school is characterized by the need for stronger coordination among students' academic, social, and emotional learning. In this sense, school leaders need to focus their commitment, vision, and leadership on issues of students' attitudes, language, cultural and social background, and sexual orientation. More specifically, they should know what good teaching is for at-risk students, for students whose first language is not dominant in school, for those whose learning styles are not in accordance with usual teaching styles, or for those who are stigmatized. There is a rather wide consensus that the traditionally popular concept of the school principal's instructional leadership is no longer sufficient. However, in a number of "pro-leadership" circles, including certain groups of academic researchers, consultants, and practitioners, there is an established tendency to attribute to the school principal an extraordinary influence on school achievement. On the other hand, a situation in which all employees in the school are leaders is a utopia par excellence. Although leadership can obviously be efficiently distributed across the school, there are few findings that speak to the sources of this distribution and the factors making it sustainable. Another idea that is not particularly new, but has only recently gained importance, is that the collective capacity of the school is an important resource that often remains under-cultivated. To understand the nature and power of collaborative school cultures, it is necessary to know that they operate in a way that makes all their members' tacit knowledge explicit. 
In this sense, the question is how leaders in schools can shape a collaborative culture and create social capital in the school. The pressure exerted on schools to systematically collect and use data has been accompanied by the need for school leaders to develop new competencies. The role of school leaders is critical in assessing what data are needed and for what purpose. Different types of data are important: test results, data on student absenteeism, satisfaction with school, teacher motivation, etc. One of the most important tasks of school leaders is data-driven decision making, as well as ensuring the transparency of the decision-making process. Finally, the question arises whether the existing models of school leadership are compatible with current social and economic trends. It is necessary to examine whether, and under what conditions, schools need forms of leadership different from those that currently prevail. Closely related to this issue is the analysis of the adequacy of different approaches to leadership development in schools.

Keywords: educational changes, leaders, leadership, school

Procedia PDF Downloads 336
264 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observation at near-reference quality. The EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic, and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL-traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed in full (100%) in the optical cell, which ensures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for the precise monitoring of particulate matter and particle number concentration. 
This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
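Converting an OPC's binned number concentrations into PM mass fractions requires assumptions about particle shape and density. The sketch below shows the generic spherical-particle conversion for a sharp size cutoff; it is illustrative only and is not GRIMM's proprietary enviro-algorithm, which applies empirically validated corrections.

```python
import math

def pm_mass_ugm3(bin_diam_um, number_cm3, cutoff_um=2.5, rho_g_cm3=1.65):
    """Convert binned number concentrations (particles/cm^3, one value per
    size bin given by its mean diameter in um) into a PM mass concentration
    in ug/m^3, assuming spherical particles of uniform density.
    The density value is a typical ambient-aerosol assumption."""
    total = 0.0
    for d_um, n in zip(bin_diam_um, number_cm3):
        if d_um > cutoff_um:
            continue                                   # sharp size cutoff
        vol_cm3 = math.pi / 6.0 * (d_um * 1e-4) ** 3   # um -> cm, sphere volume
        mass_g = vol_cm3 * rho_g_cm3                   # mass per particle
        total += n * mass_g * 1e12                     # g/cm^3 -> ug/m^3
    return total

pm25 = pm_mass_ugm3([0.3, 0.5, 1.0, 2.0, 5.0],
                    [500.0, 200.0, 50.0, 5.0, 0.5])
```

This is why an OPC with many size channels (31 on the EDM264) can report number, surface, and mass distributions from a single optical measurement.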

Keywords: aerosol research, aerial observation, fence line monitoring, wildfire detection

Procedia PDF Downloads 150
263 Design and Control of a Brake-by-Wire System Using a Permanent Magnet Synchronous Motor

Authors: Daniel S. Gamba, Marc Sánchez, Javier Pérez, Juan J. Castillo, Juan A. Cabrera

Abstract:

The conventional hydraulic braking system operates through the activation of a master cylinder and solenoid valves that distribute and regulate brake fluid flow, adjusting the pressure at each wheel to prevent locking during sudden braking. In recent years, however, there has been a significant increase in the integration of electronic units into various vehicle control systems. In this context, one of the most recently researched technologies is the brake-by-wire system, which combines electronic, hydraulic, and mechanical technologies to manage braking. This proposal introduces the design and control of a brake-by-wire system that will be part of a fully electric, teleoperated vehicle with independent four-wheel drive, braking, and steering systems. The vehicle will be operated by embedded controllers programmed on a Speedgoat test system, which allows programming through Simulink and offers real-time capabilities. The braking system comprises all mechanical and electrical components, a vehicle control unit (VCU), and an electronic control unit (ECU). The mechanical and electrical components include a permanent magnet synchronous motor from ODrive with its inverter, the mechanical transmission system responsible for converting torque into pressure, and the hydraulic system that transmits this pressure to the brake caliper. The VCU is responsible for controlling the pressure and communicates with the other components through the CAN protocol, minimizing response times. The ECU, in turn, transmits the information obtained by a sensor installed in the caliper to the central computer, enabling the control loop to continuously regulate pressure by controlling the motor's speed and current. To achieve this, three controllers are used, operating in a nested configuration for effective control. 
Since the computer allows programming in Simulink, a digital model of the braking system has been developed in Simscape, which makes it possible to reproduce different operating conditions, faithfully simulate the performance of alternative brake control systems, and compare the results with data obtained in various real tests. These tests involve evaluating the system's response to sinusoidal and square wave inputs at different frequencies, with the results compared to those obtained from conventional braking systems.
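The nested configuration described above can be sketched as a cascade in which the pressure loop commands a motor-speed setpoint and the speed loop commands a current setpoint for the inverter's innermost current loop. The gains, limits, and units below are invented for illustration, not the authors' tuned values.

```python
class PI:
    """Minimal discrete-time PI controller with output saturation."""
    def __init__(self, kp, ki, dt, limit=None):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.limit, self.integral = limit, 0.0

    def step(self, error):
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        if self.limit is not None:                 # simple saturation
            u = max(-self.limit, min(self.limit, u))
        return u

# Nested configuration: the pressure loop commands a motor-speed setpoint,
# the speed loop commands a current setpoint, and the inverter's current
# loop (innermost, not modeled here) drives the PMSM.
dt = 1e-3
pressure_loop = PI(kp=50.0, ki=5.0, dt=dt, limit=3000.0)   # bar error -> rpm
speed_loop = PI(kp=0.02, ki=0.5, dt=dt, limit=20.0)        # rpm error -> amps

target_bar, measured_bar, measured_rpm = 40.0, 10.0, 0.0
rpm_sp = pressure_loop.step(target_bar - measured_bar)
amps_sp = speed_loop.step(rpm_sp - measured_rpm)
```

In the real system each loop runs at its own rate (the current loop fastest), and the setpoints travel between VCU and ECU over CAN, which is why the abstract stresses minimizing bus response times.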

Keywords: braking, CAN protocol, permanent magnet motor, pressure control

Procedia PDF Downloads 19
262 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments

Authors: Rohit Dey, Sailendra Karra

Abstract:

This paper aims to solve the problem of multi-target search in a Global Positioning System (GPS)-denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (vis-à-vis, GPS) shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This can be attributed to the fact that robots in a swarm need to share information about their locations and about signals from targets in order to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame, through the unification of the relative coordinates of individual robots when they are within communication range, thereby making the system more robust in real scenarios. Our algorithm assumes that all robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy-walk random exploration until it comes within range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas already explored, about the surroundings, and about target signals from their locations, and make decisions about their future movement based on the search algorithm. 
During exploration there may be several small groups of robots, each with its own coordinate system, but eventually all robots are expected to converge to a single global coordinate frame in which they can exchange information about the exploration area using swarm search techniques. With the proposed method, swarm-based search algorithms can operate in a real-world scenario without GPS and without any prior information about the size or shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), running without global information, achieves results comparable to basic PSO operating with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them in other bio-inspired algorithms operating in GPS-denied environments.
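For two planar robots that measure each other simultaneously with range-and-bearing sensors, the pairwise frame unification has a closed form; the conventions and function names below are ours, not the paper's. Robot A measures (r, beta_a) to B in A's frame, and B measures (r, beta_b) to A in B's frame; this fixes the rigid transform that maps B's frame into A's.

```python
import math
import numpy as np

# Sketch of pairwise coordinate-frame unification (our conventions).
# The transform is x_A = R(theta) @ x_B + t. Since B's origin lies at
# r*[cos(beta_a), sin(beta_a)] in A's frame, and A's origin lies at
# r*[cos(beta_b), sin(beta_b)] in B's frame, the relative frame rotation
# is theta = pi + beta_a - beta_b.

def unify_frames(r, beta_a, beta_b):
    theta = math.pi + beta_a - beta_b          # orientation of B's frame in A's
    t = np.array([r * math.cos(beta_a),        # B's origin expressed in A's frame
                  r * math.sin(beta_a)])
    R = np.array([[math.cos(theta), -math.sin(theta)],
                  [math.sin(theta),  math.cos(theta)]])
    return R, t

def to_frame_a(point_b, R, t):
    """Re-express a point (e.g. an explored cell or a target estimate)
    from B's frame in A's frame."""
    return R @ np.asarray(point_b) + t
```

A quick sanity check: mapping A's own position as seen by B, `[r*cos(beta_b), r*sin(beta_b)]`, through `to_frame_a` should return the origin of A's frame. Once unified, explored-area maps and target-signal locations can be shared directly.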

Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems

Procedia PDF Downloads 137
261 Multi-Analyte Indium Gallium Zinc Oxide-Based Dielectric Electrolyte-Insulator-Semiconductor Sensing Membranes

Authors: Chyuan Haur Kao, Hsiang Chen, Yu Sheng Tsai, Chen Hao Hung, Yu Shan Lee

Abstract:

Dielectric electrolyte-insulator-semiconductor (EIS) sensing-membrane-based biosensors have been intensively investigated because of their simple fabrication, low cost, and fast response. However, to enhance their sensing performance, it is worthwhile to explore alternative materials, distinct processes, and novel treatments. An ISFET can be viewed as a variation of a MOSFET in which the dielectric oxide layer serves as the sensing membrane; the modulation of the gate work function by electrolytes of varying ion concentration can then be used to calculate those concentrations. Recently, owing to the advancement of CMOS technology, several high-dielectric-constant materials have been employed as the sensing membranes of EIS structures. An EIS with a stacked SiO₂ layer between the sensing membrane and the silicon substrate has exhibited high pH sensitivity and good long-term stability. IGZO is a wide-bandgap (~3.15 eV) oxide semiconductor with several desirable properties, including good transparency, high electron mobility, and compatibility with CMOS technology. IGZO was deposited by reactive radio-frequency (RF) sputtering on a p-type silicon wafer at various Ar:O₂ gas ratios and was treated with rapid thermal annealing (RTA) in O₂ ambient. The sensing performance, including sensitivity, hysteresis, and drift rate, was measured, and XRD, XPS, and AFM analyses were used to study the material properties of the IGZO membrane. Moreover, IGZO was used as a sensing membrane in dielectric EIS biosensor structures. In addition to traditional pH sensing, detection of Na⁺, K⁺, urea, glucose, and creatinine concentrations was performed. Post-deposition RTA treatment was confirmed to improve the material properties and enhance the multi-analyte sensing capability for various ions and chemicals in solution.
In this study, the IGZO sensing membrane annealed in O₂ ambient exhibited higher sensitivity, higher linearity, higher H⁺ selectivity, a lower hysteresis voltage, and a lower drift rate. The results indicate that an IGZO dielectric sensing membrane in the EIS structure is promising for future biomedical device applications.
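As a sketch of how sensitivity and linearity figures of this kind are typically extracted, the following fits a line to reference-voltage shifts versus buffer pH; the data points are invented for illustration and are not measurements from this work.

```python
import numpy as np

# Hypothetical reference-voltage shifts of an EIS structure in pH buffers.
# Sensitivity is the fitted slope in mV/pH; linearity is the R^2 of the fit.
ph = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
v_ref_mV = np.array([100.0, 212.0, 325.0, 440.0, 551.0, 664.0])  # made-up data

slope, intercept = np.polyfit(ph, v_ref_mV, 1)   # slope = sensitivity, mV/pH
fit = slope * ph + intercept
ss_res = np.sum((v_ref_mV - fit) ** 2)
ss_tot = np.sum((v_ref_mV - v_ref_mV.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# For this example the slope is ~56.5 mV/pH, close to the ideal Nernstian
# response of ~59.2 mV/pH at 25 °C; a membrane treatment that raises the
# slope toward that limit is what "higher sensitivity" refers to.
```

Hysteresis and drift would be assessed separately, from pH-loop measurements and from long-duration voltage records, respectively.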

Keywords: dielectric sensing membrane, IGZO, hydrogen ion, plasma, rapid thermal annealing

Procedia PDF Downloads 251
260 Intergenerational Succession within Family Businesses: The Role of Sharing and Creation Knowledge

Authors: Wissal Ben Arfi, Jean-Michel Sahut

Abstract:

The purpose of this paper is to provide a deeper understanding of the succession process from a knowledge management perspective. To this end, the succession process in family businesses was explored as an environment for creating and sharing knowledge. Design/Methodology/Approach: To support our reasoning, we collected qualitative data through 16 in-depth interviews conducted with all decision-makers involved in the succession processes of family businesses in France. These open-ended responses were subsequently subjected to thematic discourse analysis. Findings: Central to these findings are the nature and magnitude of knowledge creation and sharing among the actors within the family succession context, and how tacit knowledge sharing can facilitate the succession process. We also identified factors that inhibit the knowledge creation and sharing processes. The sharing and creation of knowledge among members of a family business appear to be a complex process that must be part of a strategy for change. This implies that it requires trust and takes time, because it entails organizational change and a clear, coherent strategic vision that is accepted and assimilated by all members. Professional and leadership skills are of particular importance in knowledge sharing and creation processes. In most cases, tacit knowledge is crucial when it is shared and accumulated collectively. Our findings reveal that managers should find ways of implementing knowledge sharing and creation processes while managing the succession process within family firms. This study highlights the importance of knowledge strategies for enhancing the performance and success of intergenerational succession. The empirical outcomes enrich the field of succession-process management and clarify the role of knowledge in shaping family-business performance and longevity.
To a large extent, the lesson learned from studying succession processes in family-owned businesses is that a deliberate effort to introduce a knowledge-based approach becomes a seminal event in the life of the organization. Originality/Value: The paper contributes to a deeper understanding of the interactions among actors by examining knowledge creation and sharing processes, whereas current research on family succession has focused on aspects such as the personal development of potential successors, intra-family succession intention, and decision-making processes in family businesses. Moreover, as succession is one of the key factors determining the longevity and performance of family businesses, the paper also contributes to the literature by examining how tacit knowledge is transferred, shared, and created in family businesses, and how this can facilitate the intergenerational succession process.

Keywords: family-owned businesses, succession process, knowledge, performance

Procedia PDF Downloads 208
259 Establishing Community-Based Pro-Biodiversity Enterprise in the Philippines: A Climate Change Adaptation Strategy towards Agro-Biodiversity Conservation and Local Green Economic Development

Authors: Dina Magnaye

Abstract:

In the Philippines, the performance of the agricultural sector is gauged by crop productivity and returns from farm production rather than by the biodiversity of the agricultural ecosystem. Agricultural development hinges on the overall goal of increasing productivity through intensive agriculture, monoculture systems, utilization of high-yielding plant varieties, and genetic upgrading of animals. This merits an analysis of the role of agro-biodiversity in increasing productivity, food security, and economic returns from community-based pro-biodiversity enterprises, which conserve biodiversity while equitably sharing production income from the utilization of biological resources. The study aims to determine how community-based pro-biodiversity enterprises can become instrumental in local climate change adaptation and agro-biodiversity conservation, as input to local green economic development planning. The perceptions of local community members, in both urban and upland rural areas, of community-based pro-biodiversity enterprises were evaluated. These served as a basis for developing a planning modality that can be mainstreamed into the management of local green economic enterprises to benefit the environment, provide local income opportunities, conserve species diversity, and sustain environment-friendly farming systems and practices.
Interviews conducted with organic farmer-owners, entrepreneur-organic farmers, and organic farm workers revealed that pro-biodiversity enterprises such as organic farming involve the cyclic use of natural resources within the carrying capacity of a farm; recognition of the value of tradition and culture, especially in upland rural areas; enhancement of socio-economic capacity; conservation of ecosystems in harmony with nature; and climate change mitigation. The suggested planning modality for community-based pro-biodiversity enterprises in a green economy encompasses four phases: community resource or capital asset profiling; stakeholder vision development; strategy formulation for sustained enterprises; and monitoring and evaluation.

Keywords: agro-biodiversity, agro-biodiversity conservation, local green economy, organic farming, pro-biodiversity enterprise

Procedia PDF Downloads 362
258 Exploring the Concept of Fashion Waste: Hanging by a Thread

Authors: Timothy Adam Boleratzky

Abstract:

The goal of this work is the repurposing of textile scraps into wearable art. Combining Life Cycle Assessment (LCA) methodologies with contemporary design techniques, the research examines the environmental implications of textile waste. Drawing on empirical evidence and scholarly discourse, the study establishes the urgent need for waste-reduction strategies and highlights the transformative potential of adopting circular-economy principles within fashion. The investigation charts pathways toward waste minimisation and resource optimisation, from the adoption of recycling strategies to the cultivation of eco-friendly production techniques, working toward a practical blueprint for a more sustainable fashion ecosystem. Within this framework, wearable art emerges as a catalyst for change: through creativity and craftsmanship, discarded textile scraps are given new life as finished pieces that demonstrate both human ingenuity and a commitment to environmental preservation.
The implications of the research extend beyond academia into industry practice, offering designers, entrepreneurs, and consumers a more sustainable vision of fashion in which beauty coexists with responsibility, and fashion becomes not merely an expression of style but a celebration of sustainability.

Keywords: fabric-manipulation, sustainability, textiles, waste, wearable-art

Procedia PDF Downloads 41
257 Hibiscus Sabdariffa Extracts: A Sustainable and Eco-Friendly Resource for Multifunctional Cellulosic Fibers

Authors: Mohamed Rehan, Gamil E. Ibrahim, Mohamed S. Abdel-Aziz, Shaimaa R. Ibrahim, Tawfik A. Khattab

Abstract:

The utilization of natural products for finishing textiles toward multifunctional applications, without harmful side effects, is a highly motivating goal. Hibiscus sabdariffa has traditionally been used in many folk-medicine applications. To develop an additional use for Hibiscus sabdariffa, bioactive compounds were extracted from the plant and applied as finishes to cellulosic fibers, enabling cleaner production of value-added textile fibers with multifunctional applications. The objective of this study is to explore, identify, and evaluate the bioactive compounds extracted from Hibiscus sabdariffa with different solvents under ultrasonication as potential eco-friendly agents for multifunctional cellulosic fabrics, via two approaches. In the first approach, Hibiscus sabdariffa extract was used as a sustainable, eco-friendly agent for the simultaneous coloration and multi-finishing of cotton fabrics via in situ incorporation of nanoparticles (silver and metal oxides). In the second approach, Hibiscus sabdariffa extracts were microencapsulated and then coated onto cotton gauze to introduce multifunctional healthcare applications. The effect of solvent type, under ultrasonic acceleration, on the phytochemical, antioxidant, and volatile compounds of Hibiscus sabdariffa was examined. The surface morphology and elemental content of the treated fabrics were explored using Fourier-transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDX). The multifunctional properties of the treated fabrics were evaluated, including coloration, sensing properties, protection against pathogenic microorganisms and UV radiation, and wound-healing performance.
The results showed that water and ethanol/water were selected as extraction solvents for the natural compounds of Hibiscus sabdariffa on account of their high extract yield, total phenolic content, flavonoid content, and antioxidant activity. These natural compounds were used to functionalize cellulosic fibers, imparting a faint-to-dark red color, antimicrobial activity against different organisms, antioxidant activity, and UV-protection properties. The encapsulation of Hibiscus sabdariffa extracts and the wound-healing performance remain under evaluation. Overall, the current study presents a sustainable, eco-friendly approach to designing cellulosic fabrics for multifunctional medical and healthcare applications.

Keywords: cellulosic fibers, Hibiscus sabdariffa extract, multifunctional application, nanoparticles

Procedia PDF Downloads 145
256 Synthesis of High-Antifouling Ultrafiltration Polysulfone Membranes Incorporating Low Concentrations of Graphene Oxide

Authors: Abdulqader Alkhouzaam, Hazim Qiblawey, Majeda Khraisheh

Abstract:

Membrane treatment for desalination and wastewater treatment is one of the most promising routes to affordable clean water; it is a developing technology throughout the world and is considered the most effective and economical method available. However, limitations in membranes' mechanical and chemical properties restrict their industrial applications. Hence, developing novel membranes has been the focus of most studies in the water treatment and desalination sector, seeking new materials that improve separation efficiency while reducing membrane fouling, the most important challenge in this field. Graphene oxide (GO) is one of the materials recently investigated in the membrane water treatment sector. In this work, ultrafiltration polysulfone (PSF) membranes with high antifouling properties were synthesized by incorporating different loadings of GO. GO with a high oxidation degree was synthesized using a modified Hummers' method and characterized using several analytical techniques, including Fourier-transform infrared spectroscopy with a universal attenuated-total-reflectance sensor (FTIR-UATR), Raman spectroscopy, and CHNSO elemental analysis. CHNSO analysis showed a high oxidation degree of the GO, reflected in its oxygen content (50 wt.%). Ultrafiltration PSF membranes incorporating GO were then fabricated using the phase inversion technique. The prepared membranes were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM), which showed a clear effect of GO on the physical structure and morphology of the PSF. Water contact angle measurements showed better hydrophilicity of the GO membranes compared to pure PSF, owing to the hydrophilic nature of GO. The separation properties of the prepared membranes were investigated using a cross-flow membrane system, and antifouling properties were studied using bovine serum albumin (BSA) and humic acid (HA) as model foulants.
GO-based membranes were found to exhibit higher antifouling properties than pure PSF. With BSA, the flux recovery ratio (FRR) increased from 65.4 ± 0.9 % for pure PSF to 84.0 ± 1.0 % at a loading of 0.05 wt.% GO in PSF. With HA as the model foulant, the FRR increased from 87.8 ± 0.6 % to 93.1 ± 1.1 % at 0.02 wt.% GO. The pure water permeability (PWP) decreased with GO loading, from 181.7 L·m⁻²·h⁻¹·bar⁻¹ for pure PSF to 181.1 and 157.6 L·m⁻²·h⁻¹·bar⁻¹ at 0.02 and 0.05 wt.% GO, respectively. It can be concluded from the obtained results that incorporating a low loading of GO enhances the antifouling properties of PSF, thereby improving membrane lifetime and reuse.
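The flux recovery ratio quoted above is conventionally computed as the ratio of the pure-water flux recovered after fouling and washing to the initial pure-water flux. A minimal sketch follows; the recovered-flux value is back-calculated from the quoted FRR for illustration, since it is not reported in the abstract.

```python
# FRR (%) = 100 * J_recovered / J_initial, where J_initial is the
# pure-water flux before fouling and J_recovered the pure-water flux
# after the fouling run and washing. A higher FRR means fouling was
# more reversible, i.e. better antifouling behaviour.

def flux_recovery_ratio(j_initial, j_recovered):
    return 100.0 * j_recovered / j_initial

j0 = 181.7            # L m^-2 h^-1 bar^-1, pure-PSF PWP from the abstract
j_after = 0.654 * j0  # hypothetical recovered flux matching the quoted 65.4 %
frr = flux_recovery_ratio(j0, j_after)
```

By this measure, raising the FRR from 65.4 % to 84.0 % with only 0.05 wt.% GO is a substantial gain for a small additive loading.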

Keywords: antifouling properties, GO based membranes, hydrophilicity, polysulfone, ultrafiltration

Procedia PDF Downloads 143