Search results for: multi sensor image fusion

4497 Discourse Analysis and Semiotic Researches: Using Michael Halliday's Sociosemiotic Theory

Authors: Deyu Yuan

Abstract:

Discourse analysis, as an interdisciplinary approach, has a history of more than 60 years since Zellig Harris first named it in his 1952 article 'Discourse Analysis' in Language. Ferdinand de Saussure differentiated 'parole' from 'langue', which established the principle of focusing on language rather than speech. The rise of discourse analysis can therefore be seen as a discursive turn for language research as a whole, closely related to speech act theory. Critical discourse analysis has become the mainstream of contemporary language research by drawing upon M. A. K. Halliday's socio-semiotic theory and the views of Foucault, Barthes, and Bourdieu on the sign, discourse, and ideology. In contrast to general semiotics, social semiotics mainly focuses on parole and on the application of semiotic theories to practical fields. The article attempts to discuss this applicable sociosemiotics and to show the features that distinguish it from Saussurean and Peircean semiotics in four respects: 1) the sign system is a meaning-generating resource in the social context; 2) the sign system conforms to social and cultural changes through metaphor and connotation; 3) sociosemiotics is concerned with five applicable principles (the personal authority principle, the non-personal authority principle, the consistency principle, the model demonstration principle, and the expertise principle) that deepen specific communication; 4) the study of symbolic functions targets the ideational, interpersonal, and interactional functions in the process of social communication. The paper then describes six features that characterize this sociosemiotics as applicable semiotics: it is social, systematic, usable, interdisciplinary, dynamic, and multi-modal. Thirdly, the paper explores the multi-modal choices of sociosemiotics with respect to genre, discourse, and style. Finally, the paper discusses the relationship between theory and practice in social semiotics and proposes a relatively comprehensive theoretical framework for social semiotics as applicable semiotics.

Keywords: discourse analysis, sociosemiotics, pragmatics, ideology

Procedia PDF Downloads 351
4496 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
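
A minimal NumPy sketch of the core idea follows: fit a plane to each elevation window, record slope and residual variance at every aggregation level, and report the slope from the level with minimum variance. This is not the authors' implementation; it recomputes each fit by brute force over non-overlapping blocks instead of using the additive aggregation of regression sums described above, and the function names, cell size, and maximum aggregation level are illustrative assumptions.

```python
# Hedged sketch of scale-adaptive slope estimation on a 2x2-aggregation pyramid (illustrative, brute force).
import numpy as np

def plane_fit_slope(z, cell):
    """Least-squares plane z = a*x + b*y + c over a square window; returns slope (degrees) and residual variance."""
    n = z.shape[0]
    y, x = np.mgrid[0:n, 0:n] * cell
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(n * n)])
    coef, res, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    a, b, _ = coef
    slope = np.degrees(np.arctan(np.hypot(a, b)))
    var = res[0] / (n * n) if res.size else 0.0
    return slope, var

def scale_adaptive_slope(dem, cell=1.0, max_level=4):
    """Per raster point, keep the slope from the window size (2, 4, 8, ...) that minimises residual variance."""
    best_slope = np.full(dem.shape, np.nan)
    best_var = np.full(dem.shape, np.inf)
    for level in range(1, max_level + 1):
        w = 2 ** level
        for i in range(0, dem.shape[0] - w + 1, w):   # border cells that do not fill a full block are skipped
            for j in range(0, dem.shape[1] - w + 1, w):
                s, v = plane_fit_slope(dem[i:i + w, j:j + w], cell)
                mask = v < best_var[i:i + w, j:j + w]
                best_var[i:i + w, j:j + w][mask] = v
                best_slope[i:i + w, j:j + w][mask] = s
    return best_slope, best_var
```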

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 129
4495 Hydrological Response of the Glacierised Catchment: Himalayan Perspective

Authors: Sonu Khanal, Mandira Shrestha

Abstract:

Snow and glaciers are the largest dependable reserves of water for the river systems originating from the Himalayas, so accurate estimates of the volume of water contained in the snowpack and of the rate of release of water from snow and glaciers are needed for efficient management of water resources. This research assesses the fusion of energy exchanges between the snowpack, the air above, and the soil below according to mass and energy balance, which makes it more apposite than models using a simple temperature index for snow and glacier melt computation. UEBGrid, a distributed energy-balance model, is used to calculate the melt, which is then routed by Geo-SFM. The model robustness is maintained by incorporating the albedo generated from Landsat-7 ETM images on a seasonal basis for the year 2002-2003 and a substrate map derived from TM. The substrate file predominantly includes four major thematic layers, viz. snow, clean ice, glaciers, and barren land. This approach makes use of CPC RFE-2 and MERRA gridded data sets as the source of precipitation and climatic variables. The subsequent model run for the years 2002-2008 shows that a total annual melt of 17.15 meters is generated from the Marshyangdi Basin, of which 71% is contributed by glaciers, 18% by rain, and the rest by snowmelt. The albedo file is decisive in governing the melt dynamics, as a 30% increase in the generated surface albedo results in a 10% decrease in the simulated discharge. The melt routed with the land cover and soil variables using Geo-SFM shows a Nash-Sutcliffe efficiency of 0.60 against observed discharge for the study period.
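
A short sketch of the Nash-Sutcliffe efficiency used above to score simulated against observed discharge; the discharge values in the example are illustrative placeholders, not the Marshyangdi record.

```python
# Hedged sketch: Nash-Sutcliffe efficiency (NSE); 1.0 is a perfect fit, 0.0 means no better than the observed mean.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Illustrative discharge series only (m3/s):
obs = [120.0, 180.0, 260.0, 310.0, 240.0, 150.0]
sim = [110.0, 200.0, 240.0, 330.0, 250.0, 140.0]
print(round(nash_sutcliffe(obs, sim), 2))
```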

Keywords: glacier, glacier melt, snowmelt, energy balance

Procedia PDF Downloads 455
4494 Impact of a Structured Antimicrobial Stewardship Program in a North-East Italian Hospital

Authors: Antonio Marco Miotti, Antonella Ruffatto, Giampaola Basso, Antonio Madia, Giulia Zavatta, Emanuela Salvatico, Emanuela Zilli

Abstract:

A National Action Plan to fight antimicrobial resistance was launched in Italy in 2017. In order to reduce inappropriate exposure to antibiotics and infections from multi-drug resistant bacteria, it is essential to set up a structured system of surveillance and monitoring of the implementation of National Action Plan standards, including antimicrobial consumption, with a special focus on quinolones, third-generation cephalosporins, and carbapenems. A quantitative estimate of antibiotic consumption (defined daily dose, DDD, per 100 days of hospitalization) has been provided by the Pharmaceutical Service to the Hospital of Cittadella, ULSS 6 Euganea – Health Trust (District of Padua) for the years 2019 (before the pandemic), 2020, and 2021 for all classes of antibiotics. Multidisciplinary meetings have been organized monthly by the local Antimicrobial Stewardship Group. Between 2019 and 2021, an increase in the consumption of carbapenems in the Intensive Care Unit (from 12.2 to 18.2 DDD, +49.2%) and a decrease in Medical wards (from 5.3 to 2.6 DDD, -50.9%) were reported; a decrease in the consumption of quinolones in the Intensive Care Unit (from 17.2 to 10.8 DDD, -37.2%), Medical wards (from 10.5 to 6.6 DDD, -37.1%), and Surgical wards (from 10.2 to 9.3 DDD, -8.8%) was highlighted; an increase in the consumption of third-generation cephalosporins in Medical wards (from 18.1 to 22.6 DDD, +24.1%) was reported. Finally, after an increase in the consumption of macrolides between 2019 and 2020, a decrease was reported in 2021 in the Intensive Care Unit (DDD: 8.0 in 2019, 18.0 in 2020, 6.4 in 2021) and Medical wards (DDD: 9.0 in 2019, 13.7 in 2020, 10.9 in 2021). Constant monitoring of antimicrobial consumption and timely identification of warning situations that may need a specific intervention are the cornerstones of Antimicrobial Stewardship programs, together with analysing data on bacterial resistance rates and infections from multi-drug resistant bacteria.
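
For readers unfamiliar with the consumption metric, a small sketch of how DDD per 100 days of hospitalization and the reported percent changes are computed follows; the total grams, the assumed DDD value, and the patient-day figures are invented for illustration and are not the Cittadella data.

```python
# Hedged sketch: defined daily doses (DDD) per 100 days of hospitalization and percent change between years.
def ddd_per_100_patient_days(total_grams, ddd_grams, patient_days):
    return (total_grams / ddd_grams) / patient_days * 100.0

def percent_change(old, new):
    return (new - old) / old * 100.0

# Illustrative figures only (not the hospital's data); 3.0 g is an assumed DDD value.
ddd_2019 = ddd_per_100_patient_days(total_grams=1830.0, ddd_grams=3.0, patient_days=5000.0)  # 12.2
ddd_2021 = ddd_per_100_patient_days(total_grams=2730.0, ddd_grams=3.0, patient_days=5000.0)  # 18.2
print(round(percent_change(ddd_2019, ddd_2021), 1))  # ~49.2, the same kind of figure as the carbapenem trend above
```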

Keywords: carbapenems, quinolones, antimicrobial, stewardship

Procedia PDF Downloads 158
4493 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling

Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari

Abstract:

A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t=Zw with ‖w‖=1. Denoting by Z' the transpose of Z, we define herein, a latent variable by t=ZZ’q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w=Z’q is constrained to be a linear combination of the rows of Z. More importantly, t could be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u=YY’q to which we associate the variable t= XX’YY’q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY’XX’YY’ associated with the largest eigenvalue. For the determination of higher order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u=YY’q, we consider its ‘projection’ on the space generated by the variables of each block Xk (k=1, ..., K) namely, tk= XkXk'YY’q. Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k=1, ..., K). The solution to this problem is given by q, eigenvector of YY’XX’YY’, where X is the dataset obtained by horizontally merging datasets Xk (k=1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t= XX’YY’q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. Methods are illustrated on the basis of case studies and performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
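
A compact NumPy sketch of the first Rd-PLS component, written directly from the definitions above (q as the leading eigenvector of YY'XX'YY', then u = YY'q and t = XX'YY'q); it is a didactic illustration rather than the authors' code, and the random data and centering-only preprocessing are assumptions.

```python
# Hedged sketch: first Rd-PLS latent variables from the eigen-decomposition described above.
import numpy as np

def rd_pls_first_component(X, Y):
    """q = leading eigenvector of Y Y' X X' Y Y'; u = Y Y' q; t = X X' Y Y' q."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    M = Y @ Y.T @ X @ X.T @ Y @ Y.T        # symmetric n x n matrix
    _, eigvecs = np.linalg.eigh(M)
    q = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue, ||q|| = 1
    u = Y @ Y.T @ q
    t = X @ X.T @ u
    return t, u, q

# Usage on random data:
rng = np.random.default_rng(0)
t, u, q = rd_pls_first_component(rng.normal(size=(30, 5)), rng.normal(size=(30, 3)))
print(round(abs(np.corrcoef(t, u)[0, 1]), 3))   # the covariance between t and u is what Rd-PLS maximises
```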

Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis

Procedia PDF Downloads 147
4492 Multi-Walled Carbon Nanotubes as Nucleating Agents

Authors: Rabindranath Jana, Plabani Basu, Keka Rana

Abstract:

Nucleating agents are widely used to modify the properties of various polymers. The rate of crystallization and the size of the crystals have a strong impact on the mechanical and optical properties of a polymer. The addition of nucleating agents to semi-crystalline polymers provides a surface on which crystal growth can start easily. As a consequence, fast crystal formation will result in many small crystal domains, so that the cycle times for injection molding may be reduced. Moreover, the mechanical properties, e.g., modulus, tensile strength, heat distortion temperature, and hardness, may increase. In the present work, multi-walled carbon nanotubes (MWNTs) were used as nucleating agents for the crystallization of poly(e-caprolactone)diol (PCL). Thus, nanocomposites of PCL filled with MWNTs were prepared by solution blending. Differential scanning calorimetry (DSC) tests were carried out to study the effect of CNTs on the non-isothermal crystallization of PCL. Polarizing optical microscopy (POM) and wide-angle X-ray diffraction (WAXD) were used to study the morphology and crystal structure of PCL and its nanocomposites. It is found that MWNTs act as effective nucleating agents that significantly shorten the induction period of crystallization but decrease the crystallization rate of PCL, exhibiting a remarkable decrease in the Avrami exponent n, the surface folding energy σe, and the crystallization activation energy ΔE. The carbon-based fillers act as templates for the hard block chains of PCL to form an ordered structure on the surface of the nanoparticles during the induction period, bringing about some increase in the equilibrium temperature. The melting processes of PCL and its nanocomposites are also studied; the nanocomposites exhibit two melting peaks at higher crystallization temperatures, which mainly correspond to the melting of crystals of different sizes, whereas PCL shows only one melting temperature.
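
Since the Avrami exponent n is central to the reported kinetics, a small sketch of the standard Avrami analysis is included below; the crystallinity data are invented for illustration, and non-isothermal data would normally require a modified (e.g., Jeziorny-type) treatment rather than this plain isothermal form.

```python
# Hedged sketch: Avrami exponent n and rate constant k from relative crystallinity, X(t) = 1 - exp(-k t^n).
import numpy as np

def avrami_fit(t, X):
    """Linear fit of ln(-ln(1 - X)) versus ln(t); slope = n, intercept = ln(k)."""
    t = np.asarray(t, dtype=float)
    X = np.asarray(X, dtype=float)
    mask = (X > 0.02) & (X < 0.98)          # restrict to the usual linear range of the Avrami plot
    y = np.log(-np.log(1.0 - X[mask]))
    x = np.log(t[mask])
    n, ln_k = np.polyfit(x, y, 1)
    return n, np.exp(ln_k)

# Illustrative data only (time in minutes, relative crystallinity):
t = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
X = [0.05, 0.22, 0.45, 0.68, 0.84, 0.93]
n, k = avrami_fit(t, X)
print(round(n, 2), round(k, 3))
```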

Keywords: poly(e-caprolactone)diol, multiwalled carbon nanotubes, composite materials, nonisothermal crystallization, crystal structure, nucleation

Procedia PDF Downloads 496
4491 Reshaping of Indian Education System with the Help of Multi-Media: Promises and Pitfalls

Authors: Geetu Gahlawat

Abstract:

The education system is fed with information on a daily basis and in a variety of forms, i.e., through multimedia channels. This can make it challenging for pedagogues to hold the learner's attention. Multimedia enhances the education system with its technology: educators can deliver their content effectively and beyond any limit through multimedia elements, while learners, on the other side, find learning easier and are able to reach their goals faster. This paper gives an overview of how multimedia is reshaping the Indian education system, with its promises and pitfalls.

Keywords: multimedia, technology, techniques, development, pedagogy

Procedia PDF Downloads 281
4490 Collective Problem Solving: Tackling Obstacles and Unlocking Opportunities for Young People Not in Education, Employment, or Training

Authors: Kalimah Ibrahiim, Israa Elmousa

Abstract:

This study employed the world café method alongside semi-structured interviews within a 'conversation café' setting to engage stakeholders from the public health and primary care sectors. The objective was to collaboratively explore strategies to improve outcomes for young people not in education, employment, or training (NEET). The discussions were aimed at identifying the underlying causes of disparities faced by NEET individuals, exchanging experiences, and formulating community-driven solutions to bolster preventive efforts and shape policy initiatives. A thematic analysis of the qualitative data gathered emphasized the importance of community problem-solving through the exchange of ideas and reflective discussions. Healthcare professionals reflected on their potential roles, pinpointing a significant gap in understanding the specific needs of the NEET population and the unclear distribution of responsibilities among stakeholders. The results underscore the necessity for a unified approach in primary care and the fostering of multi-agency collaborations that focus on addressing social determinants of health. Such strategies are critical not only for the immediate improvement of health outcomes for NEET individuals but also for informing broader policy decisions that can have long-term benefits. Further research is ongoing, delving deeper into the unique challenges faced by this demographic and striving to develop more effective interventions. The study advocates for continued efforts to integrate insights from various sectors to create a more holistic and effective response to the needs of the NEET population, ensuring that future strategies are informed by a comprehensive understanding of their circumstances and challenges.

Keywords: multi-agency working, primary care, public health, social inequalities

Procedia PDF Downloads 39
4489 Study on Wireless Transmission for Reconnaissance UAV with Wireless Sensor Network and Cylindrical Array of Microstrip Antennas

Authors: Chien-Chun Hung, Chun-Fong Wu

Abstract:

It is important for a commander to have real-time information in order to be aware of the situation and to make decisions on the battlefield. Developments in modern technology have brought in this kind of information for military purposes. An unmanned aerial vehicle (UAV) is one of the means of gathering intelligence owing to its widespread applications. It is still not clear whether or not a mini UAV with a short-range wireless transmission system is used as a reconnaissance system in Taiwan. In this paper, previous experience from research on this sort of aerial vehicle has been applied together with a data-relay system using the ZigBee module. The mini UAV developed is expected to be able to collect certain data in appropriate theaters. An omni-directional antenna with high gain is also integrated into the mini UAV to fit the size-reducing trend of airborne sensors. Two advantages are so far obvious. First, the mini UAV can fly higher than usual to avoid being attacked by ground fire. Second, the data can be gathered during almost all maneuvering attitudes.

Keywords: mini UAV, reconnaissance, wireless transmission, ZigBee module

Procedia PDF Downloads 194
4488 Protection of Steel Bars in Reinforce Concrete with Zinc Based Coverings

Authors: Hamed Rajabzadeh Gatabi, Soroush Dastgheibifard, Mahsa Asnafi

Abstract:

There is no doubt that reinforced concrete is one of the most significant materials used in the construction industry, and it has been so for many years. However, some environmental factors can contribute to its corrosion or failure. One of these is bar, or so-called reinforcement, failure. To combat this problem, one of the oxidation prevention methods investigated was the barrier protection method, implemented through the application of an organic coating, specifically fusion-bonded epoxy. In this study, a comparative investigation was carried out on two different kinds of coated bars (zinc-rich epoxy and polyamide epoxy coated bars) and also on an uncoated bar. With the aim of evaluating these reinforced concretes, the adhesion, toughness, thickness, and corrosion performance of the coatings were compared using tools such as Cu/CuSO4 electrodes and EIS. The different types of concrete were exposed to a salty environment (NaCl 3.5%), and their durability was measured. According to the experiments, the thick (epoxy) coatings have acceptable adhesion and strength. The adhesion of the polyamide epoxy coating to the bars was slightly better than that of the zinc-rich epoxy coating; nonetheless, it was stiffer than the zinc-rich epoxy coating. Conversely, bars coated with zinc-rich epoxy showed more negative oxidation potentials, reflecting the sacrificial protection of the bars by the zinc particles. On the whole, zinc-rich epoxy coatings are more corrosion-resistant than polyamide epoxy coatings due to the consumption of the zinc particles and some other parameters. Additionally, if epoxy coatings without surface defects are applied carefully to the rebar surface, it can be said that the life of steel structures can be expected to increase dramatically.

Keywords: surface coating, epoxy polyamide, reinforce concrete bars, salty environment

Procedia PDF Downloads 289
4487 Use of Numerical Tools Dedicated to Fire Safety Engineering for the Rolling Stock

Authors: Guillaume Craveur

Abstract:

This study shows the opportunity to use numerical tools dedicated to fire safety engineering for rolling stock. Indeed, some regulatory requirements can now be demonstrated by using numerical tools. The first part of this study presents the use of an evacuation modelling tool to satisfy the evacuation time criteria for rolling stock. The buildingEXODUS software is used to model and simulate the evacuation of rolling stock. Firstly, in order to demonstrate the reliability of this tool for calculating the complete evacuation time, a comparative study was carried out between a real test and simulations done with buildingEXODUS. Multiple simulations are performed to capture the stochastic variations in egress times. Then, a new study is done to calculate the complete evacuation time of a train with the same geometry but with a different interior architecture. The second part of this study shows some applications of Computational Fluid Dynamics. This work presents a multi-scale validation of numerical simulations of standardized tests with the Fire Dynamics Simulator software developed by the National Institute of Standards and Technology (NIST). This work first addresses the cone calorimeter test, described in the standard ISO 5660, in order to characterize the fire reaction of materials. The aim of this process is to readjust measurement results from the cone calorimeter test in order to create a data set usable at the seat scale. In the second step, the modelling concerns the fire seat test described in the standard EN 45545-2. The data set obtained through the validation of the cone calorimeter test was set up in the fire seat test. In the third step, after checking the data obtained for the seat from the cone calorimeter test, a larger-scale simulation with a real part of a train is carried out.

Keywords: fire safety engineering, numerical tools, rolling stock, multi-scales validation

Procedia PDF Downloads 303
4486 Optimal Peer-to-Peer On-Orbit Refueling Mission Planning with Complex Constraints

Authors: Jing Yu, Hongyang Liu, Dong Hao

Abstract:

On-orbit refueling is of great significance in extending spacecraft lifetimes. The problem of minimum-fuel, time-fixed, peer-to-peer on-orbit refueling mission planning is addressed here, with the particular aim of assigning fuel-insufficient satellites to fuel-sufficient satellites and optimizing each rendezvous trajectory. Constraints including perturbation, communication link, sun illumination, hold points for different rendezvous phases, and sensor switching are considered. A planning model is established, as well as a two-level solution method. The upper level deals with target assignment based on a fuel equilibrium criterion, while the lower level solves constrained trajectory optimization using special maneuver strategies. Simulations show that the developed method can effectively resolve the peer-to-peer on-orbit refueling mission planning problem and deal with complex constraints.

Keywords: mission planning, orbital rendezvous, on-orbit refueling, space mission

Procedia PDF Downloads 226
4485 Detection of Flood Prone Areas Using Multi Criteria Evaluation, Geographical Information Systems and Fuzzy Logic. The Ardas Basin Case

Authors: Vasileiou Apostolos, Theodosiou Chrysa, Tsitroulis Ioannis, Maris Fotios

Abstract:

The severity of extreme phenomena is due to their ability to cause severe damage in a small amount of time. It has been observed that floods affect the greatest number of people and cause the biggest damage when compared to all other annual natural disasters. The detection of potential flood-prone areas constitutes one of the fundamental components of the European Natural Disaster Management Policy, directly connected to the European Directive 2007/60. The aim of the present paper is to develop a new methodology that combines geographical information, fuzzy logic, and multi-criteria evaluation methods so that the most vulnerable areas can be identified. Therefore, ten factors related to the geophysical, morphological, climatological/meteorological, and hydrological characteristics of the basin were selected. Afterwards, two models were created to detect the areas most prone to flooding. The first model defined the weight of each factor using the Analytical Hierarchy Process (AHP), and the final map of possible flood spots was created using GIS and Boolean algebra. The second model made use of the combination of fuzzy logic and GIS, and a respective map was created. The application area of the aforementioned methodologies was the Ardas basin, due to the frequent and significant floods that have taken place there in recent years. Then, the results were compared to the floods already observed. The result analysis shows that both models can detect possible flood spots with great precision. As the fuzzy logic model is less time-consuming, it is considered the ideal model to apply to other areas. These results can contribute to the delineation of high-risk areas and to the creation of successful management plans dealing with floods.
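
A small sketch of the AHP weighting step mentioned above (principal-eigenvector weights plus the consistency ratio) follows; the 3x3 comparison matrix and the factor ordering are illustrative, not the study's ten-factor matrix.

```python
# Hedged sketch: AHP factor weights and consistency ratio from a reciprocal pairwise comparison matrix.
import numpy as np

def ahp_weights(pairwise):
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                  # normalised priority weights
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}[n]  # Saaty's random index
    return w, ci / ri

# Example: factor 1 judged 3x as important as factor 2 and 5x as important as factor 3, etc.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print(np.round(weights, 3), round(cr, 3))         # CR < 0.1 is the usual consistency criterion
```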

Keywords: analytical hierarchy process, flood prone areas, fuzzy logic, geographic information system

Procedia PDF Downloads 379
4484 Streamwise Vorticity in the Wake of a Sliding Bubble

Authors: R. O’Reilly Meehan, D. B. Murray

Abstract:

In many practical situations, bubbles are dispersed in a liquid phase. Understanding these complex bubbly flows is therefore a key issue for applications such as shell and tube heat exchangers, mineral flotation and oxidation in water treatment. Although a large body of work exists for bubbles rising in an unbounded medium, that of bubbles rising in constricted geometries has received less attention. The particular case of a bubble sliding underneath an inclined surface is common to two-phase flow systems. The current study intends to expand this knowledge by performing experiments to quantify the streamwise flow structures associated with a single sliding air bubble under an inclined surface in quiescent water. This is achieved by means of two-dimensional, two-component particle image velocimetry (PIV), performed with a continuous wave laser and high-speed camera. PIV vorticity fields obtained in a plane perpendicular to the sliding surface show that there is significant bulk fluid motion away from the surface. The associated momentum of the bubble means that this wake motion persists for a significant time before viscous dissipation. The magnitude and direction of the flow structures in the streamwise measurement plane are found to depend on the point on its path through which the bubble enters the plane. This entry point, represented by a phase angle, affects the nature and strength of the vortical structures. This study reconstructs the vorticity field in the wake of the bubble, converting the field at different instants in time to slices of a large-scale wake structure. This is, in essence, Taylor's 'frozen turbulence' hypothesis. Applying this to the vorticity fields provides a pseudo three-dimensional representation from 2-D data, allowing for a more intuitive understanding of the bubble wake. This study provides insights into the complex dynamics of a situation common to many engineering applications, particularly shell and tube heat exchangers in the nucleate boiling regime.
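
A minimal sketch of how the out-of-plane vorticity is obtained from a planar PIV velocity field (central differences on a regular grid); the solid-body-rotation check at the end is purely illustrative.

```python
# Hedged sketch: out-of-plane vorticity omega_z = dv/dx - du/dy from a 2-D velocity field.
import numpy as np

def vorticity_z(u, v, dx, dy):
    """u, v on a regular grid with rows along y (axis 0) and columns along x (axis 1)."""
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    return dvdx - dudy

# Illustrative check with solid-body rotation: u = -omega*y, v = omega*x gives omega_z = 2*omega everywhere.
omega, dx = 1.5, 0.001
y, x = np.mgrid[0:32, 0:32] * dx
u, v = -omega * y, omega * x
print(np.allclose(vorticity_z(u, v, dx, dx), 2 * omega))  # True
```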

Keywords: bubbly flow, particle image velocimetry, two-phase flow, wake structures

Procedia PDF Downloads 377
4483 Isolation Enhancement of Compact Dual-Band Printed Multiple Input Multiple Output Antenna for WLAN Applications

Authors: Adham M. Salah, Tariq A. Nagem, Raed A. Abd-Alhameed, James M. Noras

Abstract:

Recently, the demand for wireless communications systems that cover more than one frequency band (multi-band) with a high data rate has increased for both fixed and mobile services. Multiple Input Multiple Output (MIMO) technology is one of the significant solutions for attaining these requirements and for achieving the maximum channel capacity of wireless communications systems. The main issue associated with MIMO antennas, especially in portable devices, is the compact space between the radiating elements, which limits their physical separation. This issue degrades the performance of MIMO antennas by increasing the mutual coupling between the radiating elements. In other words, the mutual coupling will be stronger if the radiating elements of the MIMO antenna are closer together. This paper presents a low-profile dual-band (2×1) MIMO antenna that works at 2.4 GHz, 5.3 GHz, and 5.8 GHz for wireless local area network (WLAN) applications. A neutralization line (NL) technique for enhancing the isolation has been used by introducing a strip line with a length of λg/4 at the isolation frequency (2.4 GHz) between the radiating elements. The overall dimensions of the antenna are 33.5 x 36 x 1.6 mm³. The fabricated prototype shows good agreement between the simulated and measured results. The antenna impedance bandwidths are 2.38–2.75 GHz and 4.4–6 GHz for the lower and upper bands, respectively; the reflection coefficient and mutual coupling are better than -25 dB in both the lower and higher bands. The MIMO antenna performance characteristics are reported in terms of the scattering parameters, envelope correlation coefficient (ECC), total active reflection coefficient, capacity loss, antenna gain, and radiation patterns. Analysis of these characteristics indicates that the design is appropriate for WLAN terminal applications.
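
For reference, a short sketch of the envelope correlation coefficient computed from two-port S-parameters (the common lossless-antenna approximation) is shown below; the paper may equally derive ECC from far-field patterns, and the S-parameter values used here are illustrative.

```python
# Hedged sketch: ECC of a 2-port MIMO antenna from S-parameters (lossless-antenna approximation).
import numpy as np

def ecc_from_s_params(s11, s21, s12, s22):
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

# Illustrative values around -25 dB matching / isolation (linear magnitude ~0.056):
s11 = s22 = 0.056 * np.exp(1j * np.deg2rad(40))
s21 = s12 = 0.056 * np.exp(1j * np.deg2rad(-120))
print(f"{ecc_from_s_params(s11, s21, s12, s22):.2e}")  # well below the usual 0.5 criterion
```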

Keywords: ECC, neutralization line, MIMO antenna, multi-band, mutual coupling, WLAN

Procedia PDF Downloads 133
4482 Ionophore-Based Materials for Selective Optical Sensing of Iron(III)

Authors: Natalia Lukasik, Ewa Wagner-Wysiecka

Abstract:

The development of selective, fast-responding, and economical sensors for the detection and determination of diverse ions is one of the most extensively studied areas due to its importance in the field of clinical, environmental, and industrial analysis. Among chemical sensors, ionophore-based optical sensors have gained vast popularity; in these sensors, the generated analytical signal is a consequence of the molecular recognition of the ion by the ionophore. The change of color occurring during host-guest interactions allows for quantitative analysis and for 'naked-eye' detection without the need for sophisticated equipment. An example of the application of such sensors is the colorimetric detection of iron(III) cations. Iron, as one of the most significant trace elements, plays a role in many biochemical processes. For these reasons, the development of reliable, fast, and selective methods of iron ion determination is highly demanded. Taking all of the above into account, a chromogenic amide derivative of 3,4-dihydroxybenzoic acid was synthesized, and its ability to recognize iron(III) was tested. To the best of the authors' knowledge (according to chemical abstracts), the obtained ligand has not been described in the literature so far. The catechol moiety was introduced into the ligand structure in order to mimic the action of naturally occurring siderophores, the iron(III)-selective receptors. The ligand-ion interactions were studied using spectroscopic methods: UV-Vis spectrophotometry and infrared spectroscopy. The spectrophotometric measurements revealed that the amide exhibits affinity to iron(III) in dimethyl sulfoxide and in fully aqueous solution, which is manifested by a change of color from yellow to green. Incorporation of the tested amide into a polymeric matrix (cellulose triacetate) ensured effective recognition of iron(III) at pH 3, with a detection limit of 1.58×10⁻⁵ M. For the obtained sensor material, parameters like the linear response range, response time, selectivity, and possibility of regeneration were determined. In order to evaluate the effect of the size of the sensing material on iron(III) detection, nanospheres (in the form of a nanoemulsion) containing the tested amide were also prepared. According to DLS (dynamic light scattering) measurements, the size of the nanospheres is 308.02 ± 0.67 nm. The working parameters of the nanospheres were determined and compared with those of the cellulose triacetate-based material. Additionally, for fast, qualitative experiments, test strips were prepared by adsorption of the amide solution on a glass microfiber material. The visual limit of detection of iron(III) at pH 3 by the test strips was estimated at the level of 10⁻⁴ M. In conclusion, the amide reported here, derived from 3,4-dihydroxybenzoic acid, proved to be an effective candidate for the optical sensing of iron(III) in fully aqueous solutions. N. L. kindly acknowledges financial support from the National Science Centre Poland, grant no. 2017/01/X/ST4/01680. The authors thank the Gdansk University of Technology for financial support under grant no. 032406.
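
A small sketch of how a detection limit of the kind quoted above can be estimated from a linear calibration (LOD = 3 x standard deviation of the blank / slope) follows; the concentrations, absorbances, and blank readings are invented for illustration and are not the published data.

```python
# Hedged sketch: limit of detection from a linear calibration curve, LOD = 3 * sigma_blank / slope.
import numpy as np

def limit_of_detection(conc, absorbance, blank_replicates):
    slope, _ = np.polyfit(conc, absorbance, 1)        # least-squares slope of absorbance vs. concentration
    sigma = np.std(blank_replicates, ddof=1)          # standard deviation of replicate blank readings
    return 3.0 * sigma / slope

# Illustrative calibration only (mol/L vs. absorbance):
conc = [0.0, 2e-5, 4e-5, 6e-5, 8e-5]
absorb = [0.010, 0.095, 0.180, 0.268, 0.352]
blanks = [0.010, 0.012, 0.009, 0.011, 0.010]
print(f"{limit_of_detection(conc, absorb, blanks):.2e} M")
```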

Keywords: ion-selective optode, iron(III) recognition, nanospheres, optical sensor

Procedia PDF Downloads 154
4481 Application of UAS in Forest Firefighting for Detecting Ignitions and 3D Fuel Volume Estimation

Authors: Artur Krukowski, Emmanouela Vogiatzaki

Abstract:

The article presents results from the AF3 project “Advanced Forest Fire Fighting”, focused on Unmanned Aircraft Systems (UAS)-based 3D surveillance and 3D area mapping using high-resolution photogrammetric methods from multispectral imaging, also taking advantage of the 3D scanning techniques from the SCAN4RECO project. We also present a proprietary embedded sensor system used for the detection of fire ignitions in the forest, based on a near-infrared scanner with weight and form factors that allow it to be easily deployed on standard commercial micro-UAVs, such as the DJI Inspire or Mavic. Results from real-life pilot trials in Greece, Spain, and Israel demonstrated the added value of using UAS for the precise and reliable detection of forest fires, as well as for high-resolution 3D aerial modeling for accurate quantification of the human resources and equipment required for firefighting.

Keywords: forest wildfires, surveillance, fuel volume estimation, firefighting, ignition detectors, 3D modelling, UAV

Procedia PDF Downloads 142
4480 Multi-Criterial Analysis: Potential Regions and Height of Wind Turbines, Rio de Janeiro, Brazil

Authors: Claudio L. M. Souza, Milton Erthal, Aldo Shimoya, Elias R. Goncalves, Igor C. Rangel, Allysson R. T. Tavares, Elias G. Figueira

Abstract:

The process of choosing a region for the implementation of wind farms involves factors such as the wind regime, economic viability, land value, topography, and accessibility. This work presents results obtained by multi-criteria decision analysis, and it establishes a hierarchy, regarding the installation of wind farms, among geopolitical regions in the state of ‘Rio de Janeiro’, Brazil: ‘Regiao Norte-RN’, ‘Regiao dos Lagos-RL’ and ‘Regiao Serrana-RS’. The wind regime map indicates only these three possible regions with an average annual wind speed above 6.0 m/s. The method applied was the Analytical Hierarchy Process (AHP), designed to prioritize and rank the three regions based on four criteria as follows: 1) potential of the site, with average wind speeds above 6.0 m s⁻¹; 2) average land value; 3) distribution and interconnection to the electric network, with the highest number of electricity stations; and 4) accessibility, considering the proximity and quality of highways and flat topography. The values of energy generation were calculated for wind turbines 50, 75, and 100 meters high, considering the site production (GWh/km²) and the annual production (GWh). The weight of each criterion was attributed by six engineers and by analysis of the road map, the map of the electric system, the map of the wind regime, and the annual land value report. The results indicated that in ‘RS’ the demand was estimated at 2,000 GWh, so a wind farm can operate efficiently with 50 m turbines. This region is mainly mountainous, with difficult access and lower land value. With respect to ‘RL’, the wind turbines have to be installed at a height of 75 m to reach a demand of 6,300 GWh. This region is very flat, with easy access and low land value. Finally, ‘RN’ was evaluated as very flat and with expensive land. In this case, wind turbines of 100 m can reach an annual production of 19,000 GWh. In this region, the coastal area was classified as having the greatest logistic, productivity, and economic potential.
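
A rough sketch of the kind of calculation behind comparing 50, 75, and 100 m hub heights is given below: a power-law wind shear profile plus the basic P = 1/2 * rho * A * v^3 * Cp estimate. All numbers (reference speed, shear exponent, rotor diameter, Cp) are illustrative assumptions, and using the mean speed cubed is a deliberate simplification of the full wind-speed distribution.

```python
# Hedged sketch: wind speed extrapolated to hub height, then a rough mean power estimate per turbine.
import math

def speed_at_height(v_ref, h_ref, h, alpha=0.14):
    """Power-law wind shear: v(h) = v_ref * (h / h_ref) ** alpha (alpha assumed)."""
    return v_ref * (h / h_ref) ** alpha

def mean_power_kw(v, rotor_diameter=80.0, cp=0.35, rho=1.225):
    """P = 0.5 * rho * A * v^3 * Cp; using the mean speed cubed is a crude simplification."""
    area = math.pi * (rotor_diameter / 2.0) ** 2
    return 0.5 * rho * area * v ** 3 * cp / 1e3

for hub in (50.0, 75.0, 100.0):
    v = speed_at_height(v_ref=6.0, h_ref=10.0, h=hub)
    print(hub, round(v, 2), round(mean_power_kw(v), 1))
```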

Keywords: AHP, renewable energy, wind energy

Procedia PDF Downloads 151
4479 Flicker Detection with Motion Tolerance for Embedded Camera

Authors: Jianrong Wu, Xuan Fu, Akihiro Higashi, Zhiming Tan

Abstract:

CMOS image sensors with a rolling shutter are used broadly in the digital cameras embedded in mobile devices. The rolling shutter suffers from flicker artifacts caused by fluorescent lamps, which can easily be observed. In this paper, the characteristics of illumination flicker in the motion case were analyzed, and two efficient detection methods based on matching fragment selection were proposed. According to the experimental results, our methods could achieve as high as 100% accuracy in static scenes and at least 97% in motion scenes.

Keywords: illumination flicker, embedded camera, rolling shutter, detection

Procedia PDF Downloads 420
4478 LiDAR Based Real Time Multiple Vehicle Detection and Tracking

Authors: Zhongzhen Luo, Saeid Habibi, Martin v. Mohrenschildt

Abstract:

Self-driving vehicles require a high level of situational awareness in order to maneuver safely when driving in real-world conditions. This paper presents a LiDAR-based real-time perception system that is able to process raw sensor data for multiple-target detection and tracking in a dynamic environment. The proposed algorithm is nonparametric and deterministic; that is, no assumptions or a priori knowledge about the input data are needed, and no initialization is required. Additionally, the proposed method works directly on the three-dimensional data generated by the LiDAR without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real-time clustering algorithm based on a radially bounded nearest neighbor (RBNN) search is applied. The Hungarian algorithm and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
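
A compact sketch of a radially bounded nearest neighbor (RBNN) style clustering step using a KD-tree is given below; it is not the authors' implementation, and the radius and minimum cluster size are illustrative parameters.

```python
# Hedged sketch: RBNN-style clustering of a LiDAR point cloud with a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def rbnn_cluster(points, radius=0.5, min_cluster_size=5):
    """Group points whose neighbors lie within `radius`; returns a label per point (-1 = noise)."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    visited = np.zeros(len(points), dtype=bool)
    current = 0
    for i in range(len(points)):
        if visited[i]:
            continue
        # flood-fill the connected component of point i under the radius relation
        stack, members = [i], []
        visited[i] = True
        while stack:
            p = stack.pop()
            members.append(p)
            for q in tree.query_ball_point(points[p], radius):
                if not visited[q]:
                    visited[q] = True
                    stack.append(q)
        if len(members) >= min_cluster_size:
            labels[members] = current
            current += 1
    return labels

# Usage: labels = rbnn_cluster(xyz_points)  # xyz_points: N x 3 array from one LiDAR sweep
```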

Keywords: lidar, segmentation, clustering, tracking

Procedia PDF Downloads 423
4477 An Inventory Management Model to Manage the Stock Level for Irregular Demand Items

Authors: Riccardo Patriarca, Giulio Di Gravio, Francesco Costantino, Massimo Tronci

Abstract:

An accurate inventory management policy plays a crucial role in several high-availability sectors. In these sectors, due to the high cost of spares and backorders, an (S-1, S) replenishment policy is necessary for high-availability items. The policy enables the shipment of an efficient substitute item any time the inventory size decreases by one. This policy can be modelled following the Multi-Echelon Technique for Recoverable Item Control (METRIC). METRIC is a system-based technique that allows defining the optimum stock level in a multi-echelon network, adopting measures in line with the decision-maker’s perspective. METRIC defines an availability-cost function with inventory costs and required service levels, using as inputs data about the demand trend, the supplying and maintenance characteristics of the network, and the budget/availability constraints. The traditional METRIC relies on the hypothesis that a Poisson distribution represents the demand distribution well in the case of items with a low failure rate. However, in this research, we will explore the effects of using a Poisson distribution to model the demand of low-failure-rate items characterized by an irregular demand trend. This characteristic of the demand is not included in the traditional METRIC formulation, leading to the need to revise it. Using the CV (Coefficient of Variation) and ADI (Average inter-Demand Interval) classification, we will define the inherent flaws of the Poisson-based METRIC for irregular demand items, defining an innovative ad hoc distribution which can better fit the irregular demands. This distribution will allow defining proper stock levels to reduce stocking and backorder costs due to the high irregularities in the demand trend. A case study in the aviation domain will clarify the benefits of this innovative METRIC approach.
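
A small sketch of the ADI/CV² classification step mentioned above is given below, using the common 1.32/0.49 cut-offs from the Syntetos-Boylan scheme; the demand history and the exact cut-offs used in the study are assumptions here.

```python
# Hedged sketch: ADI / CV^2 demand classification (smooth, intermittent, erratic, lumpy).
import numpy as np

def classify_demand(demand, adi_cut=1.32, cv2_cut=0.49):
    demand = np.asarray(demand, dtype=float)
    nonzero = demand[demand > 0]
    adi = len(demand) / len(nonzero)                      # average inter-demand interval
    cv2 = (nonzero.std(ddof=1) / nonzero.mean()) ** 2     # squared coefficient of variation of demand sizes
    if adi <= adi_cut and cv2 <= cv2_cut:
        category = "smooth"
    elif adi > adi_cut and cv2 <= cv2_cut:
        category = "intermittent"
    elif adi <= adi_cut:
        category = "erratic"
    else:
        category = "lumpy"
    return adi, cv2, category

print(classify_demand([0, 0, 3, 0, 0, 0, 1, 0, 0, 9, 0, 0]))  # irregular, spare-part style demand
```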

Keywords: METRIC, inventory management, irregular demand, spare parts

Procedia PDF Downloads 347
4476 Development of a Low-Cost Smart Insole for Gait Analysis

Authors: S. M. Khairul Halim, Mojtaba Ghodsi, Morteza Mohammadzaheri

Abstract:

Gait analysis is essential for diagnosing musculoskeletal and neurological conditions. However, current methods are often complex and expensive. This paper introduces a methodology for analysing gait parameters using a smart insole with a built-in accelerometer. The system measures stance time, swing time, step count, and cadence and wirelessly transmits data to a user-friendly IoT dashboard for centralized processing. This setup enables remote monitoring and advanced data analytics, making it a versatile tool for medical diagnostics and everyday usage. Integration with IoT enhances the portability and connectivity of the device, allowing for secure, encrypted data access over the Internet. This feature supports telemedicine and enables personalized treatment plans tailored to individual needs. Overall, the approach provides a cost-effective (almost 25 GBP), accurate, and user-friendly solution for gait analysis, facilitating remote tracking and customized therapy.
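
A minimal sketch of how stance time, swing time, step count, and cadence can be derived once heel-strike and toe-off events have been detected in the accelerometer signal; the event-detection step itself is omitted, the timestamps are invented, and a symmetric gait is assumed for the cadence estimate.

```python
# Hedged sketch: temporal gait parameters from detected heel-strike (HS) and toe-off (TO) event times.
import numpy as np

def gait_parameters(heel_strikes, toe_offs):
    """Event timestamps in seconds for one foot, alternating HS -> TO -> HS -> ..."""
    hs = np.asarray(heel_strikes, dtype=float)
    to = np.asarray(toe_offs, dtype=float)
    stance = to - hs[:len(to)]                          # foot on the ground: HS to the following TO
    swing = hs[1:len(to) + 1] - to                      # foot in the air: TO to the next HS
    step_count = len(hs)
    cadence = 2 * 60.0 * (len(hs) - 1) / (hs[-1] - hs[0])   # steps/min, assuming a symmetric gait
    return stance.mean(), swing.mean(), step_count, cadence

# Illustrative timestamps (s):
hs = [0.00, 1.10, 2.21, 3.30]
to = [0.68, 1.79, 2.90]
print(gait_parameters(hs, to))
```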

Keywords: gait analysis, IoT, smart insole, accelerometer sensor

Procedia PDF Downloads 17
4475 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more and more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the power grid, and their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), STATCOM (Static Synchronous Compensator), and so on, which can help to deal with the problem above. Compared with traditional equipment such as generators, the new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control ability and respond faster, but they are too expensive to use widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional control equipment and new controllable equipment on both the time and space scales, a coordinated optimizing control method within a multi-time-space frame is proposed in this paper to bring both kinds of advantages into play, which can improve both control ability and economic efficiency. Firstly, the coordination of different space scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the power grid. With the generator, FSC (Fixed Series Compensation), and TCSC, the coordination method for a two-layer regional power grid versus its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. The analysis shows that the interface power flow can be controlled by the generator, and the specific line power flow between the two-layer regions can be adjusted by the FSC and TCSC. The smaller the interface power flow adjusted by the generator, the bigger the control margin of the TCSC; however, the total consumption of the generator is then much higher. Secondly, the coordination of different time scales is studied to balance the total consumption of the generator and the control margin of the TCSC, so that the minimum control cost can be acquired. The coordination method for two-layer ultra-short-term correction versus AGC (Automatic Generation Control) is studied with the generator, FSC, and TCSC. The optimal control model is established, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-time-space scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. The correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to the decrease of control cost and will provide a reference for subsequent studies in this field.

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 267
4474 Shared Decision-Making in Holistic Healthcare: Integrating Evidence-Based Medicine and Values-Based Medicine

Authors: Ling-Lang Huang

Abstract:

Research Background: Historically, the evolution of medicine has not only aimed to extend life but has also inadvertently introduced suffering in the process of maintaining life, presenting a contemporary challenge. We must carefully assess the conflict between the length of life and the quality of living. Evidence-Based Medicine (EBM) exists primarily to ensure the quality of cures. However, EBM alone does not fulfill our ultimate medical goals; we must also evaluate Value-Based Medicine (VBM) to find the best treatment for patients. Research Methodology: We can attempt to integrate EBM with VBM. Within the five steps of EBM, the first three steps (Ask—Acquire—Appraise) focus on the physical aspect of humans. However, in the fourth and fifth steps (Apply—Assess), the focus shifts from the physical to applying evidence-based treatment to the patient and assessing its effectiveness, considering a holistic approach to the individual. To consider VBM for patients, we can divide the process into three steps: The first step is "awareness," recognizing that each patient inhabits a different life-world and possesses unique differences. The second step is "integration," akin to the hermeneutic concept of the Fusion of Horizons. This means being aware of differences and also understanding the origins of these patient differences. The third step is "respect," which involves setting aside our adherence to medical objectivity and scientific rigor to respect the ultimate healthcare decisions made by individuals regarding their lives. Discussion and Conclusion: After completing these three steps of VBM, we can return to the fifth step of EBM: Assess. Our assessment can now transcend the physical treatment focus of the initial steps to align with a holistic care philosophy.

Keywords: shared decision-making, evidence-based medicine, values-based medicine, holistic healthcare

Procedia PDF Downloads 52
4473 Elastoplastic Modified Stillinger Weber-Potential Based Discretized Virtual Internal Bond and Its Application to the Dynamic Fracture Propagation

Authors: Dina Kon Mushid, Kabutakapua Kakanda, Dibu Dave Mbako

Abstract:

The failure of a material usually involves elastoplastic deformation and fracturing. Continuum mechanics can deal effectively with plastic deformation by using a yield function and the flow rule. At the same time, it has some limitations in dealing with the fracture problem, since it is a theory based on the continuous field hypothesis. The lattice model can simulate the fracture problem very well, but it is inadequate for dealing with plastic deformation. Based on the discretized virtual internal bond model (DVIB), this paper proposes a lattice model that can account for plasticity. DVIB is a lattice method that considers the material to comprise bond cells. Each bond cell may have any geometry with a finite number of bonds. A two-body or multi-body potential can characterize the strain energy of a bond cell. The two-body potential leads to a fixed Poisson ratio, while the multi-body potential can overcome this limitation. In the present paper, the modified Stillinger-Weber (SW) potential, a multi-body potential, is employed to characterize the bond cell energy. The SW potential is composed of two parts. One part is the two-body potential that describes the interatomic interactions between particles. The other is the three-body potential that represents the bond angle interactions between particles. Because the SW interaction can represent both the bond stretch and the bond angle contribution, the SW potential-based DVIB (SW-DVIB) can represent various Poisson ratios. To embed plasticity in the SW-DVIB, the plasticity is considered in the two-body part of the SW potential. This is done by reducing the bond stiffness to a lower level once the bond reaches the yielding point. Before the bond reaches the yielding point, the bond is elastic. When the bond deformation exceeds the yielding point, the bond stiffness is softened to a lower value. When unloaded, irreversible deformation occurs. When the bond length increases to a critical value, termed the failure bond length, the bond fails. The critical failure bond length is related to the cell size and the macro fracture energy. By this means, the fracture energy is conserved, so that the cell-size sensitivity problem is relieved to a great extent. In addition, the plasticity and the fracture are unified at the bond level. To make the DVIB able to simulate different Poisson ratios, the three-body part of the SW potential is kept elasto-brittle. The bond angle can bear a moment as long as the bond angle increment remains smaller than a critical value. By this method, the SW-DVIB can simulate the plastic deformation and the fracturing process of materials with various Poisson ratios. The elastoplastic SW-DVIB is used to simulate the plastic deformation of a material, the plastic fracturing process, and the plastic deformation of a tunnel. It has been shown that the current SW-DVIB method is straightforward in simulating both elastoplastic deformation and plastic fracture.
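
A toy sketch of the elastoplastic two-body bond response described above (elastic up to a yield stretch, reduced stiffness beyond it, zero force once the failure stretch is exceeded) under monotonic loading; the stiffness values and thresholds are illustrative, and unloading/reloading paths are not modelled here.

```python
# Hedged sketch: bilinear elastoplastic bond force with failure, for monotonic loading only.
def bond_force(stretch, k_elastic=1.0, k_plastic=0.2, yield_stretch=0.01, failure_stretch=0.05):
    """Returns the bond force; 0.0 once the failure stretch is exceeded (broken bond)."""
    if stretch >= failure_stretch:
        return 0.0
    if stretch <= yield_stretch:
        return k_elastic * stretch
    return k_elastic * yield_stretch + k_plastic * (stretch - yield_stretch)

for s in (0.005, 0.02, 0.06):   # elastic, softened (plastic), and broken regimes
    print(s, bond_force(s))
```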

Keywords: lattice model, discretized virtual internal bond, elastoplastic deformation, fracture, modified stillinger-weber potential

Procedia PDF Downloads 98
4472 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization

Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed

Abstract:

Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by various factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Omar Hadj-Sahraoui et al., 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of interferograms. We develop a deep learning-based model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR approach with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
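
A basic sketch of the classic Lee filter used in the hybrid scheme is given below; the window size and the global noise-variance estimate are illustrative simplifications, and the CNN component is not reproduced here.

```python
# Hedged sketch: classic Lee filter for speckle reduction on an SAR intensity image.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, window=7):
    """filtered = mean + k * (img - mean), with k = var / (var + noise_var)."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)          # local variance within the window
    noise_var = np.mean(var)                            # crude global estimate of the speckle noise variance
    k = var / (var + noise_var + 1e-12)
    return mean + k * (img - mean)

# Usage: despeckled = lee_filter(sar_intensity)  # sar_intensity: 2-D array from one SAR band
```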

Keywords: CNN-SAR, Lee Filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction

Procedia PDF Downloads 12
4471 Consideration of Failed Fuel Detector Location through Computational Flow Dynamics Analysis on Primary Cooling System Flow with Two Outlets

Authors: Sanghoon Bae, Hanju Cha

Abstract:

The failed fuel detector (FFD) in a research reactor is a crucial instrument for detecting, at an early stage, anomalies from failed fuels around the primary cooling system (PCS) outlet, prior to the decay tank. The FFD is considered a mandatory sensor to ensure the integrity of the fuel assemblies and to mitigate the consequences of a failed fuel accident. For the FFDs to function effectively, their location should be determined by considering the effect of the coolant flow around the two outlets. For this, a computational flow dynamics (CFD) analysis should first be performed to establish how the coolant outlet flow, including radioactive materials from failed fuels, is mixed and discharged through the outlet plenum within a certain number of seconds. The analysis result shows that the outlet flow is well mixed regardless of the position of the failed fuel and ultimately illustrates the effect of the detector location.

Keywords: computational flow dynamics (CFD), failed fuel detector (FFD), fresh fuel assembly (FFA), spent fuel assembly (SFA)

Procedia PDF Downloads 240
4470 The Tourism Pattern Based on Lifestyle: A Case Study of Suzhou City in China

Authors: Ling Chen, Lanyan Peng

Abstract:

In the new round of institutional reform of the State Council, the Ministry of Culture and the Ministry of Tourism were merged into a new department, the Ministry of Culture and Tourism, which embodies the idea of the integrated development of the cultural and tourism industries. At the same time, domestic tourists pay more attention to the tourism experience and tourism quality. Tourism patterns have changed from the sightseeing mode of individual scenic spots to the lifestyle mode of experiencing the cultural atmosphere of the tourist destination. Therefore, this paper focuses on the tourism pattern based on lifestyle and studies the development status, content, and implementation measures of this tourism pattern. As the lifestyle-based tourism pattern integrates the cultural and tourism industries in depth, tourists can experience the living atmosphere, living conditions, and quality of life of the tourist destination, and deeply understand the urban cultural connotation during the trip. Suzhou has taken a series of measures to build up a lifestyle-based tourism pattern, 'Suzhou life' tourism, including regional tourism planning, the integration of cultural resources, the construction of an urban atmosphere, and the upgrading of infrastructure. 'Suzhou life' tourism is based on Suzhou food (cooked wheaten food, dim sum, specialty snacks), tourist attractions (Suzhou gardens, the ancient city), and characteristic recreational activities (appreciating Kun opera, enjoying Suzhou Pingtan, tea drinking). The continuous integration of these three components meets the spiritual and cultural needs of tourists and upgrades the lifestyle-based tourism pattern. Finally, the paper puts forward suggestions for tourism pattern planning.

Keywords: tourism pattern, lifestyle, integration of cultural and tourism industries, Suzhou life

Procedia PDF Downloads 239
4469 Implementation of Integrated Multi-Channel Analysis of Surface Waves and Waveform Inversion Techniques for Seismic Hazard Estimation with Emphasis on Associated Uncertainty: A Case Study at Zafarana Wind Turbine Towers Farm, Egypt

Authors: Abd El-Aziz Khairy Abd El-Aal, Yuji Yagi, Heba Kamal

Abstract:

In this study, an integrated multi-channel analysis of surface waves (MASW) technique is applied to explore the geotechnical parameters of the subsurface layers at the Zafarana wind farm. Moreover, a seismic hazard procedure based on the extended deterministic technique is used to estimate the seismic hazard load for the investigated area. The study area includes many active fault systems along the Gulf of Suez that cause many moderate and large earthquakes. Overall, the seismic activity of the area has recently become better understood following the use of new waveform inversion methods and software to develop accurate focal mechanism solutions for recently recorded earthquakes around the studied area. These earthquakes resulted in major stress drops in the Eastern Desert and the Gulf of Suez area. These findings have helped to reshape the understanding of the seismotectonic environment of the Gulf of Suez area, which is a perplexing tectonic domain. Based on the newly collected information and data, this study uses an extended deterministic approach to re-examine the seismic hazard for the Gulf of Suez region, particularly the wind turbine towers at the Zafarana Wind Farm and its vicinity. Alternative seismic source and magnitude-frequency relationships were combined with various indigenous attenuation relationships, adapted within a logic tree formulation, to quantify and project the regional exposure on a set of hazard maps. We select two desired exceedance probabilities (10% and 20%) with which any of the applied scenarios may exceed the largest median ground acceleration. The ground motion was calculated at the 50th and 84th percentile levels.

Keywords: MASW, seismic hazard, wind turbine towers, Zafarana wind farm

Procedia PDF Downloads 403
4468 A Survey of Baseband Architecture for Software Defined Radio

Authors: M. A. Fodha, H. Benfradj, A. Ghazel

Abstract:

This paper is a survey of recent works that propose baseband processor architectures for software-defined radio. A classification of the different approaches is proposed. The performance of each architecture is also discussed in order to clarify the suitable approaches that meet software-defined radio constraints.

Keywords: multi-core architectures, reconfigurable architectures, software defined radio, baseband processor

Procedia PDF Downloads 475