Search results for: satellite images
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2918

1148 Optimized and Secured Digital Watermarking Using Fuzzy Entropy, Bezier Curve and Visual Cryptography

Authors: R. Rama Kishore, Sunesh

Abstract:

Recent growth in the use of the internet for different purposes poses a serious threat to the copyright protection of digital images. Digital watermarking can be used to address this problem. This paper presents a detailed review of different watermarking techniques and the latest trends in secured, robust and imperceptible watermarking. It also discusses the optimization techniques used in the field of watermarking to improve the robustness and imperceptibility of a method, and the measures used to evaluate the performance of a watermarking algorithm. Finally, this paper proposes a watermarking algorithm using (2, 2)-share visual cryptography and a Bezier curve based algorithm to improve the security of the watermark. The proposed method uses a fractional transform to improve the robustness of the copyright protection, and the algorithm is optimized using fuzzy entropy for better results.
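As an aside on the (2, 2)-share construction: an XOR-based variant can be sketched in a few lines. This is a generic illustration, not the authors' exact scheme; `make_shares` and `stack` are hypothetical names.

```python
import secrets

def make_shares(watermark_bits):
    """Split a binary watermark into two shares; either share alone
    reveals nothing, and XOR-stacking both recovers the watermark."""
    share1 = [secrets.randbits(1) for _ in watermark_bits]
    share2 = [w ^ s for w, s in zip(watermark_bits, share1)]
    return share1, share2

def stack(share1, share2):
    """Recover the watermark by combining (XOR-stacking) the shares."""
    return [a ^ b for a, b in zip(share1, share2)]
```

Because share1 is uniformly random, each share on its own is statistically independent of the watermark.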

Keywords: digital watermarking, fractional transform, visual cryptography, Bezier curve, fuzzy entropy

Procedia PDF Downloads 368
1147 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India

Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel

Abstract:

Planning and drawing are important aspects of civil engineering. Computer-based urban models are used for testing theories about spatial location and the interaction between land uses and related activities. The planner's primary interest is in creating 3D models of buildings and obtaining the terrain surface for urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3,590,000 m² between North Latitude 23°9'5''N and 23°10'55''N and East Longitude 72°42'2''E and 72°42'16''E. To develop 3D city models of GIFT City, the base map of the city was collected from the GIFT office. A Differential Global Positioning System (DGPS) was used to collect Ground Control Points (GCPs) from the field. The GCPs were used for the registration of the base map in QGIS. The registered map was projected onto the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be constructed were collected from the GIFT office and placed in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m x 30 m) grid data were used to generate the terrain of GIFT City, and the Google Satellite Map was placed in the background to get the exact location of the city. Various plugins and tools in QGIS were used to convert the raster layer of the base map into a 3D model, and the fly-through tool was used for capturing and viewing the entire area of the city in 3D. This paper discusses all these techniques and their usefulness in 3D city model creation from the GCPs, base map, SRTM data and QGIS.
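The height-attribute step can be illustrated outside QGIS: extruding a 2D footprint polygon by its height attribute yields the 3D prism used in block-model city visualization. A minimal sketch with a hypothetical `extrude_footprint` helper, not part of the QGIS workflow itself:

```python
def extrude_footprint(footprint, height):
    """Turn a 2D building footprint (list of (x, y) vertices) into a
    3D prism: a base ring at ground level and a top ring lifted by
    the building's height attribute."""
    base = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, float(height)) for x, y in footprint]
    return base, top
```

Walls would then be formed by joining corresponding base and top vertices, which is essentially what a 3D viewer does with the attribute table's height column.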

Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM

Procedia PDF Downloads 248
1146 Image Compression Based on Regression SVM and Biorthogonal Wavelets

Authors: Zikiou Nadia, Lahdir Mourad, Ameur Soltane

Abstract:

In this paper, we propose an effective method for image compression based on support vector regression (SVR), with three different kernels, and the biorthogonal 2D discrete wavelet transform. SVR can learn the dependency in training data and represent the original data with fewer training points (support vectors), eliminating redundancy. A biorthogonal wavelet is used to transform the image, and the coefficients obtained are then trained with SVMs using different kernels (Gaussian, polynomial and linear). Run-length and arithmetic coders are used to encode the support vectors and their corresponding weights obtained from the SVM regression. The peak signal-to-noise ratios (PSNR) and compression ratios of several test images, compressed with our algorithm using the different kernels, are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method gains much improvement.
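The transform stage can be sketched with a one-level 2D Haar DWT, used here only as a simple stand-in for the biorthogonal wavelet actually employed in the paper; `haar_dwt2` is an illustrative helper name:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT (a stand-in for the biorthogonal
    transform): returns the approximation (LL) and the three
    detail subbands (LH, HL, HH) of an even-sized image."""
    a = img.astype(float)
    # Filter along rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Filter along columns of each result.
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh
```

In the paper's pipeline, subband coefficients like these would be the training data fed to the SVR.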

Keywords: image compression, 2D discrete wavelet transform (DWT-2D), support vector regression (SVR), SVM Kernels, run-length, arithmetic coding

Procedia PDF Downloads 382
1145 Using Seismic and GPS Data for Hazard Estimation in Some Active Regions in Egypt

Authors: Abdel-Monem Sayed Mohamed

Abstract:

Egypt's rapidly growing development is accompanied by rising standards of living, particularly in its urban areas. However, there is limited experience in quantifying the sources of risk in Egypt and in designing efficient strategies to mitigate the serious impacts of earthquakes. Historical accounts and recent instrumental records identify several seismo-active regions in Egypt where significant earthquakes have occurred. Owing to their special tectonic setting, the Aswan, Greater Cairo, Red Sea and Sinai Peninsula regions are territories of high seismic risk, which have to be monitored with up-to-date technologies. Investigation and interpretation of the seismic events allow the seismic hazard to be evaluated for disaster prevention and for the safety of densely populated regions and vital national projects such as the High Dam. In addition, recent crustal movements are monitored with GPS, the most powerful technique of satellite geodesy, through geodetic networks covering these seismo-active regions. The results from the two data sets are compared and combined in order to determine the main characteristics of the deformation and to estimate the hazard for the specified regions. The final output compiled from the seismological and geodetic analyses sheds light on the geodynamic regime of these seismo-active regions and places Aswan and Greater Cairo in the lowest class according to horizontal crustal strain classifications. This work will serve as a basis for the development of so-called catastrophe models and can be further used for catastrophic risk management. It also attempts to evaluate the risk of large catastrophic losses within the important regions, including the High Dam, strategic buildings and archaeological sites. Studies of possible earthquake and loss scenarios are a critical issue for decision making in insurance as part of mitigation measures.
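One standard ingredient of such hazard studies, hinted at by the b-value keyword below, is the Gutenberg-Richter b-value. A common maximum-likelihood estimator (Aki's formula) can be sketched as follows; this is a generic estimator, not necessarily the authors' procedure:

```python
import math

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood b-value for an earthquake
    catalogue, using magnitudes at or above completeness m_c:
    b = log10(e) / (mean(M) - m_c)."""
    m = [x for x in mags if x >= m_c]
    return math.log10(math.e) / (sum(m) / len(m) - m_c)
```

A larger b-value indicates relatively more small events than large ones; regional differences in b feed into the hazard classification.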

Keywords: b-value, Gumbel distribution, seismic and GPS data, strain parameters

Procedia PDF Downloads 460
1144 Modelling Affordable Waste Management Solutions for India

Authors: Pradip Baishya, D. K. Mahanta

Abstract:

Rapid and unplanned urbanisation in most cities of India has progressively worsened the problem of managing municipal waste in the past few years. With insufficient infrastructure and funds, municipalities in most cities are struggling to cope with the pace of waste generation, and open dumping is widely practised as the cheaper option. Scientific disposal of waste at such a large scale, with the elements of segregation, recycling, landfill and incineration, involves sophisticated and expensive plants. In an effort to find affordable and simple solutions to this pressing issue of waste disposal, a semi-mechanized plant has been designed around the concept of a zero-waste community. The fabrication of the waste management unit was carried out by local skilled workers from locally available materials. A residential colony in the city of Guwahati was chosen, which can be seen as representative of most Indian cities in terms of size and the key issues surrounding waste management. Scientific management and disposal of waste on site is carried out on the principle of reduce, reuse and recycle, from segregation to composting. It is a local community participatory model which involves all stakeholders in the process, namely rag pickers, residents, the municipality and local industry. Studies were conducted to verify that the plant can act as a revenue-earning, self-sustaining model in the long term. The current working efficiency of the plant for segregation was found to be 1 kg per minute. Identifying bottlenecks in the success of the model, and gathering data on the efficiency of the plant and the economics of its fabrication, were part of the study. Similar satellite waste management plants could potentially supplement the waste management systems of municipalities in similarly sized cities in India or South East Asia facing similar waste disposal issues.

Keywords: affordable, rag pickers, recycle, reduce, reuse, segregation, zero waste

Procedia PDF Downloads 305
1143 Shock Formation for Double Ramp Surface

Authors: Abdul Wajid Ali

Abstract:

Supersonic flight promises speed, but the design of the air inlet faces an obstacle: shock waves. They disturb the airflow in mixed-compression inlets, which reduces engine performance. Our research investigates this using supersonic wind tunnels and schlieren imaging to reveal the complex interplay between shock waves and airflow. The findings show clear patterns of shock wave formation influenced by the internal and external compression surfaces. We examined the boundary layer, the slow-moving air near the inlet walls, and its interaction with shock waves. In addition, the study emphasizes the dependence of shock wave behaviour on the Mach number, which highlights the need for adaptive models. This knowledge is key to optimizing mixed-compression inlets, paving the way for more powerful and efficient supersonic vehicles. Future engineers can use it to improve existing designs and explore innovative configurations for next-generation supersonic applications.
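The oblique-shock behaviour on a double ramp is governed by the theta-beta-Mach relation; a minimal numerical sketch (standard gas-dynamics textbook material, not the authors' code) that finds the weak-shock wave angle for a given deflection:

```python
import math

def theta_from_beta(mach, beta, gamma=1.4):
    """Flow deflection angle theta for an oblique shock with wave
    angle beta (theta-beta-Mach relation); angles in radians."""
    num = mach**2 * math.sin(beta)**2 - 1.0
    den = mach**2 * (gamma + math.cos(2.0 * beta)) + 2.0
    return math.atan(2.0 / math.tan(beta) * num / den)

def weak_shock_beta(mach, theta, gamma=1.4, steps=20000):
    """Smallest wave angle whose deflection matches theta (the weak
    branch), found by a simple upward scan from the Mach angle."""
    beta = math.asin(1.0 / mach)          # Mach angle: theta = 0 here
    step = (math.pi / 2.0 - beta) / steps
    while theta_from_beta(mach, beta, gamma) < theta:
        beta += step
    return beta
```

For a double-wedge surface each ramp angle gives its own deflection, so this relation is applied once per ramp (assuming the second shock sees the post-shock Mach number).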

Keywords: oblique shock formation, boundary layer interaction, schlieren images, double wedge surface

Procedia PDF Downloads 68
1142 Impact of Geomagnetic Storm on Ionosphere

Authors: Affan Ahmed

Abstract:

This research investigates the impact of the geomagnetic storm occurring from April 22 to April 26, 2023, on the Earth’s ionosphere, with a focus on analyzing specific ionospheric parameters to understand the storm's effects on ionospheric stability and GNSS signal propagation. Geomagnetic storms, caused by intensified solar wind-magnetosphere interactions, can significantly disturb ionospheric conditions, impacting electron density, Total Electron Content (TEC), and thermospheric composition. Such disturbances are particularly relevant to satellite-based navigation and communication systems, as fluctuations in ionospheric parameters can degrade signal integrity and reliability. In this study, data were obtained from multiple sources, including OMNIWeb for parameters like Dst, Kp, Bz, Electric Field, and solar wind pressure, GUVI for O/N₂ ratio maps, and TEC data from low-, mid-, and high-latitude stations available on the IONOLAB website. Additional Equatorial Electrojet (EEJ) and geomagnetic data were acquired from INTERMAGNET. The methodology involved comparing storm-affected data from April 22 to April 26 with quiet days in April 2023, using statistical and wavelet analysis to assess variations in parameters like TEC, O/N₂ ratio, and geomagnetic indices. The results show pronounced fluctuations in TEC and other ionospheric parameters during the main phase of the storm, with spatial variations observed across latitudes, highlighting the global response of the ionosphere to geomagnetic disturbances. The findings underline the storm’s significant impact on ionospheric composition, particularly in mid- and high-latitude regions, which correlates with increased GNSS signal interference in these areas. This study contributes to understanding the ionosphere’s response to geomagnetic activity, emphasizing the need for robust models to predict and mitigate space weather effects on GNSS-dependent technologies.
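The quiet-day comparison step described above can be sketched generically as the percent deviation of storm-time TEC from a quiet-day baseline; function and variable names here are illustrative, not from the study:

```python
import numpy as np

def tec_disturbance(storm_tec, quiet_tec_days):
    """Percent deviation of storm-time TEC from the mean of several
    quiet days at the same local times - the basic comparison made
    before any statistical or wavelet analysis."""
    quiet_mean = np.mean(quiet_tec_days, axis=0)
    return 100.0 * (np.asarray(storm_tec, float) - quiet_mean) / quiet_mean
```

Positive values indicate storm-time TEC enhancement, negative values depletion, relative to the quiet baseline.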

Keywords: geomagnetic storms, ionospheric disturbances, space weather effects, magnetosphere-ionosphere coupling

Procedia PDF Downloads 12
1141 Providing a Secure Hybrid Method for Graphical Password Authentication to Prevent Shoulder Surfing, Smudge and Brute Force Attack

Authors: Faraji Sepideh

Abstract:

Nowadays, the purchase rate of smart devices is increasing, and user authentication is one of the important issues in information security. Strong alphanumeric passwords are difficult to memorize, so owners write them down on paper or save them in a computer file. In addition, text passwords have their own flaws and are vulnerable to attacks. A graphical password, in which users choose images as their password, can be used as an alternative to an alphanumeric password. This type of password is easier to use and memorize and also more secure than previous password types. In this paper, we design a more secure graphical password system to prevent shoulder surfing, smudge and brute force attacks. The scheme is a combination of the two types of graphical password: recognition based and cued-recall based. The usability and security of the proposed scheme are evaluated in the conclusion.
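A hybrid recognition-plus-cued-recall check might be sketched as below. This is a simplified illustration under assumed design choices (hashed image sequence, pixel-tolerance click matching), not the paper's actual scheme:

```python
import hashlib

def enroll(image_ids, click_points):
    """Store a hash of the chosen image sequence (recognition stage)
    plus the registered click points (cued-recall stage)."""
    digest = hashlib.sha256("|".join(image_ids).encode()).hexdigest()
    return {"hash": digest, "points": click_points}

def verify(record, image_ids, clicks, tol=10):
    """Accept only if the image sequence hash matches and every click
    lands within tol pixels of its registered point."""
    seq_ok = hashlib.sha256("|".join(image_ids).encode()).hexdigest() == record["hash"]
    clicks_ok = len(clicks) == len(record["points"]) and all(
        abs(cx - px) <= tol and abs(cy - py) <= tol
        for (cx, cy), (px, py) in zip(clicks, record["points"]))
    return seq_ok and clicks_ok
```

A real scheme would add per-session randomization of the displayed image grid, which is what defeats shoulder surfing and smudge traces.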

Keywords: brute force attack, graphical password, shoulder surfing attack, smudge attack

Procedia PDF Downloads 162
1140 Open-Source YOLO CV For Detection of Dust on Solar PV Surface

Authors: Jeewan Rai, Kinzang, Yeshi Jigme Choden

Abstract:

The accumulation of dust on solar panels impacts their overall efficiency and the amount of energy they produce. While various techniques exist for detecting dust in order to schedule cleaning, many of these methods use MATLAB image processing tools and other licensed software, which can be financially burdensome. This study investigates the efficiency of a free, open-source computer vision library using the YOLO algorithm. The proposed approach was tested on images of solar panels with varying dust levels through an experimental setup. The findings illustrate the effectiveness of the YOLO-based image classification method and the overall dust detection approach, with an accuracy of 90% in distinguishing between clean and dusty panels. This open-source solution provides a cost-effective and accessible alternative to commercial image processing tools, offering a way to optimize solar panel maintenance and enhance energy production.
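The final clean/dusty decision from YOLO-style detections can be sketched as a simple confidence-threshold rule; the `(class_name, confidence)` detection format and the class name `"dust"` are assumptions for illustration, not taken from the paper:

```python
def classify_panel(detections, conf_threshold=0.5):
    """Decide clean vs dusty from YOLO-style detections, each given
    as a (class_name, confidence) pair; the panel is flagged dusty
    when any 'dust' detection clears the confidence threshold."""
    dusty = any(name == "dust" and conf >= conf_threshold
                for name, conf in detections)
    return "dusty" if dusty else "clean"
```

The threshold trades false alarms (unnecessary cleaning) against missed dust, and would be tuned on a labelled validation set.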

Keywords: YOLO, openCV, dust detection, solar panels, computer vision, image processing

Procedia PDF Downloads 36
1139 Metal-Semiconductor-Metal Photodetector Based on Porous In0.08Ga0.92N

Authors: Saleh H. Abud, Z. Hassan, F. K. Yam

Abstract:

The characteristics of an MSM photodetector based on a porous In0.08Ga0.92N thin film are reported. Nanoporous structures of n-type In0.08Ga0.92N/AlN/Si thin films were synthesized by photoelectrochemical (PEC) etching in a 1:4 HF:C2H5OH solution for 15 min. The structural and optical properties of the pre- and post-etched thin films were investigated. Field emission scanning electron microscope and atomic force microscope images showed that the pre-etched thin film has a sufficiently smooth surface over a large region and that the roughness increases for the porous film. A blue shift was observed in the photoluminescence emission peak at 300 K for the porous sample, and the photoluminescence intensity of the porous film indicated that its optical properties were enhanced. High work function metals (Pt and Ni) were deposited as metal contacts on the porous films. The rise and recovery times of the devices were investigated under 390 nm chopped light. Finally, the sensitivity and quantum efficiency were also studied.
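Extracting a rise time from a measured photocurrent transient is commonly done with a 10%-90% criterion; a generic sketch of that analysis step (not the authors' code), with `rise_time` as an illustrative helper:

```python
import numpy as np

def rise_time(t, current):
    """10%-90% rise time of a photocurrent transient: time between
    the first samples crossing 10% and 90% of the signal swing."""
    i = np.asarray(current, float)
    lo, hi = i.min(), i.max()
    t10 = t[np.argmax(i >= lo + 0.1 * (hi - lo))]
    t90 = t[np.argmax(i >= lo + 0.9 * (hi - lo))]
    return t90 - t10
```

The recovery time would be computed the same way on the falling edge of the chopped-light cycle.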

Keywords: porous InGaN, photoluminescence, MSM photodetector, atomic force microscopy

Procedia PDF Downloads 489
1138 Text Based Shuffling Algorithm on Graphics Processing Unit for Digital Watermarking

Authors: Zayar Phyo, Ei Chaw Htoon

Abstract:

In a New-LSB based steganography method, the Fisher-Yates algorithm is used to permute an existing array randomly. However, that algorithm becomes slower and suffers memory overflow problems when processing images of large dimensions. The text-based shuffling algorithm therefore selects only the necessary pixels, at specific positions of an image, as locations for the hidden characters, according to the length of the input text. In this paper, an enhanced text-based shuffling algorithm powered by the GPU is presented to achieve still better performance. The proposed algorithm employs the OpenCL Aparapi framework along with an XORShift kernel serving as the pseudo-random number generator (PRNG); the PRNG produces random numbers inside the OpenCL kernel. Experiments show that the proposed algorithm, run on a GPU, achieves faster processing speed and better efficiency without disruption from unnecessary operating system tasks.
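The two building blocks named here, an XORShift PRNG driving a Fisher-Yates-style selection of pixel positions, can be sketched on the CPU as follows. This is a plain-Python illustration of the idea, not the OpenCL/Aparapi kernel itself:

```python
def xorshift32(seed):
    """Marsaglia's 32-bit XORShift PRNG, the family used inside the
    OpenCL kernel; yields an endless stream of pseudo-random words."""
    state = seed & 0xFFFFFFFF
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

def pick_positions(n_pixels, n_chars, seed=2463534242):
    """Fisher-Yates-style partial shuffle: select n_chars distinct
    pixel positions out of n_pixels, one hiding spot per character,
    without permuting the whole array."""
    idx = list(range(n_pixels))
    rng = xorshift32(seed)
    for i in range(n_chars):
        j = i + (next(rng) % (n_pixels - i))
        idx[i], idx[j] = idx[j], idx[i]
    return idx[:n_chars]
```

Stopping the shuffle after `n_chars` swaps is what avoids permuting (and allocating) the full pixel index array for large images.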

Keywords: LSB based steganography, Fisher-Yates algorithm, text-based shuffling algorithm, OpenCL, XORShift kernel

Procedia PDF Downloads 151
1137 Features Vector Selection for the Recognition of the Fragmented Handwritten Numeric Chains

Authors: Salim Ouchtati, Aissa Belmeguenai, Mouldi Bedda

Abstract:

In this study, we propose an offline system for the recognition of fragmented handwritten numeric chains. Firstly, we built a recognition system for isolated handwritten digits; in this part, the study is based mainly on evaluating the performance of a neural network trained with the gradient backpropagation algorithm. The parameters forming the input vector of the neural network are extracted from the binary images of the isolated handwritten digits by several methods: the distribution sequence, probe application, Barr features, and the centered moments of the different projections and profiles. Secondly, the study is extended to the reading of fragmented handwritten numeric chains consisting of a variable number of digits. The vertical projection is used to segment the numeric chain into isolated digits, and every digit (or segment) is presented separately to the input of the system developed in the first part (the recognition system for isolated handwritten digits).
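The vertical-projection segmentation step can be sketched as follows; this is a generic illustration of the idea, with `segment_digits` as a hypothetical helper:

```python
import numpy as np

def segment_digits(binary_img):
    """Split a binary numeric-chain image (foreground = 1) into
    isolated digits by cutting at zero-valleys of the vertical
    projection profile (the per-column pixel sums)."""
    proj = binary_img.sum(axis=0)
    segments, start = [], None
    for x, v in enumerate(proj):
        if v > 0 and start is None:      # entering a digit
            start = x
        elif v == 0 and start is not None:  # leaving a digit
            segments.append(binary_img[:, start:x])
            start = None
    if start is not None:                # digit touches the right edge
        segments.append(binary_img[:, start:])
    return segments
```

Each returned segment would then be normalized and passed to the isolated-digit classifier.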

Keywords: features extraction, handwritten numeric chains, image processing, neural networks

Procedia PDF Downloads 266
1136 Change Detection Method Based on Scale-Invariant Feature Transformation Keypoints and Segmentation for Synthetic Aperture Radar Image

Authors: Lan Du, Yan Wang, Hui Dai

Abstract:

Synthetic aperture radar (SAR) image change detection has recently become a challenging problem owing to the existence of speckle noise. In this paper, an unsupervised, distribution-free change detection method for SAR images based on scale-invariant feature transform (SIFT) keypoints and segmentation is proposed. Firstly, the noise-robust SIFT keypoints, which reveal the blob-like structures in an image, are extracted from the log-ratio image to reduce the detection range. Then, unlike traditional change detection, which obtains the change-detection map directly from the difference image, segmentation is performed around the extracted keypoints in the two original multitemporal SAR images to obtain accurate changed regions. Finally, the change-detection map is generated by comparing the two segmentations. Experimental results on a real SAR image dataset demonstrate the effectiveness of the proposed method.
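The log-ratio operator mentioned above is standard in SAR change detection; a minimal sketch (generic, not the authors' implementation):

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio operator for two co-registered multitemporal SAR
    images: |log(img2 / img1)|. Taking logs turns the multiplicative
    speckle model into an additive one, so speckle affects changed
    and unchanged areas more evenly."""
    a = np.asarray(img1, float) + eps   # eps guards against log(0)
    b = np.asarray(img2, float) + eps
    return np.abs(np.log(b) - np.log(a))
```

SIFT keypoints would then be extracted from this log-ratio image to localize candidate change regions.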

Keywords: change detection, Synthetic Aperture Radar (SAR), Scale-Invariant Feature Transformation (SIFT), segmentation

Procedia PDF Downloads 387
1135 Measurement of Intermediate Slip Rate of Sabzpushan Fault Zone in Southwestern Iran, Using Optically Stimulated Luminescence (OSL) Dating

Authors: Iman Nezamzadeh, Ali Faghih, Behnam Oveisi

Abstract:

In order to reduce earthquake hazards in urban areas, comprehensive studies are necessary to understand the dynamics of active faults and identify potentially high-risk areas. Fault slip rates in Late Quaternary sediments are critical indicators of seismic hazard and also provide valuable data for recognizing young crustal deformation. To measure slip rates accurately, it is necessary to determine the displacement of geomorphic markers and the ages of Quaternary sediment samples from alluvial deposits deformed by movement on the fault. In this study we derived the intermediate-term slip rate of the Sabzpushan Fault Zone (SPF) within the central part of the Zagros Mountains of Iran using the OSL dating technique, for better seismic hazard analysis and seismic risk reduction for the city of Shiraz. For this purpose, identifiable geomorphic fluvial surfaces provide a reference frame to determine differential or absolute horizontal and vertical deformation. Optically stimulated luminescence (OSL) is an alternative and independent method of determining the burial age of mineral grains in Quaternary sediments. Field observation and satellite imagery show geomorphic markers deformed horizontally along the Sabzpushan Fault. Here, drag folds are forming because of the evaporitic material of the Miocene formation. We estimate a horizontal slip rate of 2.8 ± 0.5 mm/yr along the Sabzpushan fault zone, where the ongoing deformation involves drag folding. The Soltan synclinal structure, close to the Sabzpushan fault, shows a slight uplift rate due to active core extrusion.
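A slip rate of this kind is essentially marker offset divided by OSL burial age; a minimal sketch with simple quadrature error propagation (illustrative values and names, not the study's actual computation):

```python
import math

def slip_rate(offset_m, offset_err_m, age_ka, age_err_ka):
    """Fault slip rate from a geomorphic-marker offset (metres) and
    an OSL burial age (ka). Note 1 m/ka == 1 mm/yr. Uncertainty is
    propagated by adding relative errors in quadrature."""
    rate = offset_m / age_ka
    rel = math.hypot(offset_err_m / offset_m, age_err_ka / age_ka)
    return rate, rate * rel
```

For example, a 28 m offset dated at 10 ka yields 2.8 mm/yr, with the quoted uncertainty driven by both the offset measurement and the OSL age error.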

Keywords: slip rate, active tectonics, OSL, geomorphic markers, Sabzpushan Fault Zone, Zagros, Iran

Procedia PDF Downloads 351
1134 Influence of Geomagnetic Storms on Ionospheric Parameters

Authors: Affan Ahmed

Abstract:

This research investigates the influence of the geomagnetic storm occurring from April 22 to April 26, 2023, on the Earth’s ionosphere, with a focus on analyzing specific ionospheric parameters to understand the storm's effects on ionospheric stability and GNSS signal propagation. Geomagnetic storms, caused by intensified solar wind-magnetosphere interactions, can significantly disturb ionospheric conditions, impacting electron density, Total Electron Content (TEC), and thermospheric composition. Such disturbances are particularly relevant to satellite-based navigation and communication systems, as fluctuations in ionospheric parameters can degrade signal integrity and reliability. In this study, data were obtained from multiple sources, including OMNIWeb for parameters like Dst, Kp, Bz, Electric Field, and solar wind pressure, GUVI for O/N₂ ratio maps, and TEC data from low-, mid-, and high-latitude stations available on the IONOLAB website. Additional Equatorial Electrojet (EEJ) and geomagnetic data were acquired from INTERMAGNET. The methodology involved comparing storm-affected data from April 22 to April 26 with quiet days in April 2023, using statistical and wavelet analysis to assess variations in parameters like TEC, O/N₂ ratio, and geomagnetic indices. The results show pronounced fluctuations in TEC and other ionospheric parameters during the main phase of the storm, with spatial variations observed across latitudes, highlighting the global response of the ionosphere to geomagnetic disturbances. The findings underline the storm’s significant impact on ionospheric composition, particularly in mid- and high-latitude regions, which correlates with increased GNSS signal interference in these areas. This study contributes to understanding the ionosphere’s response to geomagnetic activity, emphasizing the need for robust models to predict and mitigate space weather effects on GNSS-dependent technologies.

Keywords: geomagnetic storms, ionospheric disturbances, space weather effects, magnetosphere-ionosphere coupling

Procedia PDF Downloads 11
1133 Analysis of Q-Learning on Artificial Neural Networks for Robot Control Using Live Video Feed

Authors: Nihal Murali, Kunal Gupta, Surekha Bhanot

Abstract:

Training of artificial neural networks (ANNs) using reinforcement learning (RL) techniques is widely discussed in the robot learning literature. The high model complexity of ANNs along with the model-free nature of RL algorithms provides a desirable combination for many robotics applications. There is a great need for algorithms that generalize from raw sensory inputs, such as vision, without any hand-engineered features or domain heuristics. In this paper, the standard control problem of a line-following robot was used as a test-bed, and an ANN controller for the robot was trained on images from a live video feed using Q-learning. A virtual agent was first trained in a simulation environment and then deployed onto the robot's hardware. The robot successfully learns to traverse a wide range of curves and displays excellent generalization ability. Qualitative analysis of the evolution of the policies, performance and weights of the network provides insights into the nature and convergence of the learning algorithm.
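The tabular Q-learning update that the ANN approximates in this setting can be sketched as follows (standard RL textbook material, not the authors' network code):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    q maps state -> dict of action -> value; unknown next states
    contribute a bootstrap value of 0."""
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
```

In the paper, the table is replaced by an ANN whose input is the camera image and whose outputs are the Q-values of the steering actions; the same temporal-difference target drives the weight updates.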

Keywords: artificial neural networks, q-learning, reinforcement learning, robot learning

Procedia PDF Downloads 373
1132 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in the normal and injured states is carried out using FE analyses. First, an FE model of the human knee joint in the normal ('intact') state was constructed using magnetic resonance (MR) tomography images and the image construction code Materialise Mimics. Next, two types of meniscal injury models, with radial tears of the medial and lateral menisci, were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus, respectively. The material properties of the articular cartilage and meniscus were identified using stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its location varied among the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value that causes pathological change, for use in diagnosis.
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage and meniscus was constructed from MR images of a human knee joint, using tetrahedral FE elements and the image processing code Materialise Mimics. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting against the experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other and higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system that evaluates meniscal damage to the articular cartilage through mechanical functional assessment.
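The generalized Kelvin viscoelastic behaviour is commonly written as a Prony series for the relaxation modulus; a minimal sketch of that form (generic, with illustrative coefficients, not the identified material parameters from the tests):

```python
import math

def relaxation_modulus(t, g_inf, prony):
    """Prony-series relaxation modulus of a generalized viscoelastic
    model: G(t) = G_inf + sum_i g_i * exp(-t / tau_i), where prony is
    a list of (g_i, tau_i) branch pairs."""
    return g_inf + sum(g * math.exp(-t / tau) for g, tau in prony)
```

Curve-fitting this expression to the measured stress-relaxation data is what yields the branch moduli and relaxation times used in the FE constitutive law.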

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 247
1131 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is an image modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to their wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to six-fold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. A pre-processing step is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of the images; it uses image processing tools to automatically segment and extract the pectoral muscle from the mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the limit of the pectoral muscle, followed by the active contour method, whose seed was placed at the limit of the pectoral muscle found by the Hough transform.
An experienced radiologist also manually performed the pectoral muscle segmentation. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared the two methods with respect to the area (mm²) of the segmented pectoral muscle and showed data within the 95% confidence interval, confirming the agreement with the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
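The Jaccard comparison used above is simply intersection over union of the two binary masks; a minimal sketch of the metric (generic code, not the authors' Matlab implementation):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two binary
    segmentation masks, e.g. automatic vs manual pectoral-muscle
    outlines; 1.0 means perfect agreement."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

A value above 0.9, as reported here, means the automatic and manual muscle regions overlap on at least 90% of their combined area.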

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 351
1130 Data Gathering and Analysis for Arabic Historical Documents

Authors: Ali Dulla

Abstract:

This paper introduces a new dataset (and the methodology used to generate it) based on a wide range of historical Arabic documents with clean data and simple, homogeneous page layouts. The experiments are implemented on printed and handwritten documents obtained from important libraries such as the Qatar Digital Library, the British Library, and the Library of Congress. We have gathered and annotated 150 archival document images from different locations and time periods, spanning the 17th to 19th centuries. The dataset comprises differing page layouts and degradations that challenge text line segmentation methods. Ground truth is produced using the Aletheia tool by PRImA and stored in an XML representation in the PAGE (Page Analysis and Ground truth Elements) format. The dataset will be made easily available to researchers worldwide for research into the challenges facing historical Arabic documents, such as geometric correction.

Keywords: dataset production, ground truth production, historical documents, arbitrary warping, geometric correction

Procedia PDF Downloads 169
1129 Satire of Victorian Mores in Charles Dickens’ Great Expectations

Authors: Nagwa Abouserie Soliman

Abstract:

The Victorian era, spanning the reign of Queen Victoria from June 1837 to January 1901, can be considered one of the most significant eras shaping contemporary British life, even though the rise of the British Empire brought many negative aspects to the surface, namely social inequalities such as class differences, child labour, population growth, and poverty caused by the Industrial Revolution. Charles Dickens was one of the most prominent writers of the Victorian era, and one who perceived the hypocrisy of Victorian mores. The research style chosen for this literary analysis is a qualitative method, in which the researcher used a conceptual approach to analyse Dickensian characterisation and writing style through diction, narrative voice, and imagery. This paper argues that in Great Expectations (1861) Charles Dickens was highly satirical of Victorian mores, using sharp irony to satirise Victorian conventions, such as class divisions, the justice system, the plight of the working poor, and upper-class snobbery, that he considered inhumane and unjust.

Keywords: Victorian, child labour, poverty, class division, snobbery

Procedia PDF Downloads 124
1128 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities

Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun

Abstract:

The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. The approach uses IoT devices equipped with infrared cameras to collect thermal images and household EV charging profiles from the Thailand utility's database, subsequently transmitting this data to a cloud database for comprehensive analysis. The methodology includes advanced deep learning techniques such as Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) algorithms, together with feature-based Gaussian mixture models for EV load profiling and event detection. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues, categorizes the severity of detected problems based on a health index for each charging device, and outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities, ensuring operational safety and efficiency while promoting sustainable energy management.
The collected data is analyzed using the RNN, LSTM, and feature-based Gaussian mixture models, covering both EV load profiling and event detection. This comprehensive method helps identify unique power consumption patterns among EV owners and outperforms existing models in event detection accuracy. In summary, the research concludes that integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy, contributing to operational safety, efficiency, and sustainable energy management in smart cities.
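The feature-based Gaussian mixture step can be sketched compactly. The snippet below is an illustrative one-dimensional EM fit in NumPy, not the authors' implementation; it shows how two distinct charging-power regimes (e.g. idle draw versus active charging, with hypothetical kW values) could be separated within a household load profile.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200):
    """Fit a k-component 1-D Gaussian mixture with EM.
    Returns component means, standard deviations, and mixing weights."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # deterministic init
    sigma = np.full(k, x.std() / k + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        n_k = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-6
        pi = n_k / len(x)
    return mu, sigma, pi

# synthetic load profile: idle draw around 0.5 kW, charging around 7 kW
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.5, 0.1, 300), rng.normal(7.0, 0.4, 300)])
mu, sigma, pi = fit_gmm_1d(x)
```

Samples with low responsibility under every fitted component would then be candidates for anomaly flags.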

Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids

Procedia PDF Downloads 68
1127 Study on Discontinuity Properties of Phased-Array Ultrasound Transducer Affecting to Sound Pressure Fields Pattern

Authors: Tran Trong Thang, Nguyen Phan Kien, Trinh Quang Duc

Abstract:

Phased-array ultrasound transducers are widely used in medical ultrasonography as well as in optical imaging. However, their discontinuous element structure limits their applications because of artifacts contaminating the reconstructed images. Because the ultrasound pressure field pattern affects both the echo ultrasonic waves and the optically modulated signal, the side lobes of the focused ultrasound beam induced by the discontinuity of the phased-array transducer may be the cause of these artifacts. In this paper, a simple numerical simulation approach was used to investigate the limits of element discontinuity in phased-array ultrasound transducers and its effects on the ultrasound pressure field. By examining how the ultrasound pressure field pattern changes as the pitch between elements of the phased-array transducer is varied, appropriate parameters for phased-array ultrasound transducer design were established quantitatively.
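The core of such a simulation can be sketched in a few lines. The snippet below is an illustrative far-field calculation for a uniform linear array (not the authors' code, and with arbitrary element counts): with half-wavelength pitch the beam pattern has a single main lobe, while an oversized pitch produces the grating lobes that can cause imaging artifacts.

```python
import numpy as np

def array_factor(n_elem, pitch, wavelength, angles):
    """Normalized far-field pressure magnitude of a uniform linear
    phased array focused at broadside (0 rad)."""
    k = 2 * np.pi / wavelength
    pos = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch  # element centers
    # superpose the phase contribution of every element at every angle
    phase = np.exp(1j * k * pos[:, None] * np.sin(angles)[None, :])
    return np.abs(phase.sum(axis=0)) / n_elem

wl = 1.0                                    # work in units of wavelength
angles = np.linspace(-np.pi / 2, np.pi / 2, 2001)
af_half = array_factor(16, 0.5 * wl, wl, angles)   # lambda/2 pitch
af_wide = array_factor(16, 1.5 * wl, wl, angles)   # oversized pitch
```

For the 1.5-wavelength pitch, a grating lobe of full main-lobe amplitude appears near sin(theta) = wavelength / pitch, which is the discontinuity effect the study quantifies.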

Keywords: phased-array ultrasound transducer, sound pressure pattern, discontinuous sound field, numerical visualization

Procedia PDF Downloads 507
1126 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results prove that, in comparison to traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 71
1125 Female Dis-Empowerment in Contemporary Zimbabwe: A Re-Look at Shona Writers’ Vision of the Factors and Solutions

Authors: Godwin Makaudze

Abstract:

The majority of women in contemporary Zimbabwe continue to hold marginalised and insignificant positions in society and to be accorded negative, stereotyped images in literature. In light of this, government, civic organisations, and even writers channel many resources, much time, and great effort towards the emancipation of the female gender. Using the Africana womanist and socio-historical literary theories and focusing on two post-colonial novels, this paper re-engages the dis-empowerment of women in contemporary Zimbabwe, examining the believed causes and the suggested solutions. The paper observes that the writers whip the already whipped by blaming patriarchy, African men, and cultural practices as the underlying causes of this sorry state of affairs, while celebrating war against all of these, as well as education, unity among women, Christianity, and single motherhood, as panaceas to the problem. The paper concludes that the writers' anger is misdirected, as they have fallen into the very popular yet mythical victim-blame motif espoused by many writers who focus on Shona people's problems.

Keywords: cultural practices, female dis-empowerment, patriarchy, Shona novel, solutions, Zimbabwe

Procedia PDF Downloads 334
1124 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important

Authors: Eleni Karasavvidou

Abstract:

Social and anthropological research, in parallel with Gender Studies, has highlighted the relationship between social structures and symbolic forms as an important field for interaction and for recording 'social trends,' since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This 'mirage,' however, has to do not only with the representations themselves but also with the ways they are received and with the film or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se,' that legitimizes, reinforces, rewards, and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. This creates the need for the issue to be raised in academic research as well, questioning gender criteria in film reviews as part of the effort towards an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews from leading websites and newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher).
In addition to measuring the overall 'representation time' by gender, other qualitative characteristics were analyzed, such as speaking time, key sayings or actions, and the overall quality of a character's action in relation to the development of the scenario and to social representations in general, as well as quantitative ones (the insufficient number of female lead roles, the fewer key supporting roles, the relatively few female directors and people in the production chain, and how these might affect screen representations). The quantitative analysis in this study was used to complement the qualitative content analysis. The focus then shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced, or allegedly overturned, within the framework of an apolitical 'identity politics' that mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One prime example of this failure is the Bechdel Test, which tracks only whether female characters speak to each other in a film, regardless of whether women's stories are actually represented in the films analyzed. If supposedly unbiased male filmmakers still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.

Keywords: representations, context analysis, reviews, sexist stereotypes

Procedia PDF Downloads 85
1123 Optimization of Perfusion Distribution in Custom Vascular Stent-Grafts Through Patient-Specific CFD Models

Authors: Scott M. Black, Craig Maclean, Pauline Hall Barrientos, Konstantinos Ritos, Asimina Kazakidi

Abstract:

Aortic aneurysms and dissections are leading causes of death in cardiovascular disease. Both inevitably lead to hemodynamic instability without surgical intervention in the form of vascular stent-graft deployment. An accurate description of the aortic geometry and blood flow in patient-specific cases is vital for treatment planning and the long-term success of such grafts, as they must generate physiological branch perfusion and in-stent hemodynamics. The aim of this study was to create patient-specific computational fluid dynamics (CFD) models through a multi-modality, multi-dimensional approach with boundary condition optimization to predict branch flow rates and in-stent hemodynamics in custom stent-graft configurations. Three-dimensional (3D) thoracoabdominal aortae were reconstructed from four-dimensional flow-magnetic resonance imaging (4D Flow-MRI) and computed tomography (CT) medical images. The former employed a novel approach to generate and enhance vessel lumen contrast via through-plane velocity at discrete, user-defined cardiac time steps post-hoc. To produce patient-specific boundary conditions (BCs), the aortic geometry was reduced to a one-dimensional (1D) model. Thereafter, a zero-dimensional (0D) 3-Element Windkessel model (3EWM) was coupled to each terminal branch to represent the distal vasculature. In this coupled 0D-1D model, the 3EWM parameters were optimized to yield branch flow waveforms representative of the 4D Flow-MRI-derived in vivo data. Thereafter, a 0D-3D CFD model was created, utilizing the optimized 3EWM BCs and a 4D Flow-MRI-obtained inlet velocity profile. A sensitivity analysis of the effects of stent-graft configuration and BC parameters was then undertaken using multiple stent-graft configurations and a range of distal vasculature conditions. 4D Flow-MRI granted unparalleled visualization of blood flow throughout the cardiac cycle in both the pre- and post-surgical states.
Segmentation and reconstruction of healthy and stented regions from retrospective 4D Flow-MRI images also generated 3D models whose geometries were successfully validated against their CT-derived counterparts. The 0D-1D coupling efficiently captured branch flow and pressure waveforms, while the 0D-3D models also enabled 3D flow visualization and quantification of clinically relevant hemodynamic parameters for in-stent thrombosis and graft limb occlusion. It was apparent that changes in the 3EWM BC parameters had a pronounced effect on perfusion distribution and near-wall hemodynamics. The results show that the 3EWM parameters could be iteratively changed to simulate a range of graft limb diameters and distal vasculature conditions for a given stent-graft, in order to determine the optimal configuration prior to surgery. To conclude, this study outlined a methodology to aid in the prediction of post-surgical branch perfusion and in-stent hemodynamics in patient-specific cases for the implementation of custom stent-grafts.
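The 3-Element Windkessel outflow model at the heart of the boundary conditions can be sketched as a small ODE integration. The snippet below is an illustrative NumPy implementation in simplified, non-physiological units (not the study's coupled 0D-1D solver): a proximal resistance in series with a parallel distal resistance and compliance, driven by a prescribed inflow waveform.

```python
import numpy as np

def windkessel_3e(q, dt, r_prox, r_dist, c):
    """Inlet pressure of a 3-element Windkessel driven by inflow q.
    State p_c is the pressure across the compliance; the inlet pressure
    adds the drop over the proximal resistance."""
    p_c = 0.0
    p_in = np.empty_like(q)
    for i, qi in enumerate(q):
        # C * dP_c/dt = q - P_c / R_dist   (flow balance at the capacitor node)
        p_c += dt * (qi - p_c / r_dist) / c
        p_in[i] = p_c + qi * r_prox
    return p_in

# constant inflow: pressure should settle at q * (R_prox + R_dist)
dt = 0.01
q = np.ones(2000)            # 20 s of steady unit inflow
p = windkessel_3e(q, dt, r_prox=0.1, r_dist=1.0, c=1.0)
```

In the study's workflow, the three parameters per branch would be tuned until the simulated branch waveforms match the 4D Flow-MRI-derived targets.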

Keywords: 4D flow-MRI, computational fluid dynamics, vascular stent-grafts, windkessel

Procedia PDF Downloads 181
1122 A Reliable Multi-Type Vehicle Classification System

Authors: Ghada S. Moussa

Abstract:

Vehicle classification is an important task in traffic surveillance and intelligent transportation systems. The classification of vehicle images faces several problems, such as high intra-class vehicle variation, occlusion, shadow, and illumination changes. These problems and others must be considered to develop a reliable vehicle classification system. In this study, a reliable multi-type vehicle classification system based on the Bag-of-Words (BoW) paradigm is developed. Our proposed system used and compared four well-known classifiers, Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Decision Tree, to classify vehicles into four categories: motorcycles, small, medium, and large vehicles. Experiments on a large dataset show that our approach is efficient and reliable in classifying vehicles, with an accuracy of 95.7%. The SVM outperforms the other classification algorithms in terms of both accuracy and robustness, alongside a considerable reduction in execution time. The innovation of the developed system is that it can serve as a framework for many vehicle classification systems.
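The Bag-of-Words step can be sketched briefly. The snippet below is an illustrative NumPy version of the quantization stage only, under the assumption of a precomputed visual codebook and toy 2-D descriptors; in the actual system the codebook would be learned (e.g. by k-means over SIFT descriptors) and the resulting histograms fed to the SVM and other classifiers.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-frequency histogram for the image."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)                  # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# toy 2-word codebook and two "images" with different local appearance
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
img_a = np.array([[0.1, 0.2], [0.0, 0.3], [0.2, 0.1]])   # all near word 0
img_b = np.array([[9.8, 10.1], [10.2, 9.9]])             # all near word 1
h_a = bow_histogram(img_a, codebook)
h_b = bow_histogram(img_b, codebook)
```

The fixed-length histograms make images with varying numbers of local features comparable, which is what lets a single SVM handle all four vehicle categories.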

Keywords: vehicle classification, bag-of-words technique, SVM classifier, LDA classifier, KNN classifier, decision tree classifier, SIFT algorithm

Procedia PDF Downloads 359
1121 Undersea Communications Infrastructure: Risks, Opportunities, and Geopolitical Considerations

Authors: Lori W. Gordon, Karen A. Jones

Abstract:

Today’s high-speed data connectivity depends on a vast global network of infrastructure across space, air, land, and sea, with undersea cable infrastructure (UCI) serving as the primary means for intercontinental and ‘long-haul’ communications. The UCI landscape is changing and includes an increasing variety of state actors, such as the growing economies of Brazil, Russia, India, China, and South Africa. Non-state commercial actors, such as hyper-scale content providers including Google, Facebook, Microsoft, and Amazon, are also seeking to control their data and networks through significant investments in submarine cables. Active investments by both state and non-state actors will invariably influence the growth, geopolitics, and security of this sector. Beyond these hyper-scale content providers, there are new commercial satellite communication providers. These new players include traditional geosynchronous (GEO) satellites that offer broad coverage, high-throughput GEO satellites offering high capacity with spot-beam technology, and low earth orbit (LEO) ‘mega constellations’ offering global broadband services; potential new entrants include High Altitude Platforms (HAPS) offering low-latency connectivity and LEO constellations offering high-speed optical mesh networks, i.e., ‘fiber in the sky.’ This paper focuses on understanding the role of submarine cables within the larger context of the global data commons, spanning space, terrestrial, air, and sea networks, and includes an analysis of national security policy and geopolitical implications. As network operators and commercial and government stakeholders plan for emerging technologies and architectures, hedging risks for future connectivity will ensure that our data backbone remains secure for years to come.

Keywords: communications, global, infrastructure, technology

Procedia PDF Downloads 89
1120 Saudi and U.S. Newspaper Coverage of Saudi Vision 2030 Concerning Women in Online Newspapers

Authors: Ziyad Alghamdi

Abstract:

This research investigates how issues concerning Saudi women have been represented in selected U.S. and Saudi publications. Saudi Vision 2030 is the Kingdom of Saudi Arabia's development strategy, which was unveiled on April 25, 2016. This study used 115 news items across selected newspapers as its sample. The New York Times and the Washington Post were chosen to represent U.S. newspapers, and two Saudi newspapers, Al Jazirah and Al Watan, were selected. The research examines how these issues were covered before and during the implementation of Saudi Vision 2030. The news pieces were analyzed using both quantitative and qualitative methodologies; the qualitative study employed an inductive technique to uncover frames. Furthermore, this work examined how American and Saudi publications framed Saudi women in images by reviewing the photographs used in news reports about Saudi women's issues. The primary conclusion is that the human-interest frame was more prevalent in American media, whereas the economic frame was more prevalent in Saudi publications. A variety of diverse topics were considered.

Keywords: Saudi newspapers, Saudi Vision 2030, framing theory, Saudi women

Procedia PDF Downloads 89
1119 Multi-Classification Deep Learning Model for Diagnosing Different Chest Diseases

Authors: Bandhan Dey, Muhsina Bintoon Yiasha, Gulam Sulaman Choudhury

Abstract:

Chest diseases are among the most problematic ailments in everyday life. There are many known chest diseases, and diagnosing them correctly plays a vital role in treatment. Many methods have been developed explicitly for particular chest diseases, but the most common approach for diagnosing them is the X-ray. In this paper, we propose a multi-classification deep learning model for diagnosing COVID-19, lung cancer, pneumonia, tuberculosis, and atelectasis from chest X-rays. In the present work, we used the transfer learning method for better accuracy and a faster training phase. The performance of three architectures is considered: InceptionV3, VGG-16, and VGG-19. We evaluated these deep learning architectures using public digital chest X-ray datasets with six classes (i.e., COVID-19, lung cancer, pneumonia, tuberculosis, atelectasis, and normal). The experiments were conducted on this six-class task, and we found that VGG-16 outperforms the other proposed models with an accuracy of 95%.
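The transfer-learning idea used here, freezing a pretrained convolutional backbone and training only a small classification head on its features, can be sketched without the heavy frameworks. The snippet below is an illustrative NumPy softmax head trained by gradient descent on hypothetical, synthetic "frozen backbone features"; the actual work uses full Keras/TensorFlow pipelines with InceptionV3, VGG-16, and VGG-19 backbones.

```python
import numpy as np

def train_softmax_head(feats, labels, n_classes, lr=0.5, epochs=300):
    """Fit a linear softmax classifier on frozen features: this is the
    only part of the network updated in basic transfer learning."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                   # cross-entropy gradient
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# hypothetical "backbone features" for two easily separable classes
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.3, (50, 8)), rng.normal(2, 0.3, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W, b = train_softmax_head(feats, labels, n_classes=2)
pred = (feats @ W + b).argmax(axis=1)
```

Because only the small head is trained, the training phase is fast, which is the practical advantage the abstract cites for transfer learning.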

Keywords: deep learning, image classification, X-ray images, TensorFlow, Keras, chest diseases, convolutional neural networks, multi-classification

Procedia PDF Downloads 93