Search results for: digital image
3135 Offline Signature Verification Using Minutiae and Curvature Orientation
Authors: Khaled Nagaty, Heba Nagaty, Gerard McKee
Abstract:
A signature is a behavioral biometric that is used for authenticating users in most financial and legal transactions. Signatures can be easily forged by skilled forgers. Therefore, it is essential to verify whether a signature is genuine or forged. The aim of any signature verification algorithm is to accommodate the differences between signatures of the same person and increase the ability to discriminate between signatures of different persons. The work presented in this paper proposes an automatic signature verification system to indicate whether a signature is genuine or not. The system comprises four phases: (1) The pre-processing phase, in which image scaling, binarization, image rotation, dilation, thinning, and connecting ridge breaks are applied. (2) The feature extraction phase, in which global and local features are extracted. The local features are minutiae points, curvature orientation, and curve plateau. The global features are signature area, signature aspect ratio, and Hu moments. (3) The post-processing phase, in which false minutiae are removed. (4) The classification phase, in which features are enhanced before feeding them into the classifier. k-nearest neighbors and support vector machines are used. The classifier was trained on a benchmark dataset to compare the performance of the proposed offline signature verification system against the state-of-the-art. The accuracy of the proposed system is 92.3%. Keywords: signature, ridge breaks, minutiae, orientation
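The abstract lists Hu moments, signature area, and aspect ratio among the global features and kNN/SVM as classifiers. The sketch below is illustrative rather than the authors' exact pipeline: it shows how such global features could be extracted from a binarized signature image with OpenCV and fed to a scikit-learn classifier, with thresholds and training-data layout as assumptions.

```python
# Illustrative sketch (not the authors' exact pipeline): extract a few of the
# global features named in the abstract (signature area, aspect ratio, Hu moments)
# from a signature image and feed them to an SVM or k-NN classifier.
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def global_features(gray_image):
    # Binarize (Otsu) so the signature ink becomes the foreground (value 255).
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(binary)
    area = float(len(xs))                               # signature area (ink pixels)
    width = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    aspect_ratio = width / float(height)                # signature aspect ratio
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)    # log-scale for numerical stability
    return np.hstack([area, aspect_ratio, hu])

# X_train / y_train would come from genuine (1) and forged (0) signature images.
# clf = SVC(kernel="rbf").fit(X_train, y_train)         # or KNeighborsClassifier()
# prediction = clf.predict([global_features(test_image)])
```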
Procedia PDF Downloads 149
3134 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossroads of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when existing, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but that they depend on the main soil-forming factors, which are climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary co-variates” that come from other available spatial products. Then the model is generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at a national and global scale and the increasing availability of spatial co-variates, national and continental DSM initiatives are multiplying. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing and new models and approaches are developed at an incredible speed. Most of the digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, and also political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level.
All this progress is scientifically stimulating and also promising in terms of providing tools to improve and monitor soil quality at the national, EU and global levels. Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
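DSM as described above calibrates a model on observed soil points plus ancillary co-variates and then generalizes it over a grid. The following sketch illustrates that generic workflow with a random forest; the file names, covariate columns, and target property (soil organic carbon) are hypothetical placeholders, not taken from any of the reviewed national products.

```python
# Generic DSM calibration/prediction sketch (illustrative; file and column names
# are hypothetical). A model is calibrated on point soil observations joined to
# spatial covariates (climate, relief, vegetation, ...), then generalized onto a
# grid where the soil property is unknown.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

covariates = ["mean_annual_temp", "precipitation", "elevation", "slope", "ndvi"]

points = pd.read_csv("soil_point_observations.csv")    # observed soil data with covariates
grid = pd.read_csv("prediction_grid_covariates.csv")   # exhaustive covariate grid

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(points[covariates], points["soil_organic_carbon"])

# Cross-validation gives one simple view of prediction performance.
print(cross_val_score(model, points[covariates],
                      points["soil_organic_carbon"], cv=10, scoring="r2").mean())

grid["soc_predicted"] = model.predict(grid[covariates])  # wall-to-wall prediction
grid.to_csv("soc_prediction_grid.csv", index=False)
```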
Procedia PDF Downloads 184
3133 Adapting Cyber Physical Production Systems to Small and Mid-Size Manufacturing Companies
Authors: Yohannes Haile, Dipo Onipede, Jr., Omar Ashour
Abstract:
The main thrust of our research is to determine Industry 4.0 readiness of small and mid-size manufacturing companies in our region and assist them to implement Cyber Physical Production System (CPPS) capabilities. Adopting CPPS capabilities will help organizations realize improved quality, order delivery, throughput, new value creation, and reduced idle time of machines and work centers of their manufacturing operations. The key metrics for the assessment include the level of intelligence, internal and external connections, responsiveness to internal and external environmental changes, capabilities for customization of products with reference to cost, level of additive manufacturing, automation, and robotics integration, and capabilities to manufacture hybrid products in the near term, where near term is defined as 0 to 18 months. In our initial evaluation of several manufacturing firms which are profitable and successful in what they do, we found low level of Physical-Digital-Physical (PDP) loop in their manufacturing operations, whereas 100% of the firms included in this research have specialized manufacturing core competencies that have differentiated them from their competitors. The level of automation and robotics integration is low to medium range, where low is defined as less than 30%, and medium is defined as 30 to 70% of manufacturing operation to include automation and robotics. However, there is a significant drive to include these capabilities at the present time. As it pertains to intelligence and connection of manufacturing systems, it is observed to be low with significant variance in tying manufacturing operations management to Enterprise Resource Planning (ERP). Furthermore, it is observed that the integration of additive manufacturing in general, 3D printing, in particular, to be low, but with significant upside of integrating it in their manufacturing operations in the near future. To hasten the readiness of the local and regional manufacturing companies to Industry 4.0 and transitions towards CPPS capabilities, our working group (ADMAR Working Group) in partnership with our university have been engaged with the local and regional manufacturing companies. The goal is to increase awareness, share know-how and capabilities, initiate joint projects, and investigate the possibility of establishing the Center for Cyber Physical Production Systems Innovation (C2P2SI). The center is intended to support the local and regional university-industry research of implementing intelligent factories, enhance new value creation through disruptive innovations, the development of hybrid and data enhanced products, and the creation of digital manufacturing enterprises. All these efforts will enhance local and regional economic development and educate students that have well developed knowledge and applications of cyber physical manufacturing systems and Industry 4.0.Keywords: automation, cyber-physical production system, digital manufacturing enterprises, disruptive innovation, new value creation, physical-digital-physical loop
Procedia PDF Downloads 142
3132 Influence of Optical Fluence Distribution on Photoacoustic Imaging
Authors: Mohamed K. Metwally, Sherif H. El-Gohary, Kyung Min Byun, Seung Moo Han, Soo Yeol Lee, Min Hyoung Cho, Gon Khang, Jinsung Cho, Tae-Seong Kim
Abstract:
Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy into a target (i.e., optical fluence). Consequently, the target temperature rises, and then thermal expansion occurs, which generates a PA signal. In general, most image reconstruction algorithms for PAI assume uniform fluence within an imaging object. However, it is known that the optical fluence distribution within the object is non-uniform. This could affect the reconstruction of PA images. In this study, we have investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest. The non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the case of non-uniform fluence is wider than in the uniform case by 23%. The frequency spectrum of the PA signal due to the non-uniform fluence lacks some high-frequency components in comparison to the uniform case. Consequently, the reconstructed image with the non-uniform fluence exhibits a strong smoothing effect. Keywords: finite element method, fluence distribution, Monte Carlo method, photoacoustic imaging
Procedia PDF Downloads 378
3131 Active Noise Cancellation in the Rectangular Enclosure Systems
Authors: D. Shakirah Shukor, A. Aminudin, Hashim U. A., Waziralilah N. Fathiah, T. Vikneshvaran
Abstract:
Interior noise control is essential to explore because interior acoustic analysis is significant in systems such as automobiles, aircraft, air-handling systems and diesel engine exhaust systems. In this research, experimental work was undertaken to actively cancel noise in a rectangular enclosure. The rectangular enclosure was fabricated with multiple speakers and microphones inside the enclosure. A software program using digital signal processing is implemented to evaluate the proposed method. Experimental work was conducted to obtain the acoustic behavior and characteristics of the rectangular enclosure and noise cancellation based on active noise control in the low-frequency range. Noise is generated by using multiple speakers inside the enclosure, and microphones are used for noise measurements. The technique for noise cancellation relies on the principle of destructive interference between two sound fields in the rectangular enclosure. One field is generated by the original or primary sound source, the other by a secondary sound source set up to interfere with, and cancel, that unwanted primary sound. At the end of this research, the results for the output noise before and after cancellation are presented and discussed. On the basis of the findings presented in this research, active noise cancellation in the rectangular enclosure is worth exploring in order to improve noise control technologies. Keywords: active noise control, digital signal processing, noise cancellation, rectangular enclosure
Procedia PDF Downloads 273
3130 Characterization of Aquifer Systems and Identification of Potential Groundwater Recharge Zones Using Geospatial Data and Arc GIS in Kagandi Water Supply System Well Field
Authors: Aijuka Nicholas
Abstract:
A research study was undertaken to characterize the aquifers and identify the potential groundwater recharge zones in the Kagandi district. Quantitative characterization of hydraulic conductivities of aquifers is of fundamental importance to the study of groundwater flow and contaminant transport in aquifers. A conditional approach is used to represent the spatial variability of hydraulic conductivity. Briefly, it involves using qualitative and quantitative geologic borehole-log data to generate a three-dimensional (3D) hydraulic conductivity distribution, which is then adjusted through calibration of a 3D groundwater flow model using pumping-test data and historic hydraulic data. The approach consists of several steps. The study area was divided into five sub-watersheds on the basis of artificial drainage divides. A digital terrain model (DTM) was developed using Arc GIS to determine the general drainage pattern of the Kagandi watershed. Hydrologic characterization involved the determination of the various hydraulic properties of the aquifers. Potential groundwater recharge zones were identified by integrating various thematic maps pertaining to the digital elevation model, land use, and drainage pattern in Arc GIS and Surfer (Golden Software). The study demonstrates the potential of GIS in delineating groundwater recharge zones and that the developed methodology will be applicable to other watersheds in Uganda. Keywords: aquifers, Arc GIS, groundwater recharge, recharge zones
Procedia PDF Downloads 148
3129 Dematerialized Beings in Katherine Dunn's Geek Love: A Corporeal and Ethical Study under Posthumanities
Authors: Anum Javed
Abstract:
This study identifies the dynamical image of human body that continues its metamorphosis in the virtual field of reality. It calls attention to the ways where humans start co-evolving with other life forms; technology in particular and are striving to establish a realm outside the physical framework of matter. The problem exceeds the area of technological ethics by explicably and explanatorily entering the space of literary texts and criticism. Textual analysis of Geek Love (1989) by Katherine Dunn is adjoined with posthumanist perspectives of Pramod K. Nayar to beget psycho-somatic changes in man’s nature of being. It uncovers the meaning people give to their experiences in this budding social and cultural phenomena of material representation tied up with personal practices and technological innovations. It also observes an ethical, physical and psychological reassessment of man within the context of technological evolutions. The study indicates the elements that have rendered morphological freedom and new materialism in man’s consciousness. Moreover this work is inquisitive of what it means to be a human in this time of accelerating change where surgeries, implants, extensions, cloning and robotics have shaped a new sense of being. It attempts to go beyond individual’s body image and explores how objectifying media and culture have influenced people’s judgement of others on new material grounds. It further argues a decentring of the glorified image of man as an independent entity because of his energetic partnership with intelligent machines and external agents. The history of the future progress of technology is also mentioned. The methodology adopted is posthumanist techno-ethical textual analysis. This work necessitates a negotiating relationship between man and technology in order to achieve harmonic and balanced interconnected existence. The study concludes by recommending a call for an ethical set of codes to be cultivated for the techno-human habituation. Posthumanism ushers a strong need of adopting new ethics within the terminology of neo-materialist humanism.Keywords: corporeality, dematerialism, human ethos, posthumanism
Procedia PDF Downloads 148
3128 Internet Memes: A Mirror of Culture and Society
Authors: Alexandra-Monica Toma
Abstract:
As the internet became a ruling force of society, computer-mediated communication has enriched its methods to convey meaning by combining linguistic means with visual means of expressivity. One of the elements of cyberspace is what we call a meme, a succinct, visually engaging tool used to communicate ideas or emotions, usually in a funny or ironic manner. Coined by Richard Dawkins in the late 1970s to refer to cultural genes, this term now denotes a special type of vernacular language used to share content on the internet. This research aims to analyse the basic mechanism that underlies meme creation as a blend of innovation and imitation, and will approach some of the most widely used image macros remixed to generate new content while also pointing out success strategies. Moreover, this paper discusses whether memes can transcend the light-hearted and playful mood they mirror and become biting and sharp cultural comments. The study also uses the concept of multimodality and stresses how the text interacts with the image, discussing three types of relations between the two: symmetry, amplification, and contradiction. We will furthermore show that memes are cultural artifacts and virtual tropes highly dependent on context and societal issues by using a corpus of memes created in relation to the COVID-19 pandemic. Keywords: context, computer-mediated communication, memes, multimodality
Procedia PDF Downloads 185
3127 Detection of Micro-Unmanned Aerial Vehicles Using a Multiple-Input Multiple-Output Digital Array Radar
Authors: Tareq AlNuaim, Mubashir Alam, Abdulrazaq Aldowesh
Abstract:
The usage of micro-Unmanned Aerial Vehicles (UAVs) has witnessed an enormous increase recently. Detection of such drones has become a necessity nowadays to prevent any harmful activities. Typically, such targets have low velocity and low Radar Cross Section (RCS), making them indistinguishable from clutter and phase noise. Multiple-Input Multiple-Output (MIMO) radars have great potential; they increase the degrees of freedom at both the transmit and receive ends. Such an architecture allows for flexibility in operation, through direct access to every element in the transmit/receive array. MIMO systems allow for several array processing techniques, permitting the system to stare at targets for longer times, which improves the Doppler resolution. In this paper, a 2×2 MIMO radar prototype is developed using Software Defined Radio (SDR) technology, and its performance is evaluated against a slow-moving, low radar cross section micro-UAV used by hobbyists. Radar cross section simulations were carried out using the FEKO simulator, achieving an average of -14.42 dBsm at S-band. The developed prototype was experimentally evaluated, achieving more than 300 meters of detection range for a DJI Mavic Pro drone. Keywords: digital beamforming, drone detection, micro-UAV, MIMO, phased array
Procedia PDF Downloads 140
3126 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Human action recognition modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors to identify complex action patterns efficiently. Still, there are considerable gaps and challenges between images and videos, such as brightness, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing and detection techniques for the human body and its outer shape. Next, we extract valuable information in terms of cues. We extract two distinct features: fuzzy local binary patterns and sequence representation. Then, we apply a greedy randomized adaptive search procedure for data optimization and dimension reduction, and for classification, we use a random forest. We tested our model on two benchmark datasets, AAMAZ and the KTH Multi-view football dataset. Our HMR framework significantly outperforms the other state-of-the-art approaches and achieves recognition rates of 91% and 89.6% on the AAMAZ and KTH multi-view football datasets, respectively. Keywords: computer vision, human motion analysis, random forest, machine learning
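The abstract names fuzzy local binary patterns, a greedy randomized adaptive search procedure, and a random forest classifier. The sketch below simplifies this to a standard (non-fuzzy) LBP histogram per frame and a default random forest, so it only illustrates the feature-plus-classifier structure, not the paper's full method.

```python
# Simplified sketch of the feature/classifier stages: standard LBP histograms
# (instead of the fuzzy LBP used in the paper) classified with a random forest,
# and no GRASP-based feature selection.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(gray_frame, points=8, radius=1):
    lbp = local_binary_pattern(gray_frame, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# frames: list of grayscale frames (2D arrays); labels: action class per frame/clip
# X = np.array([lbp_histogram(f) for f in frames])
# clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
# predicted_actions = clf.predict(X_test)
```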
Procedia PDF Downloads 42
3125 Optimizing Pediatric Pneumonia Diagnosis with Lightweight MobileNetV2 and VAE-GAN Techniques in Chest X-Ray Analysis
Authors: Shriya Shukla, Lachin Fernando
Abstract:
Pneumonia, a leading cause of mortality in young children globally, presents significant diagnostic challenges, particularly in resource-limited settings. This study presents an approach to diagnosing pediatric pneumonia using Chest X-Ray (CXR) images, employing a lightweight MobileNetV2 model enhanced with synthetic data augmentation. Addressing the challenge of dataset scarcity and imbalance, the study used a Variational Autoencoder-Generative Adversarial Network (VAE-GAN) to generate synthetic CXR images, improving the representation of normal cases in the pediatric dataset. This approach not only addresses the issues of data imbalance and scarcity prevalent in medical imaging but also provides a more accessible and reliable diagnostic tool for early pneumonia detection. The augmented data improved the model’s accuracy and generalization, achieving an overall accuracy of 95% in pneumonia detection. These findings highlight the efficacy of the MobileNetV2 model, offering a computationally efficient yet robust solution well-suited for resource-constrained environments such as mobile health applications. This study demonstrates the potential of synthetic data augmentation in enhancing medical image analysis for critical conditions like pediatric pneumonia.Keywords: pneumonia, MobileNetV2, image classification, GAN, VAE, deep learning
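A minimal transfer-learning sketch of the kind of lightweight MobileNetV2 classifier described above is given below; it freezes the ImageNet base and trains a binary head. The directory layout and hyperparameters are assumptions, and the VAE-GAN synthetic augmentation is not reproduced here.

```python
# Minimal transfer-learning sketch for binary CXR classification with a frozen
# MobileNetV2 base. Directory names, image size and epochs are assumptions; the
# VAE-GAN synthetic augmentation from the abstract is not reproduced here.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False  # keep ImageNet features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # pneumonia vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=(224, 224), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/val", image_size=(224, 224), batch_size=32, label_mode="binary")

model.fit(train_ds.map(lambda x, y: (preprocess(x), y)),
          validation_data=val_ds.map(lambda x, y: (preprocess(x), y)),
          epochs=5)
```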
Procedia PDF Downloads 127
3124 The Effects of Physical Activity and Serotonin on Depression, Anxiety, Body Image and Mental Health
Authors: Sh. Khoshemehry, M. E. Bahram, M. J. Pourvaghar
Abstract:
Sport has found a special place as an effective phenomenon in all societies of the contemporary world. The relationship between physical activity and exercise with different sciences has provided new fields for human study. The range of issues related to exercise and physical education is such that it requires specialized sciences and special studies. In this article, the psychological and social sections of exercise have been investigated for children and adults. It can be used for anyone in different age groups. Exercise and regular physical movements have a great impact on the mental and social health of the individual in addition to body health. It affects the individual's adaptability in society and his/her personality. Exercise affects the treatment of diseases such as depression, anxiety, stress, body image, and memory. Exercise is a safe haven for young people to achieve the optimum human development in its shelter. The effects of sensorimotor skills on mental actions and mental development are such a way that many psychologists and sports science experts believe these activities should be included in training programs in the first place. Familiarity of students and scholars with different programs and methods of sensorimotor activities not only causes their mental actions; but also increases mental health and vitality, enhances self-confidence and, therefore, mental health.Keywords: anxiety, mental health, physical activity, serotonin
Procedia PDF Downloads 209
3123 Operations Training Using Immersive Technologies: A Development Experience
Authors: A. Aman, S. M. Tang, F. H. Alharrassy
Abstract:
Omanisation was established to increase job opportunities for national employment in the Sultanate of Oman. With half of the population below 25 years of age, the sultanate is striving to diversify the economy fast enough to meet the burgeoning number of jobseekers annually. On the other hand, training personnel to be competent oil and gas operators and technicians is a difficult task in the complex reservoir structures of Oman, which use highly advanced and sophisticated extraction processes. Coupled with Omanisation, which encourages nationals into the oil and gas sector so as to create sustainable employment for the local population, the challenge of producing competent manpower became daunting. Immersive technologies provided the impetus to create a new digital media sector which provided job opportunities as well as the learning content to enhance competency-based training for the oil and gas sector in the Sultanate. This led to a win-win-win collaboration amongst the government, represented by the Information Technology Authority (ITA), a specialised private sector company (ASM Technologies), jobseekers and oil and gas organisations. This is also one of the first private-public partnership models in the Information Communication Technology (ICT) sector in Oman. A pilot phase was conducted for 8 months to develop four virtual applications for training in equipment and process engineering: oil rig familiarisation, a Health Safety Environment (HSE) application, a turbine application and the mechanical vapour compressor (MVC) water recycling plant, in order to enhance the competency level of the trainees. The immersive applications were installed in operational settings, which enabled new employees to practice and understand various processes and procedures regarding enhanced oil recovery. Existing employees used the applications to review the working principles in order to carry out troubleshooting scenarios. Concurrently, these applications were also developed by local Omani resources within the country. This created job opportunities for job-seekers as well as the establishment of a digital media sector. The purpose of this paper is to discuss how immersive technologies can enhance operational competencies, create jobs and establish a digital media sector in the Sultanate of Oman. Keywords: immersive, virtual reality, operations training, Omanisation
Procedia PDF Downloads 233
3122 Comfort Sensor Using Fuzzy Logic and Arduino
Authors: Samuel John, S. Sharanya
Abstract:
Automation has become an important part of our life. It has been used to control home entertainment systems, change the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort. Atmospheric comfort mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C and relative humidity is around 50%. However, it varies widely. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller using Arduino was developed using MATLAB. Arduino is an open-source hardware platform consisting of an ATmega chip (ATmega328), 14 digital input/output pins and an inbuilt ADC. It runs on 5 V and 3.3 V power supported by an on-board voltage regulator. Some of the digital pins in Arduino provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment, which includes support for the C, C++ and Java programming languages. In the present work, a soft sensor was introduced into this system that can indirectly measure temperature and humidity and process these measurements to ensure comfort. The Sugeno method (in which output variables are functions or singletons/constants, making it more suitable for implementation on microcontrollers) was used for the soft sensor in MATLAB and then interfaced to the Arduino, which is in turn interfaced to the temperature and humidity sensor DHT11. The temperature-humidity sensor DHT11 acts as the sensing element in this system. Further, a capacitive humidity sensor and a thermistor were also used to support the measurement of the temperature and relative humidity of the surroundings and to provide a digital signal on the data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. The comfort percentage was calculated and, accordingly, the temperature in the room was controlled. This system was placed in different rooms of the house to ensure that it modifies the comfort values depending on the temperature and relative humidity of the environment. Compared to existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and the coolers in the room were controlled. The main highlight of the project is its cost efficiency. Keywords: arduino, DHT11, soft sensor, sugeno
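To make the inference step concrete, the sketch below implements a zero-order Sugeno-style fuzzy computation of a comfort percentage from a temperature and humidity reading in plain Python. The membership functions and rule constants are hypothetical illustrations, not the authors' MATLAB design.

```python
# Hedged illustration of a zero-order Sugeno-style fuzzy inference computing a
# comfort percentage from temperature and relative humidity. Membership functions
# and rule constants are hypothetical, not the authors' MATLAB design.
def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort_percentage(temp_c, rh_percent):
    # Fuzzify inputs (hypothetical ranges around the 20-25 degC / ~50% RH targets).
    t_ok = tri(temp_c, 18, 22.5, 27)
    t_off = 1.0 - t_ok
    h_ok = tri(rh_percent, 30, 50, 70)
    h_off = 1.0 - h_ok

    # Sugeno rules: each rule output is a constant comfort level (0-100).
    rules = [
        (min(t_ok, h_ok), 100.0),   # temperature and humidity both comfortable
        (min(t_ok, h_off), 60.0),   # temperature ok, humidity off
        (min(t_off, h_ok), 50.0),   # humidity ok, temperature off
        (min(t_off, h_off), 10.0),  # both uncomfortable
    ]
    numerator = sum(w * z for w, z in rules)
    denominator = sum(w for w, _ in rules)
    return numerator / denominator if denominator else 0.0

print(comfort_percentage(23.0, 55.0))  # e.g. a reading taken from a DHT11
```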
Procedia PDF Downloads 314
3121 Legal Considerations in Fashion Modeling: Protecting Models' Rights and Ensuring Ethical Practices
Authors: Fatemeh Noori
Abstract:
The fashion industry is a dynamic and ever-evolving realm that continuously shapes societal perceptions of beauty and style. Within this industry, fashion modeling plays a crucial role, acting as the visual representation of brands and designers. However, behind the glamorous façade lies a complex web of legal considerations that govern the rights, responsibilities, and ethical practices within the field. This paper aims to explore the legal landscape surrounding fashion modeling, shedding light on key issues such as contract law, intellectual property, labor rights, and the increasing importance of ethical considerations in the industry. Fashion modeling involves the collaboration of various stakeholders, including models, designers, agencies, and photographers. To ensure a fair and transparent working environment, it is imperative to establish a comprehensive legal framework that addresses the rights and obligations of each party involved. One of the primary legal considerations in fashion modeling is the contractual relationship between models and agencies. Contracts define the terms of engagement, including payment, working conditions, and the scope of services. This section will delve into the essential elements of modeling contracts, the negotiation process, and the importance of clarity to avoid disputes. Models are not just individuals showcasing clothing; they are integral to the creation and dissemination of artistic and commercial content. Intellectual property rights, including image rights and the use of a model's likeness, are critical aspects of the legal landscape. This section will explore the protection of models' image rights, the use of their likeness in advertising, and the potential for unauthorized use. Models, like any other professionals, are entitled to fair and ethical treatment. This section will address issues such as working conditions, hours, and the responsibility of agencies and designers to prioritize the well-being of models. Additionally, it will explore the global movement toward inclusivity, diversity, and the promotion of positive body image within the industry. The fashion industry has faced scrutiny for perpetuating harmful standards of beauty and fostering a culture of exploitation. This section will discuss the ethical responsibilities of all stakeholders, including the promotion of diversity, the prevention of exploitation, and the role of models as influencers for positive change. In conclusion, the legal considerations in fashion modeling are multifaceted, requiring a comprehensive approach to protect the rights of models and ensure ethical practices within the industry. By understanding and addressing these legal aspects, the fashion industry can create a more transparent, fair, and inclusive environment for all stakeholders involved in the art of modeling.Keywords: fashion modeling contracts, image rights in modeling, labor rights for models, ethical practices in fashion, diversity and inclusivity in modeling
Procedia PDF Downloads 78
3120 Colour Segmentation of Satellite Imagery to Estimate Total Suspended Solid at Rawa Pening Lake, Central Java, Indonesia
Authors: Yulia Chalri, E. T. P. Lussiana, Sarifuddin Madenda, Bambang Trisakti, Yuhilza Hanum
Abstract:
Water is a natural resource needed by humans and other living creatures. The territorial waters of Indonesia make up 81% of the country's area, consisting of inland waters and the sea. The research object is inland waters in the form of lakes and reservoirs, since 90% of inland water is contained in them; therefore, their water quality should be monitored. One of the water quality parameters is Total Suspended Solid (TSS). Most of the earlier research did direct measurement by taking water samples to get TSS values. This method takes a long time and needs special tools, resulting in significant cost. Remote sensing technology has solved a lot of problems, such as the mapping of watersheds and sedimentation, monitoring disaster areas, mapping coastline change, and weather analysis. The aim of this research is to estimate the TSS of Rawa Pening lake in Central Java by using a Landsat 8 image. The result shows that the proposed method successfully estimates Rawa Pening's TSS. In situ TSS falls within the normal water quality range, and so does the estimation result of the segmentation method. Keywords: total suspended solid (TSS), remote sensing, image segmentation, RGB value
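As a rough illustration of the image-based workflow, the sketch below masks water pixels on an RGB composite and applies a hypothetical empirical band-ratio relation to estimate TSS; the file name, thresholds, and regression coefficients are assumptions and not the relation developed in the paper.

```python
# Illustrative sketch only: crude threshold-based water segmentation on RGB bands
# and a hypothetical empirical relation between band values and TSS. The actual
# segmentation rules and TSS model are those developed in the paper.
import numpy as np
import rasterio

with rasterio.open("landsat8_rgb.tif") as src:   # assumed 3-band R, G, B composite
    red = src.read(1).astype(float)
    green = src.read(2).astype(float)
    blue = src.read(3).astype(float)

# Very crude water mask: water is relatively dark and blue/green dominated.
brightness = (red + green + blue) / 3.0
water_mask = (brightness < 0.15 * brightness.max()) & (blue > red)

# Hypothetical empirical TSS model calibrated against in-situ samples,
# e.g. TSS (mg/L) = a * (red/green) + b, applied on water pixels only.
a, b = 120.0, -15.0
ratio = np.where(green > 0, red / green, 0.0)
tss = np.where(water_mask, a * ratio + b, np.nan)

print("Mean estimated TSS over water pixels:", np.nanmean(tss))
```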
Procedia PDF Downloads 416
3119 Ecosystems: An Analysis of Generation Z News Consumption, Its Impact on Evolving Concepts and Applications in Journalism
Authors: Bethany Wood
Abstract:
The world pandemic led to a change in the way social media was used by audiences, with young people spending more hours on these platforms due to lockdown. Reports by Ofcom have demonstrated that the internet is the second most popular platform for accessing news after television in the UK, with social media and the internet ranked as the most popular platforms for accessing news among those aged 16-24. These statistics are unsurprising considering that, at the time of writing, 98 percent of Generation Z (Gen Z) owned a smartphone, and the consequent ease and accessibility of social media. Technology is constantly developing and, with this, its importance is becoming more prevalent with each generation: the Baby Boomers (1946-1964) consider it something useful, whereas millennials (1981-1997) believe it a necessity for day-to-day living. Gen Z, otherwise known as digital natives, have grown up with this technology at their fingertips, and social media is the norm. It helps form their identity and their affiliations and opens gateways for them to engage with news in a new way. It is a common misconception that Gen Z do not consume news; they are simply doing so in a different way to their predecessors. Using a sample of 800 18-20 year olds whilst utilising Generational theory, Actor Network Theory and the Social Shaping of Technology, this research provides a critical analysis of how Gen Z's news consumption and engagement habits are developing along with technology to sculpt the future format of news and its distribution. From that perspective, allied with the empirical approach, it is possible to provide research-orientated advice for the industry and even help to redefine traditional concepts of journalism. Keywords: journalism, generation z, digital, social media
Procedia PDF Downloads 86
3118 Aerial Survey and 3D Scanning Technology Applied to the Survey of Cultural Heritage of Su-Paiwan, an Aboriginal Settlement, Taiwan
Authors: April Hueimin Lu, Liangj-Ju Yao, Jun-Tin Lin, Susan Siru Liu
Abstract:
This paper discusses the application of aerial survey technology and 3D laser scanning technology in the surveying and mapping work of the settlements and slate houses of the old Taiwanese aborigines. The relics of the old Taiwanese aborigines, with thousands of years of history, are widely distributed in the deep mountains of Taiwan, over a vast area with inconvenient transportation. When constructing the basic data of cultural assets, it is necessary to apply new technology to carry out efficient and accurate settlement mapping work. In this paper, taking the old Paiwan settlement as an example, an aerial survey of the settlement of about 5 hectares and 3D laser scanning of a slate house were carried out. The obtained orthophoto image was used as an important basis for drawing the settlement map. The 3D landscape data of topography and buildings derived from the aerial survey are important for subsequent preservation planning, while the 3D scan of the building provides a more detailed record of architectural forms and materials. The 3D settlement data from the aerial survey can be further applied to a 3D virtual model and animation of the settlement for virtual presentation. The information from the 3D scanning of the slate house can also be used for further digital archives and data queries through network resources. The results of this study show that, in large-scale settlement surveys, aerial surveying technology can be used to capture the topography of settlements with buildings and the spatial information of the landscape, while 3D scanning can be applied for small-scale records of individual buildings. This application of 3D technology greatly increases the efficiency and accuracy of the survey and mapping of aboriginal settlements and is very helpful for further preservation planning and the rejuvenation of aboriginal cultural heritage. Keywords: aerial survey, 3D scanning, aboriginal settlement, settlement architecture cluster, ecological landscape area, old Paiwan settlements, slate house, photogrammetry, SfM, MVS, point cloud, SIFT, DSM, 3D model
Procedia PDF Downloads 173
3117 Impact of Sports and Entertainment Marketing Strategies on the Professional Practices of Sports Managers in Nigeria
Authors: Ibraheem Musa Oluwatoyin, Olawuni Adisa, Abdulraheem Yinusa Owolabi
Abstract:
Nigeria's sports industry has grown, but ineffective management, inadequate marketing, and limited stakeholder engagement hinder progress. Effective marketing strategies are crucial, yet empirical research on their impact on Nigerian sports managers is scarce. This study investigates the impact of sports and entertainment marketing strategies on the professional practices of sports managers in Nigeria, employing a quantitative research design grounded in the Theory of Planned Behavior. The target population comprises 1,108 sports managers across various organizations in Nigeria, with a stratified random sample of 301 participants, ensuring representativeness based on organizational type (sports commissions/councils) and geographical zones. Data was collected using a structured questionnaire, which included sections on demographic information, the evaluation of marketing strategies, and their impact on decision-making, operational efficiency, stakeholder engagement, and performance. The questionnaire items were adapted from validated scales in marketing and sports management literature, achieving a Cronbach’s alpha of 0.85, indicating high internal consistency. Data collection occurred over eight weeks through both online and face-to-face surveys, ensuring ethical compliance with informed consent and data anonymization. Descriptive and inferential statistical methods, including Pearson Product Moment Correlation (PPMC), were employed for data analysis. The PPMC analyses revealed statistically significant relationships between digital platform marketing (r = 0.63, p = 0.000), sports marketing experience (r = 0.51, p = 0.000), and producing engaging sports content (r = 0.61, p = 0.000) with professional practices. These results suggest that digital platform marketing, sports marketing experience, and the creation of engaging content significantly enhance the effectiveness and performance of sports managers in Nigeria. The study contributes valuable insights for stakeholders in Nigeria’s sports industry, providing actionable recommendations for improving sports management practices through strategic marketing approaches.Keywords: professional practice, digital platform, experience sports marketing, producing engaging sports content
Procedia PDF Downloads 3
3116 Signal Integrity Performance Analysis in Capacitive and Inductively Coupled Very Large Scale Integration Interconnect Models
Authors: Mudavath Raju, Bhaskar Gugulothu, B. Rajendra Naik
Abstract:
The rapid advances in Very Large Scale Integration (VLSI) technology have resulted in the reduction of minimum feature sizes to sub-quarter microns and switching times to tens of picoseconds or even less. As a result, high-speed digital circuits degrade due to signal integrity issues such as coupling effects, clock feedthrough, crosstalk noise and delay uncertainty noise. Crosstalk noise in VLSI interconnects is a major concern, and its reduction has become more important for high-speed digital circuits. It must be considered most carefully in Deep Sub-Micron (DSM) and Ultra Deep Sub-Micron (UDSM) technologies. Increasing the spacing between the aggressor and victim lines is one technique to reduce crosstalk. Guard trace or shield insertion between the aggressor and victim is also one of the prominent options for minimizing crosstalk. In this paper, far-end crosstalk noise is estimated with an RLC interconnect model that includes mutual inductance and capacitance. The extent of crosstalk in capacitively and inductively coupled interconnects is also investigated in order to minimize it through the shield insertion technique. Keywords: VLSI, interconnects, signal integrity, crosstalk, shield insertion, guard trace, deep sub micron
Procedia PDF Downloads 187
3115 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome
Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler
Abstract:
Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Information and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP. A temporal image was created based on these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the (log) odds ratios reported by the logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black-box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation. Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model
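One common way to realize a perturbation-style impact score for a fitted black-box classifier is sketched below: a clinical feature is toggled off and the change in predicted log-odds is recorded, which can then be compared against the corresponding logistic regression coefficient. This is an illustration of the general idea only, not the authors' exact definition over temporal images.

```python
# Sketch of a perturbation-style impact score for a fitted black-box classifier:
# the change in predicted log-odds when one binary clinical feature is toggled off.
# Illustrates the general concept only; not the authors' exact definition.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier  # stand-in for the DNN

def impact_scores(model, X, feature_index):
    X_off = X.copy()
    X_off[:, feature_index] = 0                     # "remove" the condition
    p_on = model.predict_proba(X)[:, 1]
    p_off = model.predict_proba(X_off)[:, 1]
    logit_on = np.log((p_on + 1e-9) / (1 - p_on + 1e-9))
    logit_off = np.log((p_off + 1e-9) / (1 - p_off + 1e-9))
    return logit_on - logit_off                     # per-patient impact of the feature

# X: n_patients x n_features matrix, y: 1 = died within one year, 0 = survived
# dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
# lr = LogisticRegression(max_iter=1000).fit(X, y)
# The aggregated impact score for a feature can then be compared with the
# corresponding LR log odds ratio, lr.coef_[0][feature_index]:
# print(np.mean(impact_scores(dnn, X, feature_index=3)))
```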
Procedia PDF Downloads 153
3114 Effects of the Americans with Disabilities Act on Disability Representation in Mid-Century American Media Discourse
Authors: Si On Na
Abstract:
The development of American radio and print media since World War II has allowed people with disabilities to engage more directly with the public, gradually changing the perception that disabled people constitute a kind of social impairment or burden. People with disabilities have rarely been portrayed as equal to the non-disabled. In the postwar period, a dramatic shift from eugenicist conceptualizations of disability and widespread institutionalization gradually evolved into conditions of greater openness in public discourse. This discourse was marked at mid-century by telethons and news media (both print and television) which sought to commodify people with disabilities for commercial gain through stories that promoted alienating forms of empowerment alternating with paternalistic pity. By comparing studies of the history of American disability advocacy in the twentieth century and the evolution of the image of disability characteristic of mid-century media discourse, this paper will examine the relationship between the passage of the Americans with Disabilities Act of 1990 (ADA) and the expanded media representation of people with disabilities. This paper will argue that the legal mandate of the ADA ultimately transformed the image of people with disabilities from those who are weak and in need of support to viable consumers, encouraging traditional American print, film, and television media outlets to solicit the agency of people with disabilities in the authentic portrayal of themselves and their disabilities. Keywords: ADA, disability representation, media portrayal, postwar United States
Procedia PDF Downloads 182
3113 Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models
Authors: Aditya Karade, Sharada Falane, Dhananjay Deshmukh, Vijaykumar Mantri
Abstract:
Brain tumors pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging has shown promise in accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models (ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet) on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorized into different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models’ accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and fine-tuning techniques are employed to optimize model performance. Among the evaluated deep learning models for brain tumour detection, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, exhibiting a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. At the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%. Keywords: brain tumour, MRI image, detecting and classifying tumour, pre-trained models, transfer learning, image segmentation, data augmentation
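A hedged sketch of how several Keras pre-trained backbones can be compared with a common four-class head is shown below; the dataset directory layout, image size, and training schedule are assumptions rather than the paper's exact setup, and per-backbone preprocessing is omitted for brevity.

```python
# Sketch of comparing several Keras pre-trained backbones with a common 4-class
# head (meningioma, pituitary, glioma, no tumour). Directory layout, image size
# and epochs are assumptions; per-backbone preprocess_input is omitted for brevity.
import tensorflow as tf

backbones = {
    "ResNet50": tf.keras.applications.ResNet50,
    "Xception": tf.keras.applications.Xception,
    "DenseNet121": tf.keras.applications.DenseNet121,
    "MobileNet": tf.keras.applications.MobileNet,
}

train_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_mri/train", image_size=(224, 224), batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_mri/val", image_size=(224, 224), batch_size=32, label_mode="categorical")

results = {}
for name, ctor in backbones.items():
    base = ctor(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                      # transfer learning: freeze the base
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=3, verbose=0)
    results[name] = model.evaluate(val_ds, verbose=0)[1]

print(results)  # per-backbone validation accuracy
```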
Procedia PDF Downloads 75
3112 Acoustic Echo Cancellation Using Different Adaptive Algorithms
Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil
Abstract:
An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous application areas in science as well as in industry. Adaptive filtering techniques are used in a wide range of applications, including adaptive noise cancellation and echo cancellation. Acoustic echo is a common occurrence in today’s telecommunication systems. The signal interference caused by acoustic echo is distracting to both users and causes a reduction in the quality of the communication. In this paper, we review different adaptive filtering techniques to reduce this unwanted echo and examine the behavior of algorithms such as Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Variable Step-Size Least Mean Square (VSLMS), Variable Step-Size Normalized Least Mean Square (VSNLMS), the New Varying Step Size LMS Algorithm (NVSSLMS) and Recursive Least Square (RLS), with the goal of increasing communication quality. Keywords: adaptive acoustic, echo cancellation, LMS algorithm, adaptive filter, normalized least mean square (NLMS), variable step-size least mean square (VSLMS)
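As a concrete reference point for one of the listed algorithms, the sketch below implements a minimal NLMS echo canceller in NumPy: the adaptive filter models the echo path from the far-end signal and subtracts the estimated echo from the microphone signal. Filter length, step size and the synthetic echo path are illustrative choices.

```python
# Minimal NLMS acoustic echo canceller sketch: the adaptive filter models the
# echo path so the echo estimate can be subtracted from the microphone signal.
import numpy as np

def nlms_echo_cancel(far_end, mic, filter_len=128, mu=0.5, eps=1e-6):
    w = np.zeros(filter_len)          # adaptive filter weights (echo path model)
    error = np.zeros(len(mic))        # echo-cancelled output
    x_buf = np.zeros(filter_len)      # most recent far-end samples
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        echo_estimate = np.dot(w, x_buf)
        error[n] = mic[n] - echo_estimate
        w += (mu / (eps + np.dot(x_buf, x_buf))) * error[n] * x_buf  # NLMS update
    return error

# Synthetic check: mic = far_end filtered by an unknown echo path (+ small noise).
rng = np.random.default_rng(0)
far_end = rng.standard_normal(8000)
echo_path = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)
mic = np.convolve(far_end, echo_path)[:8000] + 0.01 * rng.standard_normal(8000)
out = nlms_echo_cancel(far_end, mic)
print("Residual echo power (last 1000 samples):", np.mean(out[-1000:] ** 2))
```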
Procedia PDF Downloads 80
3111 TomoTherapy® System Repositioning Accuracy According to Treatment Localization
Authors: Veronica Sorgato, Jeremy Belhassen, Philippe Chartier, Roddy Sihanath, Nicolas Docquiere, Jean-Yves Giraud
Abstract:
We analyzed the image-guided radiotherapy method used by the TomoTherapy® System (Accuray Corp.) for patient repositioning in clinical routine. The TomoTherapy® System computes X, Y, Z and roll displacements to match the reference CT, on which the dosimetry has been performed, with the pre-treatment MV CT. The accuracy of the repositioning method has been studied according to the treatment localization. For this, a database of 18774 treatment sessions, performed during 2 consecutive years (2016-2017 period) has been used. The database includes the X, Y, Z and roll displacements proposed by TomoTherapy® System as well as the manual correction of these proposals applied by the radiation therapist. This manual correction aims to further improve the repositioning based on the clinical situation and depends on the structures surrounding the target tumor tissue. The statistical analysis performed on the database aims to define repositioning limits to be used as security and guiding tool for the manual adjustment implemented by the radiation therapist. This tool will participate not only to notify potential repositioning errors but also to further improve patient positioning for optimal treatment.Keywords: accuracy, IGRT MVCT, image-guided radiotherapy megavoltage computed tomography, statistical analysis, tomotherapy, localization
Procedia PDF Downloads 226
3110 Copy Effect Myopic Anisometropia in a Pair of Monozygotic Twins: A Case Report
Authors: Fatma Sümer
Abstract:
Introduction: This case report aims to report myopic anisometropia with copy-image in monozygotic twins. Methods: In February 2021, a 6-year-old identical twin was seen, who was referred to us with the diagnosis of amblyopia in their left eye from an external center. Both twins had a full ophthalmic examination, which included visual acuity testing, ocular motility testing, cycloplegic refraction, and fundus examination. Results: On examination, “copy image” myopic anisometropia was discovered. Twin 1 had anisometropia with myopic astigmatism in the left eye. His cycloplegic refraction was +1.00 (-0.75x 75) in the right eye and -8.0 (-1.50x175) in the left eye. Similarly, twin 2 had anisometropia with myopic astigmatism in the left eye. His cycloplegic refraction was -7.75 (-1.50x180) in the left eye and +1.25 (-0.75x90 ) in the right eye. The best-corrected visual acuity was 20/60 in the amblyopic eyes and 20/20 in the unaffected eyes. There was no ocular deviation. In either patient, a slit-lamp microscopic examination revealed no abnormalities in the anterior parts of either eye. Fundoscopic examination revealed no abnormalities. No abnormal ocular movements were demonstrated. Conclusion: As far as we have reviewed in the literature, previous studies with twins were mostly concerned with mirror-effect myopic anisometropia and myopic anisometropia, whereas ipsilateral amblyopia and anisometropia were not reported in monozygotic twins. This case underscores the possible genetic basis of myopic anisometropia.Keywords: amblyopia, anisometropia, myopia, twins
Procedia PDF Downloads 159
3109 Advancing Phenological Understanding of Plants/Trees Through Phenocam Digital Time-lapse Images
Authors: Siddhartha Khare, Suyash Khare
Abstract:
Phenology, a crucial discipline in ecology, offers insights into the seasonal dynamics of organisms within natural ecosystems and the underlying environmental triggers. Leveraging the potent capabilities of digital repeat photography, PhenoCams capture invaluable data on the phenology of crops, plants, and trees. These cameras yield digital imagery in Red Green Blue (RGB) color channels, and some advanced systems even incorporate Near Infrared (NIR) bands. This study presents compelling case studies employing PhenoCam technology to unravel the phenology of black spruce trees. Through the analysis of RGB color channels, a range of essential color metrics including red chromatic coordinate (RCC), green chromatic coordinate (GCC), blue chromatic coordinate (BCC), vegetation contrast index (VCI), and excess green index (ExGI) are derived. These metrics illuminate variations in canopy color across seasons, shedding light on bud and leaf development. This, in turn, facilitates a deeper understanding of phenological events and aids in delineating the growth periods of trees and plants. The initial phase of this study addresses critical questions surrounding the fidelity of continuous canopy greenness records in representing bud developmental phases. Additionally, it discerns which color-based index most accurately tracks the seasonal variations in tree phenology within evergreen forest ecosystems. The subsequent section of this study delves into the transition dates of black spruce (Picea mariana (Mill.) B.S.P.) phenology. This is achieved through a fortnightly comparative analysis of the MODIS normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI). By employing PhenoCam technology and leveraging advanced color metrics, this study significantly advances our comprehension of black spruce tree phenology, offering valuable insights for ecological research and management.Keywords: phenology, remote sensing, phenocam, color metrics, NDVI, GCC
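The chromatic coordinates and the excess green index named above have simple closed forms (for example GCC = G / (R + G + B)). The sketch below computes them over a canopy region of interest in a single PhenoCam image; the file name and ROI coordinates are assumptions.

```python
# Sketch of the colour metrics named in the abstract, computed over a canopy
# region of interest in one PhenoCam RGB image (file name and ROI are assumptions).
import numpy as np
from PIL import Image

img = np.asarray(Image.open("phenocam_2020_06_15.jpg"), dtype=float)
roi = img[100:400, 200:600, :]                  # hypothetical canopy ROI (rows, cols)

r = roi[:, :, 0].mean()                          # mean digital numbers per channel
g = roi[:, :, 1].mean()
b = roi[:, :, 2].mean()
total = r + g + b

rcc = r / total                                  # red chromatic coordinate
gcc = g / total                                  # green chromatic coordinate
bcc = b / total                                  # blue chromatic coordinate
exgi = 2 * g - (r + b)                           # excess green index

print(f"RCC={rcc:.3f}  GCC={gcc:.3f}  BCC={bcc:.3f}  ExGI={exgi:.1f}")
# Computing these metrics for every image in a time series yields the seasonal
# canopy-greenness curves used to track bud and leaf development.
```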
Procedia PDF Downloads 62
3108 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and then the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied for modelling dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient. Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
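The thinning step described above can be illustrated with a short sketch: a persistent binary water mask is skeletonized into one-pixel-wide centre lines that can later be vectorized and built into a geometric network. File names and the noise-removal threshold are assumptions.

```python
# Sketch of the centre-line (thinning) step: a persistent binary water mask is
# skeletonized into one-pixel-wide river centre lines. The classification step
# that produces the mask is assumed to have been done already.
import rasterio
from skimage.morphology import skeletonize, remove_small_objects

with rasterio.open("persistent_water_mask.tif") as src:   # 1 = water, 0 = other
    water = src.read(1).astype(bool)
    profile = src.profile

water = remove_small_objects(water, min_size=50)   # drop small isolated noise patches
centrelines = skeletonize(water)                   # one-pixel-wide network

profile.update(dtype="uint8", count=1)
with rasterio.open("river_centrelines.tif", "w", **profile) as dst:
    dst.write(centrelines.astype("uint8"), 1)
# The raster centre lines can then be vectorized (e.g. a raster-to-polyline tool)
# and built into a topologically structured geometric network for flow routing.
```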
Procedia PDF Downloads 140
3107 Fusion of Finger Inner Knuckle Print and Hand Geometry Features to Enhance the Performance of Biometric Verification System
Authors: M. L. Anitha, K. A. Radhakrishna Rao
Abstract:
With the advent of modern computing technology, there is an increased demand for developing recognition systems that have the capability of verifying the identity of individuals. Recognition systems are required by several civilian and commercial applications for providing access to secured resources. Traditional recognition systems, which are based on physical identities, are not sufficiently reliable to satisfy the security requirements due to advances in forgery and identity impersonation methods. Recognizing individuals based on their unique physiological characteristics, known as biometric traits, is a reliable technique, since these traits are not transferable and they cannot be stolen or lost. Since the performance of a biometric-based recognition system depends on the particular trait that is utilized, the present work proposes a fusion approach which combines the inner knuckle print (IKP) trait of the middle, ring and index fingers with the geometrical features of the hand. The hand image captured from a digital camera is preprocessed to find the finger IKP as the region of interest (ROI) and the hand geometry features. Geometrical features are represented as the distances between different key points, and IKP features are extracted by applying the local binary pattern descriptor to the IKP ROI. Decision-level AND fusion was adopted, which improved the performance of the combined scheme. The proposed approach is tested on the database collected at our institute. The proposed approach is of significance since both hand geometry and IKP features can be extracted from the palm region of the hand. The fusion of these features yields a false acceptance rate of 0.75% and a false rejection rate of 0.86% for the verification tests conducted, which are lower than the results obtained using the individual traits. The results obtained confirm the usefulness of the proposed approach and the suitability of the selected features for developing a biometric recognition system based on features from the palmar region of the hand. Keywords: biometrics, hand geometry features, inner knuckle print, recognition
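The sketch below illustrates the two feature streams and the decision-level AND fusion described above: an LBP histogram over the IKP ROI, pairwise distances between hand key points, and a verification decision that accepts only when both matchers accept. Thresholds and key-point definitions are illustrative assumptions.

```python
# Sketch of the two feature streams and decision-level AND fusion (thresholds,
# key points and matching rules are illustrative assumptions, not the paper's).
import numpy as np
from skimage.feature import local_binary_pattern

def ikp_histogram(ikp_roi, points=8, radius=1):
    lbp = local_binary_pattern(ikp_roi, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def geometry_features(key_points):
    # Pairwise distances between hand key points (finger tips, valleys, wrist).
    pts = np.asarray(key_points, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i in range(len(pts)) for j in range(i + 1, len(pts))])

def verify(enrolled, probe, ikp_thresh=0.15, geo_thresh=10.0):
    ikp_dist = np.linalg.norm(enrolled["ikp"] - probe["ikp"])
    geo_dist = np.linalg.norm(enrolled["geo"] - probe["geo"])
    ikp_accept = ikp_dist < ikp_thresh
    geo_accept = geo_dist < geo_thresh
    return ikp_accept and geo_accept        # decision-level AND fusion

# enrolled = {"ikp": ikp_histogram(roi_a), "geo": geometry_features(points_a)}
# probe    = {"ikp": ikp_histogram(roi_b), "geo": geometry_features(points_b)}
# print(verify(enrolled, probe))
```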
Procedia PDF Downloads 221
3106 Semiautomatic Calculation of Ejection Fraction Using Echocardiographic Image Processing
Authors: Diana Pombo, Maria Loaiza, Mauricio Quijano, Alberto Cadena, Juan Pablo Tello
Abstract:
In this paper, we present a semi-automatic tool for calculating the ejection fraction from an echocardiographic video signal derived from a database in DICOM format from Clinica de la Costa, Barranquilla. This paper describes each of the steps and methods used in the calculation, including the acquisition and formation of the test samples, processing and, finally, the calculation of the parameters needed to obtain the ejection fraction. Two image segmentation methods were compared following a methodological framework that is similar only in the initial stages of processing (filtering and image enhancement) and differs at the end, when the algorithms are implemented (Active Contour and Region Growing algorithms). The results were compared with the measurements obtained by two different medical specialists in cardiology who calculated the ejection fraction of the study samples using the traditional method, which consists of drawing the region of interest directly on the computer using echocardiography equipment and a simple equation to calculate the desired value. The results showed that if the quality of the video samples is good (i.e., after the pre-processing there is evidence of an improvement in contrast), the values provided by the tool are substantially close to those reported by the physicians; also, the correlation between physicians does not vary significantly. Keywords: echocardiography, DICOM, processing, segmentation, EDV, ESV, ejection fraction
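Once the left-ventricle contour has been segmented at end-diastole and end-systole, the ejection fraction follows from EF = (EDV - ESV) / EDV x 100. The sketch below derives the volumes with the single-plane area-length approximation; the area and length values are illustrative numbers, not measurements from the paper's dataset.

```python
# Worked sketch of the final calculation: once the left-ventricle contour has been
# segmented at end-diastole and end-systole, volumes can be approximated (here with
# the single-plane area-length method) and the ejection fraction derived.
import math

def area_length_volume(area_cm2, length_cm):
    # Single-plane area-length approximation: V = 8 * A^2 / (3 * pi * L)
    return (8.0 * area_cm2 ** 2) / (3.0 * math.pi * length_cm)

def ejection_fraction(edv_ml, esv_ml):
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Example numbers (illustrative only, not from the paper's dataset):
edv = area_length_volume(area_cm2=35.0, length_cm=8.5)   # end-diastolic volume (EDV)
esv = area_length_volume(area_cm2=22.0, length_cm=7.0)   # end-systolic volume (ESV)
print(f"EDV={edv:.1f} mL, ESV={esv:.1f} mL, EF={ejection_fraction(edv, esv):.1f}%")
```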
Procedia PDF Downloads 427