Search results for: cut redundant information in image
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12795

11745 Parallel Processing in near Absence of Attention: A Study Using Dual-Task Paradigm

Authors: Aarushi Agarwal, Tara Singh, I.L Singh, Anju Lata Singh, Trayambak Tiwari

Abstract:

Simple discrimination in the near absence of attention has been widely observed. Dual-task studies with natural scenes have claimed that scene categorization is preattentive in nature and can proceed simultaneously with an attention-demanding task. In this study, multiple images were therefore presented at the periphery, initiating parallel processing in the near absence of attention. For the central demanding task, rotated letters were presented in both conditions, while natural and animal images were presented in the periphery. To identify the breakpoint of the ability to perform in the near absence of attention, one, two, or three peripheral images were presented simultaneously with the central task, and subjects had to respond when all belonged to the same category. Individual participant performance showed no significant difference between the central and peripheral tasks when a single peripheral image was shown. With two images, high-level parallel processing could still take place with little attentional resource. The eye-tracking results support this, as no major saccade was made in a large number of trials. The presentation of three images proved to be the breaking point of the capacity to perform without attentional assistance, as participants showed a confused eye-gaze pattern and failed to make the natural/animal image discrimination. Thus, we conclude that attention and awareness are independent mechanisms with limited capacities.

Keywords: attention, dual task paradigm, parallel processing, break point, saccade

Procedia PDF Downloads 209
11744 Information Exchange Process Analysis between Authoring Design Tools and Lighting Simulation Tools

Authors: Rudan Xue, Annika Moscati, Rehel Zeleke Kebede, Peter Johansson

Abstract:

Successful building simulation and analysis inevitably require information exchange between multiple building information modeling (BIM) software tools. BIM information exchange based on IFC is widely used. However, Industry Foundation Classes (IFC) files are not always reliable, and information can get lost when different software is used for modeling and simulations. In this research, interviews with lighting simulation experts and a case study provided by a company producing lighting devices were the research methods used to identify the necessary steps and data for successful information exchange between lighting simulation tools and authoring design tools. Model creation, information exchange, and model simulation have been identified as key aspects for the success of information exchange. The paper concludes with recommendations for improved information exchange and more reliable simulations that take all the needed parameters into consideration.

Keywords: BIM, data exchange, interoperability issues, lighting simulations

Procedia PDF Downloads 223
11743 Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images

Authors: Abder-Rahman Ali, Adélaïde Albouy-Kissi, Manuel Grand-Brochier, Viviane Ladan-Marcus, Christine Hoeffl, Claude Marcus, Antoine Vacavant, Jean-Yves Boire

Abstract:

In this paper, we present a new segmentation approach for focal liver lesions in contrast enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, considers type-II fuzzy sets to handle uncertainty due to the image modality (presence of speckle noise, low contrast, etc.), and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by a local recursive merging of ambiguous pixels. The method has been tested on a representative database. Compared to both Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces the segmentation errors.
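A minimal sketch of the two-cluster fuzzy-clustering thresholding idea (a type-I baseline, not the authors' type-II implementation; parameter values are assumptions):

```python
# Two-cluster Fuzzy C-Means on grey levels to pick an inter-cluster threshold.
import numpy as np

def fcm_threshold(image, m=2.0, n_iter=100, tol=1e-5):
    """Cluster pixel intensities into two fuzzy clusters and return the
    mid-point between the two cluster centres as a global threshold."""
    x = image.astype(float).ravel()
    # random initial memberships for the two clusters (columns sum to 1)
    u = np.random.rand(2, x.size)
    u /= u.sum(axis=0, keepdims=True)
    centers = np.zeros(2)
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
        # distance of every pixel to each centre (avoid division by zero)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = 1.0 / (d ** (2.0 / (m - 1)))
        u_new /= u_new.sum(axis=0, keepdims=True)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centers.mean()  # simplified optimum inter-cluster threshold

# usage: mask = image > fcm_threshold(image)
```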

Keywords: defuzzification, fuzzy clustering, image segmentation, type-II fuzzy sets

Procedia PDF Downloads 468
11742 Assessment and Analysis of Literary Criticism and Consumer Research

Authors: Mohammad Mirzaei

Abstract:

This article proposes literary criticism as a source of insight into consumer behavior, provides an extensive overview of literary criticism, offers a concrete illustrative analysis, and makes suggestions for further research. To do so, a literary analysis of advertising copy identifies elements that provide additional information to consumer researchers, and the contribution of literary criticism to consumer research is discussed. Important post-war critical schools of thought are reviewed, and relevant theoretical concepts are summarized. Ivory Flakes' advertisements are analyzed using a variety of concepts drawn from literary schools, primarily sociocultural and reader-response criticism. Suggestions for further research on content analysis, image analysis, and consumption history are presented.

Keywords: consumer behaviour, consumer research, consumption history, criticism

Procedia PDF Downloads 86
11741 Seawater Changes' Estimation at Tidal Flat in Korean Peninsula Using Drone Stereo Images

Authors: Hyoseong Lee, Duk-jin Kim, Jaehong Oh, Jungil Shin

Abstract:

The tidal flats of the Korean peninsula are among the most biodiverse tidal flats in the world. Digital elevation models (DEMs) are therefore continuously in demand for monitoring the tidal flat. In this study, DEMs of the tidal flat at different times were produced by means of a drone and commercial software in order to measure seawater change during high tide at a water channel in the tidal flat. To correct the produced DEMs of the tidal flat, which is inaccessible for collecting control points, a DEM matching method was applied using a reference DEM instead of a survey. After an ortho-image was made from the corrected DEM, a land-cover classified image was produced. The changes in seawater amount over time were analyzed using the classified images and DEMs. As a result, it was confirmed that the amount of water increased rapidly as time passed during high tide.
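A hedged sketch of the DEM-differencing step behind such an analysis; the variable names and cell size are illustrative, not values from the paper:

```python
# Estimating seawater volume change between two co-registered tidal-flat DEMs.
import numpy as np

def water_volume_change(dem_t1, dem_t2, water_mask, cell_size_m=0.05):
    """Approximate change in water volume (m^3) over the pixels classified
    as water-channel in the land-cover map."""
    dh = (dem_t2 - dem_t1)[water_mask]       # water-surface height change per pixel
    return np.nansum(dh) * cell_size_m ** 2  # sum of height changes times pixel area

# usage with illustrative arrays:
# dv = water_volume_change(dem_high_tide_early, dem_high_tide_late, channel_mask)
```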

Keywords: tidal flat, drone, DEM, seawater change

Procedia PDF Downloads 190
11740 Tinder, Image Merchandise and Desire: The Configuration of Social Ties in Today's Neoliberalism

Authors: Daniel Alvarado Valencia

Abstract:

Nowadays, the market offers us solutions for everything, creating the idea of the immediate availability of anything we could desire, and the Internet is the means through which to obtain all of this. The proposal of this paper is that this logic puts subjects in a situation of self-exploitation and treats the psyche as a productive force by configuring affection and desire from a neoliberal value perspective. It uses Tinder, drawing on ethnographic data from Mexico City users, as an example of this. Tinder is an application created to get dates, have sexual encounters, and find a partner. It works through the creation and management of a digital profile. It is an example of how futuristic and lonely the current era can be, since we have become used to interacting with other people through screens and images. At the same time, however, it provides solutions to loneliness, since technology transgresses, invades, and alters social practices in different ways. Tinder fits into this contemporary context: it is a concrete example of the processes of technification in which social bonds develop through certain devices offered by neoliberalism, through consumption, and where the search for love and courtship becomes possible through images and their consumption.

Keywords: desire, image, merchandise, neoliberalism

Procedia PDF Downloads 107
11739 Information Technology and Communications in Management of the Imperial Citadel of Thang Long-A World Heritage Site

Authors: Ngo the Bach

Abstract:

Information technology and communications are growing strongly and have penetrated almost the entire Vietnamese economy and society. The article presents an overview of information technology and communications applications in the management of the Central Sector of the Imperial Citadel of Thang Long (Hanoi, Vietnam), a World Heritage Site. The author also points out the opportunities and challenges of information technology and communications in the culture and heritage sectors, and the use of information technology as an effective tool to develop mass and interactive communications. The article emphasizes the advantages of information technology and communications in effectively supporting management reform with respect to the Imperial Citadel of Thang Long in particular and the management of world heritage sites in Vietnam in general.

Keywords: information technology, communications, management, culture, heritage

Procedia PDF Downloads 318
11738 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm

Authors: Annalakshmi G., Sakthivel Murugan S.

Abstract:

This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed method, the local directional encoded derivative binary pattern (LDEDBP). The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge response available in a particular region, thereby achieving an extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. Integrating the edge responses with the local binary pattern yields a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms, achieving the highest overall classification accuracy of 94%.
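A minimal sketch of the underlying texture-descriptor idea (standard LBP only; the paper's LDEDBP additionally encodes directional edge responses, which is not reproduced here):

```python
# Uniform LBP codes pooled into a normalised histogram feature vector.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Such per-image histograms could then feed an ELM or any other classifier.
```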

Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization

Procedia PDF Downloads 150
11737 Landcover Mapping Using Lidar Data and Aerial Image and Soil Fertility Degradation Assessment for Rice Production Area in Quezon, Nueva Ecija, Philippines

Authors: Eliza E. Camaso, Guiller B. Damian, Miguelito F. Isip, Ronaldo T. Alberto

Abstract:

Land-cover maps are important for many scientific, ecological, and land management purposes, and during the last decades a rapid decrease in soil fertility has been observed due to land-use practices such as rice cultivation. High-precision land-cover maps, which are important for economic management, are not yet available in the area. To ensure accurate land-cover mapping, remote sensing is a very suitable tool for this task and for automatic land-use and land-cover detection. The study not only provides high-precision land-cover maps but also estimates the rice production area that has undergone chemical degradation due to fertility decline. Land cover was delineated and classified into pre-defined classes to achieve proper detection of features. After generation of the land-cover map of the high-intensity rice cultivation area, a soil fertility degradation assessment of the rice production area was created to assess the impact on soils used in agricultural production. Using simple spatial analysis functions in ArcGIS, the land-cover map of the Municipality of Quezon in Nueva Ecija, Philippines was overlaid onto the fertility decline maps from the Land Degradation Assessment Philippines - Bureau of Soils and Water Management (LADA-Philippines-BSWM) to determine the area of rice crops where nitrogen, phosphorus, zinc, and sulfur deficiencies were most likely induced by a high dosage of urea and imbalanced N:P fertilization. The results show that 80.00% of the fallow area and 99.81% of the rice production area have a high soil fertility decline.
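An illustrative sketch of the overlay step outside ArcGIS; the raster names and class codes below are assumptions, not values from the study:

```python
# Fraction of rice-classified pixels that also fall in the high fertility-decline class.
import numpy as np

def degraded_fraction(landcover, decline, rice_class=1, high_decline_class=3):
    rice = landcover == rice_class
    degraded = decline == high_decline_class
    return 100.0 * np.count_nonzero(rice & degraded) / np.count_nonzero(rice)

# e.g. degraded_fraction(landcover_raster, fertility_decline_raster) -> % of rice area
```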

Keywords: aerial image, landcover, LiDAR, soil fertility degradation

Procedia PDF Downloads 244
11736 A Comparative Assessment of Industrial Composites Using Thermography and Ultrasound

Authors: Mosab Alrashed, Wei Xu, Stephen Abineri, Yifan Zhao, Jörn Mehnen

Abstract:

Thermographic inspection is a relatively new technique for Non-Destructive Testing (NDT) which has been gathering increasing interest due to its relatively low-cost hardware and extremely fast data acquisition. The technique is especially promising in the area of rapid automated damage detection and quantification. In collaboration with a major industry partner from the aerospace sector, advanced thermography-based NDT software for impact-damaged composites is introduced. The software is based on correlation analysis of time-temperature profiles in combination with an image enhancement process. The prototype software aims to a) better visualise the damage in a relatively easy-to-use way and b) automatically and quantitatively measure the properties of the degradation. Since degradation properties play an important role in the identification of degradation types, tests on artificially damaged specimens have been performed and the results analyzed.
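A hedged sketch of the correlation-map idea: each pixel's time-temperature profile is correlated against a reference profile from a sound region, so damaged areas appear as low-correlation pixels. The function and names are illustrative, not the partner's software:

```python
import numpy as np

def correlation_map(stack, reference_profile):
    """stack: (T, H, W) thermogram sequence; reference_profile: (T,) curve."""
    t, h, w = stack.shape
    x = stack.reshape(t, -1)                      # (T, H*W) per-pixel profiles
    x = x - x.mean(axis=0)
    r = reference_profile - reference_profile.mean()
    num = r @ x
    den = np.linalg.norm(r) * np.linalg.norm(x, axis=0) + 1e-12
    return (num / den).reshape(h, w)              # Pearson r per pixel
```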

Keywords: NDT, correlation analysis, image processing, damage, inspection

Procedia PDF Downloads 531
11735 Diagnostic Efficacy and Usefulness of Digital Breast Tomosynthesis (DBT) in Evaluation of Breast Microcalcifications as a Pre-Procedural Study for Stereotactic Biopsy

Authors: Okhee Woo, Hye Seon Shin

Abstract:

Purpose: To investigate the diagnostic power of digital breast tomosynthesis (DBT) in the evaluation of breast microcalcifications and its usefulness as a pre-procedural study for stereotactic biopsy, in comparison with full-field digital mammography (FFDM) and FFDM plus magnification images (FFDM+MAG). Methods and Materials: An IRB-approved retrospective observer performance study on DBT, FFDM, and FFDM+MAG was performed. Image quality was rated on a 5-point scoring system for lesion clarity (1, very indistinct; 2, indistinct; 3, fair; 4, clear; 5, very clear) and compared by the Wilcoxon test. Diagnostic power was compared using diagnostic values and AUC with 95% confidence intervals. Additionally, the procedural reports of the biopsies were analysed for patient positioning and adequacy of instruments. Results: DBT showed higher lesion clarity (median 5, interquartile range 4-5) than FFDM (3, 2-4, p-value < 0.0001), and no statistically significant difference from FFDM+MAG (4, 4-5, p-value=0.3345). The diagnostic sensitivity and specificity of DBT were 86.4% and 92.5%; FFDM 70.4% and 66.7%; FFDM+MAG 93.8% and 89.6%. The AUCs of DBT (0.88) and FFDM+MAG (0.89) were larger than that of FFDM (0.59, p-values < 0.0001), but there was no statistically significant difference between DBT and FFDM+MAG (p-value=0.878). In 2 cases with DBT, a petite needle could be appropriately prepared; in the other 3 without DBT, patient repositioning was needed. Conclusion: DBT showed better image quality and diagnostic values than FFDM and was equivalent to FFDM+MAG in the evaluation of breast microcalcifications. Evaluation with DBT as a pre-procedural study for breast stereotactic biopsy can lead to more accurate localization and successful biopsy, and can also waive the need for additional magnification images.

Keywords: DBT, breast cancer, stereotactic biopsy, mammography

Procedia PDF Downloads 290
11734 Investigating the Impact of Super Bowl Participation on Local Economy: A Perspective of Stock Market

Authors: Rui Du

Abstract:

This paper attempts to assess the impact of a major sporting event, the Super Bowl, on local economies. The identification strategy is to compare the winning and losing cities at the National Football League (NFL) conference finals under the assumption of similar pre-treatment trends. The stock market performance of companies headquartered in these cities is used to capture sudden changes in local economic activity over a short time span. The exogenous variation in the football game outcome allows a straightforward difference-in-differences approach to identify the effect. This study finds that the post-event trends in winning and losing cities diverge despite the fact that both sets of cities have economically and statistically similar pre-event trends. The empirical analysis provides suggestive evidence of a positive, significant local economic impact of conference final wins, possibly through city image enhancement. Further empirical evidence shows the presence of heterogeneous effects across industrial sectors, suggesting that the city-image-enhancing effect of Super Bowl participation is empirically relevant for changes in the composition of local industries. The study also adopts a similar strategy to examine the local economic impact of Super Bowl successes but finds no statistically significant effect.
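A sketch of the difference-in-differences setup described above, using synthetic column names (ret, win, post) that stand in for the paper's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per firm-day, with
#   ret  = stock return of a firm headquartered in a finalist city
#   win  = 1 if the city won the conference final, 0 if it lost
#   post = 1 for trading days after the game, 0 before
def did_estimate(df: pd.DataFrame):
    model = smf.ols("ret ~ win + post + win:post", data=df).fit()
    return model.params["win:post"], model  # the interaction term is the DiD effect

# Under parallel pre-trends, the win:post coefficient identifies the local
# economic effect of winning relative to losing.
```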

Keywords: Super Bowl participation, local economies, city image enhancement, difference-in-differences, industrial sectors

Procedia PDF Downloads 227
11733 Liver Tumor Detection by Classification through FD Enhancement of CT Image

Authors: N. Ghatwary, A. Ahmed, H. Jalab

Abstract:

In this paper, an approach to liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differential (FD) enhancement is applied to the liver CT images with the aim of enhancing texture and edge features. A fusion method is then applied to merge the various enhanced images and produce a variety of feature improvements, which increases the accuracy of classification. Each image is divided into NxN non-overlapping blocks to extract the desired features. A support vector machine (SVM) classifier is then trained on a supplied dataset different from the tested one. Finally, each block is identified as either tumor or non-tumor. Our approach is validated on a group of patients' CT liver tumor datasets. The experimental results demonstrate the efficiency of detection of the proposed technique.
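An illustrative sketch of the block-wise classification step. The feature function below is a stand-in for the paper's FD-enhanced texture features, and the block size is assumed:

```python
# Cut the image into NxN non-overlapping blocks and label each with a trained SVM.
import numpy as np
from sklearn.svm import SVC

def classify_blocks(image, clf: SVC, n=16):
    h, w = image.shape
    labels = np.zeros((h // n, w // n), dtype=int)
    for i in range(h // n):
        for j in range(w // n):
            block = image[i * n:(i + 1) * n, j * n:(j + 1) * n]
            # placeholder features: mean, spread, and a simple gradient statistic
            feat = [block.mean(), block.std(), np.abs(np.diff(block, axis=0)).mean()]
            labels[i, j] = clf.predict([feat])[0]   # 1 = tumor, 0 = non-tumor
    return labels
```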

Keywords: fractional differential (FD), computed tomography (CT), fusion, alpha, texture features

Procedia PDF Downloads 346
11732 Dog Chest Homogeneous Phantom for Image Optimization

Authors: Maris Eugênia Dela Rosa, Ana Luiza Menegatti Pavan, Marcela De Oliveira, Diana Rodrigues De Pina, Luis Carlos Vulcano

Abstract:

In veterinary as well as in human medicine, radiological study is essential for a safe diagnosis in clinical practice; thus, the quality of the radiographic image is crucial. In recent years there has been an increasing substitution of screen-film image acquisition systems by computed radiography (CR) equipment without adequate adjustment of the technique charts. Furthermore, carrying out a radiographic examination on a veterinary patient requires human assistance for restraint, which can compromise image quality, increase the dose to the animal and to occupationally exposed staff, and increase the cost to the institution. Image optimization procedures and the construction of radiographic techniques are performed with the use of homogeneous phantoms. In this study, we sought to develop a homogeneous phantom of the canine chest to be applied to the optimization of these images for the CR system. To build the simulator, a database was created with retrospective chest computed tomography (CT) images from the Veterinary Hospital of the Faculty of Veterinary Medicine and Animal Science - UNESP (FMVZ/Botucatu). Images were divided into four groups according to animal weight, employing the size classification proposed by Hoskins & Goldston. The thickness of biological tissues was quantified in 80 animals, separated into groups of 20 animals according to their weights: (S) Small, equal to or less than 9.0 kg; (M) Medium, between 9.0 and 23.0 kg; (L) Large, between 23.1 and 40.0 kg; and (G) Giant, over 40.1 kg. The mean weight for group (S) was 6.5±2.0 kg, (M) 15.0±5.0 kg, (L) 32.0±5.5 kg, and (G) 50.0±12.0 kg. An algorithm was developed in Matlab to classify and quantify the biological tissues present in the CT images and convert them into simulator materials. To classify the tissues present, membership functions were created from the retrospective CT scans according to the type of tissue (adipose, muscle, trabecular or cortical bone, and lung tissue). After conversion of the biological tissue thicknesses into equivalent material thicknesses (acrylic simulating soft tissues, aluminum simulating bone tissues, and air simulating the lung), four different homogeneous phantoms were obtained: (S) 5 cm of acrylic, 0.14 cm of aluminum, and 1.8 cm of air; (M) 8.7 cm of acrylic, 0.2 cm of aluminum, and 2.4 cm of air; (L) 10.6 cm of acrylic, 0.27 cm of aluminum, and 3.1 cm of air; and (G) 14.8 cm of acrylic, 0.33 cm of aluminum, and 3.8 cm of air. The developed canine homogeneous phantom is a practical tool that will be employed in future work to optimize veterinary X-ray procedures.

Keywords: radiation protection, phantom, veterinary radiology, computed radiography

Procedia PDF Downloads 409
11731 Optimized Road Lane Detection Through a Combined Canny Edge Detection, Hough Transform, and Scalable Region Masking Toward Autonomous Driving

Authors: Samane Sharifi Monfared, Lavdie Rada

Abstract:

Nowadays, autonomous vehicles are developing rapidly toward facilitating human car driving. One of the main issues is road lane detection, for suitable guidance direction and car accident prevention. This paper aims to improve and optimize road lane detection based on a combination of camera calibration, the Hough transform, and Canny edge detection. The video processing is implemented using the OpenCV library, with the novelty of a scalable region mask. The aim of the study is to introduce automatic road lane detection techniques with minimal manual intervention from the user.
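A compressed sketch of the classical pipeline the paper builds on: Canny edges, a region-of-interest mask scaled to the frame size, then a probabilistic Hough transform. The thresholds and mask fractions below are assumptions, not the paper's values:

```python
import cv2
import numpy as np

def detect_lanes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    h, w = edges.shape
    # scalable trapezoidal mask defined as fractions of the frame size
    roi = np.array([[(int(0.1 * w), h), (int(0.45 * w), int(0.6 * h)),
                     (int(0.55 * w), int(0.6 * h)), (int(0.9 * w), h)]], np.int32)
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)
    return cv2.HoughLinesP(masked, 1, np.pi / 180, 50,
                           minLineLength=40, maxLineGap=100)
```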

Keywords: hough transform, canny edge detection, optimisation, scalable masking, camera calibration, improving image quality, image processing, video processing

Procedia PDF Downloads 79
11730 X-Ray Detector Technology Optimization in CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes have been investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise is already improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image-chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 253
11729 The Investigation of Relationship between Accounting Information and the Value of Companies

Authors: Golamhassan Ghahramani Aghdam, Pedram Bavili Tabrizi

Abstract:

The aim of this research is to investigate the relationship between accounting information and the value of companies listed on the Tehran exchange market. The dependent variable in this research is the value of a company, measured by the price coefficient, and the independent variables are balance sheet information, profit and loss information, cash flow statement information, and profit quality characteristics. The profit quality characteristics considered are relevance and timeliness. This is applied research, and the research population includes all companies active on the Tehran exchange market. A sample of 194 companies was selected by the systematic method for the period 2018-2019. A multivariable linear regression model was used to test the hypotheses. The results show that there is no relationship between accounting information and company value (stock value), which may be due to the lack of efficiency of the investment market and the inability of investment market participants to use the accounting information.

Keywords: accounting information, company value, profit quality characteristics, price coefficient

Procedia PDF Downloads 124
11728 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models

Authors: Ainouna Bouziane

Abstract:

The ability of Electron Tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in the last decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (Compressed Sensing - Total Variation Minimization) algorithms to reveal more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not been properly addressed yet, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level, or the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
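A hedged sketch of the accuracy-assessment idea: a segmented reconstruction is compared voxel-wise against the known material-realistic phantom, yielding a relative volume error and a Dice overlap. The metrics chosen here are illustrative, not the group's exact figures of merit:

```python
import numpy as np

def segmentation_accuracy(segmented, phantom):
    """segmented, phantom: boolean volumes of the same shape."""
    vol_err = (segmented.sum() - phantom.sum()) / phantom.sum()   # relative volume error
    dice = 2 * np.logical_and(segmented, phantom).sum() / (segmented.sum() + phantom.sum())
    return vol_err, dice

# Repeating this over tilt ranges, noise levels, or orientations quantifies how
# each experimental parameter degrades the recovered metrology.
```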

Keywords: electron tomography, supported catalysts, nanometrology, error assessment

Procedia PDF Downloads 70
11727 Alignment of Information System Strategy and Green Information System Strategy: Comprehension and a Review of the Literature

Authors: Wartika Memed Purawinata, Kridanto Surendro, Husni Sastramiharja, Iping Supriana S.

Abstract:

Information systems are among the contributors to environmental degradation and pollution: the use of IT equipment and its energy consumption are increasing, the life cycles of IT equipment are getting shorter, IT equipment must eventually be disposed of as waste, and so on; therefore, information systems should play a role in addressing the related environmental issues. Organizations need to develop green capabilities to minimize negative impacts on the environment. Although the green information system is an important topic, many organizations fail to manage the environment adequately because they ignore the strategic aspect. Strategic alignment is very important to ensure that everyone carries out the activities of the organization in the same direction. Alignment helps an organization determine what is most important for it and then build a road map to achieve the organizational goal. Therefore, this paper reviews alignment, information systems strategy, and green IS strategy. This discussion is expected to provide an understanding of the alignment of information systems strategy and green IS strategy, and of its relationship with the achievement of business goals under a commitment to reduce the negative impact of information systems on the environment.

Keywords: alignment, strategy, information system, green

Procedia PDF Downloads 441
11726 Using Technology to Enhance the Student Assessment Experience

Authors: Asim Qayyum, David Smith

Abstract:

The use of information tools is a common activity for students at any educational stage when they encounter online learning activities. Finding the relevant information for particular learning tasks is the topic of this paper, which investigates the use of information tools by a group of student participants. The paper describes and discusses the results, with particular implications for use in higher education, and the findings suggest that improvements in assessment design and subsequent student learning may be achieved by structuring the purposefulness of information tool usage and the online reading behaviors of university students.

Keywords: information tools, assessment, online learning, student assessment experience

Procedia PDF Downloads 542
11725 An Inviscid Compressible Flow Solver Based on Unstructured OpenFOAM Mesh Format

Authors: Utkan Caliskan

Abstract:

Two numerical codes based on the finite volume method are developed in order to solve the compressible Euler equations and simulate the flow through a forward-facing step channel. Both algorithms use the AUSM+-up (Advection Upstream Splitting Method) scheme for flux splitting and a two-stage Runge-Kutta scheme for time stepping. In this study, the flux calculations differentiate between the algorithm based on the OpenFOAM mesh format, called the 'face-based' algorithm, and the basic algorithm, called the 'element-based' algorithm. The face-based algorithm avoids redundant flux computations and is also more flexible with hybrid grids. Moreover, some of OpenFOAM’s preprocessing utilities can be used on the mesh. Parallelization of the face-based algorithm, for which atomic operations are needed due to the shared-memory model, is also presented. For several mesh sizes, a 2.13x speed-up is obtained with the face-based approach over the element-based approach.
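A toy sketch (in Python rather than the solver's language) of what distinguishes the face-based loop: every interior face is visited once and the same flux is scattered to its owner and neighbour cells, so no flux is computed twice. Names and layout are assumptions:

```python
import numpy as np

def face_based_residual(owner, neighbour, flux, n_cells):
    """owner/neighbour: cell index per face; flux: per-face flux (outward from
    owner). Returns the accumulated residual per cell."""
    res = np.zeros(n_cells)
    np.add.at(res, owner, -flux)      # flux leaves the owner cell
    np.add.at(res, neighbour, flux)   # and enters the neighbour cell
    return res

# In a shared-memory (OpenMP-style) parallelisation of this loop, the two
# scatter updates are the operations that need to be atomic.
```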

Keywords: cell centered finite volume method, compressible Euler equations, OpenFOAM mesh format, OpenMP

Procedia PDF Downloads 307
11724 Assessment of Sleep Disorders in Moroccan Women with Gynecological Cancer: Cross-Sectional Study

Authors: Amina Aquil, Abdeljalil El Got

Abstract:

Background: Sleep quality is one of the most important indicators of the quality of life of patients suffering from cancer. Many factors could affect sleep quality and can therefore be considered associated predictors. Methods: The aim of this study was to assess the prevalence of sleep disorders and the factors associated with impaired sleep quality in Moroccan women with gynecological cancer. A cross-sectional study was carried out within the oncology department of the Ibn Rochd University Hospital, Casablanca, on Moroccan women who had undergone radical surgery for gynecological cancer (n=100). Translated and validated Arabic versions of the following international scales were used: the Pittsburgh Sleep Quality Index (PSQI), the Hospital Anxiety and Depression Scale (HADS), Rosenberg's Self-Esteem Scale (RSES), and the Body Image Scale (BIS). Results: 78% of participants were considered poor sleepers. Most of the patients exhibited very poor subjective sleep quality, low sleep latency, a short sleep duration, and low habitual sleep efficiency. The vast majority of these patients were in poor shape during the day and did not use sleep medication. Waking up in the middle of the night or early in the morning and getting up to use the bathroom were the main reasons for poor sleep quality. PSQI scores were positively correlated with anxiety, depression, body image dissatisfaction, and lower self-esteem (p < 0.001). Conclusion: Sleep quality and its predictors require systematic evaluation and adequate management to prevent sleep disturbances and mental distress as well as to improve the quality of life of these patients.

Keywords: body image, gynecological cancer, self esteem, sleep quality

Procedia PDF Downloads 107
11723 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs

Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We propose a method for processing, analyzing, and presenting ground photographs with the following elements: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the ground-level landscape photographs are created, or existing ones supplemented; 4) single or multiple ground-level landscape photographs are used to develop specialized geoinformation layers, schematic maps, or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. It is suggested to match each photo with a polygonal geoinformation layer, which is a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Calculation of visibility areas is performed in a geoinformation system within a sector using a digital relief model of the study area and visibility analysis functions. Superposition of the visibility sectors corresponding to the various camera viewpoints allows the landscape photos to be matched with each other to create a complete and coherent representation of the space in question. User-defined data or phenomena can be marked on the images and then superimposed over the visibility sector in the form of map symbols. The spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language. The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal visibility-sector layers for a large number of different photography points are topologically superimposed, a layer showing which sections of the entire study area are visible in the photographs is formed. As a result of this overlapping of sectors, areas that do not appear in any photo are assessed as gaps. From this procedure, it becomes possible to determine which photos display a specific area and from which photography points it is visible; this information may be obtained either as a query on the map or as a query on the attribute table of the layer. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 to 2023. It has been successfully used in combination with other ground-based and remote sensing methods to study the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education, project No. FEUG-2023-0002 (image representation), and the Russian Science Foundation, project No. 24-24-00235 (automated textual description).
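An illustrative sketch of the coverage idea: boolean visibility rasters, one per camera viewpoint, are summed to show how many photographs cover each cell, and cells with a count of zero are the gaps that appear in no photograph. The data structures are assumptions, not the authors' GIS workflow:

```python
import numpy as np

def coverage_map(visibility_rasters):
    """visibility_rasters: iterable of equal-sized boolean arrays
    (True = the cell lies inside that photograph's visibility sector)."""
    count = np.sum(np.stack(list(visibility_rasters)), axis=0)
    gaps = count == 0
    return count, gaps

# Querying count[row, col] tells how many viewpoints see a given location;
# the individual per-viewpoint rasters tell which photographs those are.
```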

Keywords: woody vegetation, repeated photographs

Procedia PDF Downloads 47
11722 Intelligent Grading System of Apple Using Neural Network Arbitration

Authors: Ebenezer Obaloluwa Olaniyi

Abstract:

In this paper, an intelligent system is designed to grade apples as either defective or healthy for production in food processing. The paper is divided into two phases. In the first phase, image processing techniques are employed to extract the necessary features of the apple. These techniques include grayscale conversion and segmentation, where a threshold value is chosen to separate the foreground of the images from the background; edge detection is then employed to bring out the features in the images. The extracted features are fed into the neural network in the second phase of the paper. The second phase is a classification phase, where a neural network is employed to classify the defective apples from the healthy apples. In this phase, the network is trained with backpropagation and tested as a feed-forward network. The recognition rate obtained shows that our system is more accurate and faster compared with previous work.
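A hedged end-to-end sketch of the two-phase idea: simple image features (grayscale statistics, threshold segmentation, edge density) feed a small backpropagation-trained network. The feature choices and threshold are illustrative assumptions:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def apple_features(bgr_image, thresh=120):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, fg = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)   # segmentation
    edges = cv2.Canny(gray, 50, 150)                              # edge detection
    return [gray.mean(), fg.mean() / 255.0, edges.mean() / 255.0]

# phase 2: classification (X = feature rows, y = 0 healthy / 1 defective)
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, y_train)
# clf.predict([apple_features(new_image)])
```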

Keywords: image processing, neural network, apple, intelligent system

Procedia PDF Downloads 387
11721 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a kind of lung disease that creates congestion in the chest, and such pneumonic conditions can lead to loss of life when the congestion is severe. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or Covid-19-induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes raw computed tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT images. This CNN model ensures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. The simulation outcomes using a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.

Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 141
11720 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years; timely diagnosis, detection, and prediction reduce the need for treatments carrying the risk of invasive surgery and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) is used for image enhancement. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid, and eccentricity) are computed for the tumor-segmented image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient's condition as normal or abnormal, while an Artificial Neural Network (ANN) is used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
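A minimal sketch of the wavelet-feature idea: a 2-D Haar DWT of the segmented lung image yields sub-band statistics that, together with region or GLCM features, can feed the first-level KNN classifier. Feature choices and parameters are assumptions:

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(gray):
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
    return [np.mean(np.abs(b)) for b in (cA, cH, cV, cD)] + \
           [np.std(b) for b in (cA, cH, cV, cD)]

# first-level classification (normal vs abnormal), with X rows built from dwt_features:
# knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# knn.predict([dwt_features(test_image)])
```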

Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI

Procedia PDF Downloads 138
11719 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes have been investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise is already improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image-chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 186
11718 A Preliminary Study of Urban Resident Space Redundancy in the Context of Rapid Urbanization: Based on Urban Research of Hongkou District of Shanghai

Authors: Ziwei Chen, Yujiang Gao

Abstract:

Rapid urbanization has left much of the physical space in Chinese cities in a state of duplication and dislocation, forming many everyday spaces that cannot be standardized, typed, or identified, such as illegal construction. This phenomenon is known as urban spatial redundancy and is often excluded from mainstream architectural discussion because of its derogatory labels of 'remaining' and 'excessive'. In recent years, some practicing architects have begun to pay attention to this phenomenon and have tried to tap the value behind it. In this context, the author takes the redundancy of residents' space as the research object and, based on an urban survey of redundant living space in Hongkou District of Shanghai, explores what it can contribute to urban architectural renewal and to innovative residential area models. On this basis, the paper shows that the changes accumulated over the long-term use of a building can be fed back into the goals set before design, which is an important link in, and part of the significance of, a building's existence.

Keywords: rapid urbanization, living space redundancy, architectural renewal, residential area model

Procedia PDF Downloads 125
11717 Digital Watermarking Based on Visual Cryptography and Histogram

Authors: R. Rama Kishore, Sunesh

Abstract:

Nowadays, robust and secure watermarking algorithms and their optimization have become the need of the hour. A watermarking algorithm is presented to achieve copyright protection for the owner based on visual cryptography, the histogram shape property, and entropy. Both the host image and the watermark are preprocessed: the host image using a Butterworth filter, and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares. One share is used for embedding the watermark, and the other is used for resolving any dispute with the aid of a trusted authority. Use of the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, the histogram, and entropy makes the algorithm more robust and imperceptible while protecting the owner's copyright.
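A hedged sketch of the standard (2,2) visual-cryptography step applied to a binary watermark: each watermark pixel expands into a 2x2 block, white pixels get identical random blocks in both shares, black pixels get complementary blocks, so overlaying the shares reveals the watermark. The pixel-expansion patterns below are one common choice, not necessarily the paper's:

```python
import numpy as np

PATTERNS = np.array([[[1, 0], [0, 1]], [[0, 1], [1, 0]]], dtype=np.uint8)

def make_shares(watermark_binary, rng=np.random.default_rng()):
    h, w = watermark_binary.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # black pixel (0): complementary block; white pixel (1): same block
            s2[2*i:2*i+2, 2*j:2*j+2] = p if watermark_binary[i, j] else 1 - p
    return s1, s2  # one share is embedded, the other kept by the trusted authority
```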

Keywords: digital watermarking, visual cryptography, histogram, Butterworth filter

Procedia PDF Downloads 346
11716 Algorithm for Information Retrieval Optimization

Authors: Kehinde K. Agbele, Kehinde Daniel Aruleba, Eniafe F. Ayetiran

Abstract:

When using Information Retrieval Systems (IRS), users often present search queries made of ad-hoc keywords. It is then up to the IRS to obtain a precise representation of the user’s information need and the context of the information. This paper investigates the optimization of an IRS to individual information needs in order of relevance. The study addresses the development of algorithms that optimize the ranking of documents retrieved from an IRS. It discusses and describes a Document Ranking Optimization (DROPT) algorithm for information retrieval (IR) in an Internet-based or designated-database environment. As the volume of information available online and in designated databases grows continuously, ranking algorithms can play a major role in the context of search results. In this paper, a DROPT technique for documents retrieved from a corpus is developed with respect to document index keywords and the query vectors. This is based on calculating the weight (
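A generic sketch of weight-based ranking (an illustration of the idea, not the DROPT algorithm itself): documents and an ad-hoc query are weighted with TF-IDF and ranked by cosine similarity of the document index keywords against the query vector:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_documents(docs, query):
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    return scores.argsort()[::-1], scores   # indices in decreasing relevance order

# ranking, scores = rank_documents(corpus, "cut redundant information in image")
```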

Keywords: information retrieval, document relevance, performance measures, personalization

Procedia PDF Downloads 231