Search results for: images
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2324

254 Significant Factor of Magnetic Resonance for Survival Outcome in Rectal Cancer Patients Following Neoadjuvant Combined Chemotherapy and Radiation Therapy: Stratification of Lateral Pelvic Lymph Node

Authors: Min Ju Kim, Beom Jin Park, Deuk Jae Sung, Na Yeon Han, Kichoon Sim

Abstract:

Purpose: The purpose of this study is to determine the significant magnetic resonance (MR) imaging factors of lateral pelvic lymph nodes (LPLN) in the assessment of survival outcomes after neoadjuvant combined chemotherapy and radiation therapy (CRT) in patients with mid/low rectal cancer. Materials and Methods: The institutional review board approved this retrospective study of 63 patients with mid/low rectal cancer who underwent MR imaging before and after CRT; patient consent was not required. Surgery was performed within 4 weeks after CRT. The locations of LPLNs were divided into the following four groups: 1) common iliac, 2) external iliac, 3) obturator, and 4) internal iliac lymph nodes. The short- and long-axis diameters, number, shape (ovoid vs. round), signal intensity (homogeneous vs. heterogeneous), margin (smooth vs. irregular), and diffusion-weighted restriction of LPLNs were analyzed on pre- and post-CRT images. For treatment response based on size, lymph node groups were defined as group 1) short-axis diameter ≤ 5 mm on both pre- and post-CRT MR, group 2) > 5 mm before CRT decreasing to ≤ 5 mm after CRT, and group 3) persistent size > 5 mm before and after CRT. Clinical findings were also evaluated. Disease-free survival and overall survival rates were evaluated, and the risk factors for survival outcomes were analyzed using Cox regression analysis. Results: Patients in group 3 (persistent size > 5 mm) showed significantly lower survival rates than groups 1 and 2 (disease-free survival rates of 36.1% vs. 78.8% and 88.8%; p < 0.001). The size response (groups 1-3), multiplicity of LPLN, carcinoembryonic antigen (CEA) level, patient age, T and N stage, vessel invasion, and perineural invasion were significant factors affecting the disease-free or overall survival rate in univariate analysis (p < 0.05). 
Persistent size (group 3) and multiplicity of LPLN were independent risk factors among MR imaging features influencing the disease-free survival rate (HR = 10.087, p < 0.05; HR = 4.808, p < 0.05). Perineural invasion and T stage were independent histologic risk factors (HR = 16.594, p < 0.05; HR = 15.891, p < 0.05). Conclusion: A persistent size greater than 5 mm on both pre- and post-CRT MR and multiplicity of LPLN were significant MR factors affecting survival outcomes in patients with mid/low rectal cancer.
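Group-wise disease-free survival rates like those reported above are typically obtained with the Kaplan-Meier estimator before Cox regression is applied. A minimal Python sketch follows; the follow-up times and event flags are entirely hypothetical illustration data, not the study's patients:

```python
# Kaplan-Meier estimator: S(t) = prod over event times t_i <= t of (1 - d_i / n_i),
# where d_i = events at t_i and n_i = subjects still at risk just before t_i.

def kaplan_meier(times, events):
    """Return [(time, survival)] at each distinct event time.

    times: follow-up times; events: 1 = event (e.g. recurrence), 0 = censored.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = n_at_t = 0
        # Group all subjects sharing this follow-up time.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= n_at_t
    return curve

# Hypothetical "persistent > 5 mm" group: follow-up in months.
times = [6, 9, 12, 14, 20, 24, 30, 36]
events = [1, 1, 1, 0, 1, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(f"t = {t:2d} months, DFS = {s:.3f}")
```

Censored subjects (event flag 0) leave the risk set without lowering the curve, which is why the estimator handles incomplete follow-up correctly.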

Keywords: rectal cancer, MRI, lymph node, combined chemoradiotherapy

Procedia PDF Downloads 113
253 Investigations into the in situ Enterococcus faecalis Biofilm Removal Efficacies of Passive and Active Sodium Hypochlorite Irrigant Delivered into Lateral Canal of a Simulated Root Canal Model

Authors: Saifalarab A. Mohmmed, Morgana E. Vianna, Jonathan C. Knowles

Abstract:

The issue of apical periodontitis has received considerable critical attention. Bacteria integrate into communities, attach to surfaces, and consequently form biofilms. The biofilm structure provides bacteria with a range of protective mechanisms against antimicrobial agents and enhances pathogenicity (e.g., apical periodontitis). Sodium hypochlorite (NaOCl) has become the irrigant of choice for eliminating bacteria from the root canal system on the basis of its antimicrobial properties. The aim of the study was to investigate the effect of different agitation techniques on the efficacy of 2.5% NaOCl in eliminating biofilm from the surface of the lateral canal, using residual biofilm and biofilm removal rate as outcome measures. The effect of canal complexity (lateral canal) on the efficacy of the irrigation procedure was also assessed. Forty root canal models (n = 10 per group) were manufactured using 3D printing and resin materials. Each model consisted of two halves of an 18 mm long root canal with apical size 30 and taper 0.06, and a lateral canal of 3 mm length and 0.3 mm diameter located 3 mm from the apical terminus. E. faecalis biofilms were grown on the apical 3 mm and lateral canal of the models for 10 days in Brain Heart Infusion broth. Biofilms were stained using crystal violet for visualisation. The model halves were reassembled, attached to an apparatus, and tested under a fluorescence microscope. A syringe-and-needle irrigation protocol was performed using 9 mL of 2.5% NaOCl irrigant for 60 seconds. The irrigant was either left stagnant in the canal or activated for 30 seconds using manual (gutta-percha), sonic, or ultrasonic methods. Images were then captured every second using an external camera. The percentages of residual biofilm were measured using image analysis software. The data were analysed using generalised linear mixed models. 
The greatest removal was associated with the ultrasonic group (66.76%), followed by the sonic (45.49%), manual (43.97%), and passive irrigation (control) (38.67%) groups. No marked reduction in the efficacy of NaOCl to remove biofilm was found between the simple and complex anatomy models (p = 0.098). The removal efficacy of NaOCl on the biofilm was limited to the first 1 mm of the lateral canal. Agitation of NaOCl results in better penetration of the irrigant into the lateral canals, and ultrasonic agitation improved the removal of bacterial biofilm.

Keywords: 3D printing, biofilm, root canal irrigation, sodium hypochlorite

Procedia PDF Downloads 206
252 A Geometric Based Hybrid Approach for Facial Feature Localization

Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik

Abstract:

Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications from both security and non-security perspectives. It has emerged as a secure solution for the identification and verification of personal identity. Although other biometric methods such as fingerprint and iris scans are available, FRT is regarded as an efficient technology for its user-friendliness and contactless operation. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition. However, certain factors make facial feature localization a challenging task. On the human face, expressions arise from the subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations and usual shapes of facial landmarks, sometimes creating occlusions in facial feature areas that make face recognition a difficult problem. The paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses thresholding, sequential searching, and other image processing techniques to locate the landmark points on the face. In addition, a Graphical User Interface (GUI) based software tool is designed that can automatically detect 16 landmark points around the eyes, nose, and mouth, the regions most affected by changes in facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases. The system is also tested on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy. 
The method achieves detection rates of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database, and 93.05% on the DeitY-TU database. A comparative study of the proposed method against techniques developed by other researchers has also been carried out. In future work, the located features will be used to support emotion-oriented systems through action unit (AU) detection.
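As a generic illustration of the thresholding idea mentioned above (a sketch of the general principle, not the authors' algorithm), a dark facial feature such as a pupil can be located as the centroid of pixels falling below an intensity threshold:

```python
# Crude landmark localization by thresholding: pixels darker than a
# threshold are assumed to belong to the feature (e.g. a pupil), and the
# landmark point is taken as their centroid.

def locate_dark_feature(patch, threshold):
    """patch: 2D list of grayscale values 0-255. Returns (row, col) centroid."""
    rows = cols = count = 0
    for r, line in enumerate(patch):
        for c, value in enumerate(line):
            if value < threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None  # no pixels below threshold: feature not found
    return (rows / count, cols / count)

# Toy 5x5 patch with a dark 2x2 blob (values < 50) in the lower right.
patch = [
    [200, 200, 200, 200, 200],
    [200, 200, 200, 200, 200],
    [200, 200, 200,  30,  40],
    [200, 200, 200,  20,  10],
    [200, 200, 200, 200, 200],
]
print(locate_dark_feature(patch, 50))  # prints (2.5, 3.5)
```

Real pipelines refine such crude estimates with sequential searching over neighbouring regions, as the abstract describes.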

Keywords: biometrics, face recognition, facial landmarks, image processing

Procedia PDF Downloads 380
251 Unmanned Aerial System Development for the Remote Reflectance Sensing Using Above-Water Radiometers

Authors: Sunghun Jung, Wonkook Kim

Abstract:

Because satellites and aircraft are difficult to deploy on demand, conventional ocean color remote sensing has the disadvantage that it is hard to obtain images of desired places at desired times. This makes it difficult to capture anomalies such as red tide occurrences, which require immediate observation, and to understand phenomena such as the resuspension-precipitation process of suspended solids and the spread of low-salinity water originating in coastal areas. For the remote sensing reflectance of seawater, above-water radiometers (AWRs) have been used either by carrying portable AWRs on a ship or by installing them at fixed observation points such as the Ieodo Ocean Research Station and the Socheongcho base. However, measuring remote sensing reflectance in various seawater environments at various times is costly, and it is not possible to measure at the desired frequency in the desired sea area at the desired time. Stationary observation has the advantage of continuously acquired data, but the disadvantage that data from various sea areas cannot be obtained. An unmanned aerial system (UAS) built around a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) can instantly capture various marine phenomena occurring along the coast, since it can move, hover at a given location, and acquire data of the desired form at high resolution. To remotely estimate seawater constituents, it is necessary to install an ultra-spectral sensor; moreover, to calculate the light reflected from the sea surface while accounting for the sun's incident light, a total of three sensors need to be installed on the UAV. 
The remote sensing reflectance of seawater is the most basic optical property for remotely estimating color components in seawater; from it, the chlorophyll concentration, the suspended solids concentration, and the dissolved organic matter content can be estimated remotely. Estimating seawater properties from the remote sensing reflectance requires developing algorithms from accumulated measurements of seawater reflectivity under various seawater and atmospheric conditions. A UAS with three AWRs is developed for remote reflectance sensing of the sea surface. Throughout the paper, we explain the details of each UAS component, system operation scenarios, and simulation and experiment results. The UAS consists of a UAV, a solar tracker, a transmitter, a ground control station (GCS), three AWRs, and two gimbals.
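For context, a widely used above-water formulation (not necessarily this paper's exact processing chain) combines the three sensor readings, sea-viewing radiance, sky radiance, and downwelling irradiance, into the remote sensing reflectance; the surface reflectance factor rho below is an assumed typical value, and the input numbers are illustrative:

```python
# Standard above-water estimate of remote sensing reflectance (sr^-1):
#   Rrs = (Lt - rho * Lsky) / Ed
# Lt:   sea-viewing radiance
# Lsky: sky radiance (corrects for skylight reflected off the surface)
# Ed:   downwelling irradiance
# rho:  sea-surface reflectance factor (~0.028 for low wind, clear sky)

def remote_sensing_reflectance(lt, lsky, ed, rho=0.028):
    if ed <= 0:
        raise ValueError("downwelling irradiance must be positive")
    return (lt - rho * lsky) / ed

lt, lsky, ed = 1.2, 10.0, 150.0  # illustrative radiometer readings
rrs = remote_sensing_reflectance(lt, lsky, ed)
print(f"Rrs = {rrs:.5f} sr^-1")
```

This is why three co-mounted sensors are needed: each term of the formula comes from a different viewing geometry.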

Keywords: above-water radiometers (AWR), ground control station (GCS), unmanned aerial system (UAS), unmanned aerial vehicle (UAV)

Procedia PDF Downloads 141
250 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take genotype-by-environment interactions into account. One way of taking these interactions into account is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model expresses the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for their full conditional distributions. As plant growth models are nonlinear, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed, which allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax and metaprogramming capabilities with high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made in measuring leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. 
Real data were obtained via analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm that notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor do the observation times need to be the same across individuals. This makes it possible to discard image analysis data when they are not considered reliable enough, thereby providing low-biased leaf area data in large quantity. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of the model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
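The hybrid Gibbs-Metropolis scheme described above can be sketched on a toy nonlinear hierarchical model. Everything below (the exponential link standing in for a growth model, the fixed variances, the simulated plants) is an illustrative assumption rather than the paper's GreenLab setup, and the sketch is in Python although the authors used Julia:

```python
import math
import random

random.seed(0)

# Toy three-level hierarchy:
#   y_ij    ~ Normal(exp(theta_i), sigma^2)   (nonlinear "growth model")
#   theta_i ~ Normal(mu, tau^2)               (population level)
#   mu      ~ Normal(0, s0^2)                 (prior on population parameter)
# mu is conjugate and sampled exactly (Gibbs step); each theta_i requires a
# random-walk Metropolis step because of the nonlinear link.
sigma, tau, s0 = 0.5, 1.0, 10.0

def log_post_theta(theta, ys, mu):
    ll = sum(-0.5 * ((y - math.exp(theta)) / sigma) ** 2 for y in ys)
    return ll - 0.5 * ((theta - mu) / tau) ** 2

def sampler(data, n_iter=2000):
    n = len(data)
    thetas = [0.0] * n
    mu, mu_trace = 0.0, []
    for _ in range(n_iter):
        # Metropolis step for each individual parameter.
        for i, ys in enumerate(data):
            prop = thetas[i] + random.gauss(0.0, 0.3)
            if math.log(random.random()) < (log_post_theta(prop, ys, mu)
                                            - log_post_theta(thetas[i], ys, mu)):
                thetas[i] = prop
        # Exact Gibbs step for the population mean (normal-normal conjugacy).
        prec = n / tau ** 2 + 1.0 / s0 ** 2
        mean = (sum(thetas) / tau ** 2) / prec
        mu = random.gauss(mean, math.sqrt(1.0 / prec))
        mu_trace.append(mu)
    return mu_trace

# Simulate 5 "plants" with 20 observations each, true mu = 1.0.
data = []
for _ in range(5):
    theta = random.gauss(1.0, tau)
    data.append([random.gauss(math.exp(theta), sigma) for _ in range(20)])

trace = sampler(data)
est = sum(trace[500:]) / len(trace[500:])  # discard burn-in
print(f"posterior mean of mu ~ {est:.2f}")
```

The same alternation (exact draws where conjugacy allows, Metropolis elsewhere) scales to the full organ-scale model; only `log_post_theta` would change.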

Keywords: Bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 280
249 Detection, Isolation, and Raman Spectroscopic Characterization of Acute and Chronic Staphylococcus aureus Infection in an Endothelial Cell Culture Model

Authors: Astrid Tannert, Anuradha Ramoji, Christina Ebert, Frederike Gladigau, Lorena Tuchscherr, Jürgen Popp, Ute Neugebauer

Abstract:

Staphylococcus aureus is a facultative intracellular pathogen which, by entering host cells, may evade the immunological host response as well as antimicrobial treatment. In that way, S. aureus can cause persistent intracellular infections that are difficult to treat. Depending on the strain, S. aureus may persist at different intracellular locations, such as the phagolysosome. The first barrier that pathogens invading from the bloodstream have to cross is the endothelial cell layer lining the inner surface of blood and lymphatic vessels. Upon proceeding from an acute to a chronic infection, intracellular pathogens undergo certain biochemical and structural changes, including a deceleration of metabolic processes to adapt for long-term intracellular survival, and the development of a special phenotype designated the small colony variant. In this study, the endothelial cell line Ea.hy 926 was used as a model for acute and chronic S. aureus infection. To this end, Ea.hy 926 cells were cultured on QIAscout™ Microraft Arrays, a special graded cell culture substrate containing around 12,000 microrafts of 200 µm edge length. After attachment to the substrate, the endothelial cells were infected with GFP-expressing S. aureus for 3 weeks. The acute infection and the development of persistent bacteria were followed by confocal laser scanning microscopy, scanning the whole Microraft Array every second day for the presence of fluorescent intracellular bacteria and for detailed determination of their intracellular location. After three weeks of infection, representative microrafts containing infected cells, cells with protruded infections, and cells that never showed any infection were isolated and fixed for Raman micro-spectroscopic investigation. For comparison, microrafts with acute infection were also isolated. 
The acquired Raman spectra are correlated with the fluorescence microscopy images to provide insight into a) the molecular alterations in endothelial cells during acute and chronic infection compared to non-infected cells, and b) the metabolic and structural changes within the pathogen when it enters a mode of persistence within host cells. We thank Dr. Ruth Kläver from QIAGEN GmbH for her support regarding QIAscout technology. Financial support from the BMBF via the CSCC (FKZ 01EO1502) and from the DFG via the Jena Biophotonic and Imaging Laboratory (JBIL, FKZ PO 633/29-1, BA 1601/10-1) is gratefully acknowledged.

Keywords: correlative image analysis, intracellular infection, pathogen-host adaption, Raman micro-spectroscopy

Procedia PDF Downloads 154
248 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on the one hand, to develop and train machine learning algorithms to produce architectural information about small pavilions, and on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. Once the algorithms are trained, the procedure is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input at all. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. 
This research also challenges the idea of algorithmic design as being solely about efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the ability to creatively understand and manipulate historic (and synthetic) information will be a key feature of future innovative design processes. Finally, the main question we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to foster a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made, or synthetic.

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 113
247 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive, so a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity, and CPT were combined into an innovative, non-destructive, and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of soil (ρ) is measured using a set of four probes arranged in a Wenner array. A previous study found that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof of concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and an even mix of the two. 
Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that the ρ, w, and CPT measurements can be analyzed collectively to classify soils into “Good Earth” or “Soft Clay”. These parameters can also be integrated with the computer vision technique on site to complete the rapid soil classification in less than three minutes.
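The GLCM feature-extraction step described above can be sketched in a few lines; the toy patch, the single horizontal pixel offset, and the two Haralick-style parameters below are illustrative choices, not the study's actual configuration:

```python
# Minimal grey level co-occurrence matrix (GLCM) and two common textural
# parameters. The GLCM counts how often a pixel of grey level i is adjacent
# (at offset d) to a pixel of grey level j, normalized to probabilities.

def glcm(image, levels, d=(0, 1)):
    """Normalized co-occurrence matrix for offset d = (drow, dcol)."""
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    dr, dc = d
    for r in range(len(image)):
        for c in range(len(image[0])):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < len(image) and 0 <= c2 < len(image[0]):
                m[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    # High when adjacent pixels differ strongly (coarse texture).
    return sum(m[i][j] * (i - j) ** 2 for i in range(len(m)) for j in range(len(m)))

def homogeneity(m):
    # High when mass concentrates near the diagonal (smooth texture).
    return sum(m[i][j] / (1 + abs(i - j)) for i in range(len(m)) for j in range(len(m)))

# Toy 4-level "soil image" patch.
patch = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]
m = glcm(patch, levels=4)
print(f"contrast = {contrast(m):.3f}, homogeneity = {homogeneity(m):.3f}")
```

Vectors of such parameters, computed over several offsets and angles, are what an ANN classifier would consume as input features.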

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 179
246 Introducing Principles of Land Surveying by Assigning a Practical Project

Authors:

Abstract:

A practical project is used in an engineering surveying course to expose sophomore and junior civil engineering students to several important issues related to the use of basic principles of land surveying. The project, the design of a two-lane rural highway connecting two arbitrary points, requires students to draw the profile of the proposed highway along with the existing ground level. The areas of all cross sections are then computed to enable quantity computations between them. Lastly, the mass-haul diagram is drawn with all important parts and features shown on it for clarity. At the beginning, students faced challenges getting started on the project. They had to spend time and effort thinking of the best way to proceed and how the work would flow. It was even more challenging when they had to visualize images of cut, fill, and mixed cross sections in three dimensions before they could draw them to complete the necessary computations. These difficulties were somewhat overcome with the help of the instructor and thorough discussions among team members and/or between different teams. The method of assessment used in this study was a well-prepared end-of-semester questionnaire distributed to students after the completion of the project and the final exam. The survey contained a wide spectrum of questions, from students' learning experience when this course development was implemented to their satisfaction with the class instructions provided and the instructor's competency in presenting the material and helping with the project. It also covered the adequacy of the project as a sample of a real-life civil engineering application and whether implementing this idea added any excitement. At the end of the questionnaire, students had the chance to provide constructive comments and suggestions for future improvements of the land surveying course. Outcomes will be presented graphically and in tabular format. 
Graphs provide a visual explanation of the results, while tables summarize numerical values for each student along with descriptive statistics, such as the mean, standard deviation, and coefficient of variation, for each student and each question. In addition to gaining experience in teamwork, communications, and customer relations, students felt the benefit of being assigned such a project. They noticed the beauty of the practical side of civil engineering work and how theories are utilized in real-life engineering applications. Students even recommended that such a project be assigned every time this course is offered so that future students can have the same learning opportunity they had.
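The quantity computations between cross sections that the project calls for are commonly done with the average end area method, whose running totals give the mass-haul diagram ordinates; the station and area values below are illustrative, not taken from any student submission:

```python
# Average end area method for earthwork volumes between successive
# highway cross sections:
#   V = L * (A1 + A2) / 2
# Cut areas are taken as positive and fill areas as negative, so the
# cumulative sum of volumes gives the mass-haul diagram ordinates.

def earthwork_volumes(stations, areas):
    """stations: chainages (m); areas: net cut(+)/fill(-) areas (m^2)."""
    vols = []
    for i in range(len(stations) - 1):
        length = stations[i + 1] - stations[i]
        vols.append(length * (areas[i] + areas[i + 1]) / 2.0)
    return vols

def mass_haul_ordinates(volumes):
    total, ordinates = 0.0, [0.0]
    for v in volumes:
        total += v
        ordinates.append(total)
    return ordinates

stations = [0, 20, 40, 60, 80]         # chainage in metres
areas = [4.0, 2.5, -1.0, -3.5, -0.5]   # m^2, + cut / - fill
vols = earthwork_volumes(stations, areas)
print("volumes:", vols)
print("mass-haul ordinates:", mass_haul_ordinates(vols))
```

A falling mass-haul curve indicates a fill section and a rising one a cut section, which is exactly the visualization the students produce by hand.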

Keywords: land surveying, highway project, assessment, evaluation, descriptive statistics

Procedia PDF Downloads 192
245 Substitutional Inference in Poetry: Word Choice Substitutions Craft Multiple Meanings by Inference

Authors: J. Marie Hicks

Abstract:

The art of the poetic conjoins meaning and symbolism with imagery and rhythm. Perhaps the reader might read this opening sentence as 'The art of the poetic combines meaning and symbolism with imagery and rhythm,' which holds a similar message but is not quite the same. The reader understands that these factors are combined in this literary form, but to gain a sense of the conjoining of these factors, the reader is forced to consider that these aspects of poetry are not simply combined, but actually adjoin, abut, skirt, or touch in the poetic form. This alternative word choice is an example of substitutional inference. Poetry is, ostensibly, a literary form where language is used precisely or creatively to evoke specific images or emotions for the reader. Often, the reader can predict a coming rhyme or descriptive word choice in a poem, based on a previous rhyming pattern or earlier imagery in the poem. However, there are instances when the poet uses an unexpected word choice to create multiple meanings and connections. In these cases, the reader is presented with an unusual phrase or image, requiring that they think about what that image is meant to suggest, while their mind also supplies the word they expected, creating a second, overlying image or meaning. This is what is meant by the term 'substitutional inference.' This is different from simply using a double entendre, a word or phrase that has two meanings, often one complementary and the other disparaging, or one that is innocuous and the other suggestive. In substitutional inference, the poet utilizes an unanticipated word that is either visually or phonetically similar to the expected word, provoking the reader to work to understand the poetic phrase as written while unconsciously incorporating the meaning of the line as anticipated. 
In other words, by virtue of a word substitution, an inference of the logical word choice is imparted to the reader while they are seeking to rationalize the word that was actually used: a substitutional inference of meaning is created by the alternate word choice. For example, Louise Bogan, 4th Poet Laureate of the United States, used substitutional inference in the form of homonyms, malapropisms, and other unusual word choices in a number of her poems, lending them depth and greater complexity while actively engaging her readers intellectually with her poetry. Substitutional inference not only adds complexity to the potential interpretations of Bogan's poetry, as well as the poetry of others, but also provides a method for writers to infuse additional meanings into their work, thus expressing more information in a compact format. Additionally, this nuancing enriches the poetic experience for the reader, who can enjoy the poem superficially as written, or on a deeper level explore gradations of meaning.

Keywords: poetic inference, poetic word play, substitutional inference, word substitution

Procedia PDF Downloads 201
244 Analyzing Political Cartoons in Arabic-Language Media after Trump's Jerusalem Move: A Multimodal Discourse Perspective

Authors: Inas Hussein

Abstract:

Communication in the modern world is increasingly becoming multimodal due to globalization and the digital space we live in, which have remarkably affected how people communicate. Accordingly, Multimodal Discourse Analysis (MDA) is an emerging paradigm in discourse studies with the underlying assumption that other semiotic resources, such as images, colours, scientific symbolism, gestures, actions, music, and sound, combine with language in order to communicate meaning. One effective multimodal medium that combines both verbal and non-verbal elements to create meaning is the political cartoon. Furthermore, since political and social issues are mirrored in political cartoons, they are regarded as potential objects of discourse analysis: they not only reflect the thoughts of the public but also have the power to influence them. The aim of this paper is to analyze selected cartoons on the recognition of Jerusalem as Israel's capital by the American President, Donald Trump, adopting a multimodal approach. More specifically, the present research examines how the various semiotic tools and resources utilized by the cartoonists function in projecting the intended meaning. Ten political cartoons, among a surge of editorial cartoons highlighted by the Anti-Defamation League (ADL), an international Jewish non-governmental organization based in the United States, as publications in different Arabic-language newspapers in Egypt, Saudi Arabia, the UAE, Oman, Iran, and the UK, were purposively selected for semiotic analysis. These editorial cartoons, all published during 6th–18th December 2017, invariably suggest one theme: Jewish and Israeli domination of the United States. The data were analyzed using the framework of Visual Social Semiotics. In accordance with this methodological framework, the selected visual compositions were analyzed in terms of three aspects of meaning: representational, interactive, and compositional. 
In analyzing the selected cartoons, an interpretative approach is adopted. This approach prioritizes depth over breadth and enables insightful analyses of the chosen cartoons. The findings of the study reveal that semiotic resources are key elements of political cartoons due to the inherent political communication they convey. It is shown that adequate interpretation of the three aspects of meaning is a prerequisite for understanding the intended meaning of political cartoons. It is recommended that further research be conducted to provide more insightful analyses of political cartoons from a multimodal perspective.

Keywords: Multimodal Discourse Analysis (MDA), multimodal text, political cartoons, visual modality

Procedia PDF Downloads 203
243 Media Framing of Media Regulators in Ghana: A Content Analysis of Selected News Articles on Four Ghanaian Online Newspapers

Authors: Elizabeth Owusu Asiamah

Abstract:

The Ghanaian news media play a crucial role in shaping people's thinking patterns through the nature of the coverage they give to issues, events and personalities. Since the media do not work in a vacuum but within a broader spectrum, which is society, whatever stories they cover and the nature of frames used to narrate such stories go a long way to influence how citizens perceive issues in the country. Consequently, the National Media Commission and the National Communications Authority were instituted to monitor and direct the activities of the media to ensure professionalism that prioritizes society's interest over commercial interest. As the two media regulators go about their routine task of monitoring the operations of the media, they receive coverage from various media outlets (newspapers, radio, television and online). Some people believe that the kind of approach the regulators adopt depends on the nature of coverage the media give them in their reportage. This situation demands an investigation into how the media, regulated by these regulatory bodies, are representing the regulators in the public's eye and the issues arising from such coverage. Extant literature indicates that studies on media framing have centered on politics, environmental issues, public health issues, conflict and wars, etc. However, there appear to be no studies on media framing of media regulators, especially in the Ghanaian context. Since online newspapers have assumed more mainstream positions in the Ghanaian media and have attracted more audiences in recent times, this study investigates the nature of coverage given to media regulators by four purposively sampled online newspapers in Ghana. 96 news articles are extracted from the websites of the Daily Graphic, Ghanaian Times, Daily Guide and Chronicle newspapers within a five-year period to identify the prominence given to stories about the two media regulators and the frames used to narrate stories about them. 
Data collected are thematically analyzed through the lens of agenda-setting and media-framing theories. The findings of the study revealed that the two regulators were not given much coverage by way of frequency; however, much prominence was given to them in terms of enhancements such as images. The study further disclosed that most of the news articles framed the regulators as weak and incompetent, which is likely to affect how the public also views the regulators. The study concludes that since frames around the supportive nature of the regulators to issues of the media were not emphasized by the online newspapers, the public will not perceive the regulators as playing their roles effectively. Thus, there is a need for more positive frames in stories about the National Media Commission and the National Communications Authority to promote a cordial relationship between the two institutions and a good public image.

Keywords: agenda setting, media framing, media regulators, online newspapers

Procedia PDF Downloads 35
242 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), at Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models and a verification dataset of the remaining 30% (12 landslide incidences). Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, the orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with landslide incidences that were not used during the model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUC values for the success rates are 0.7055, 0.7221, and 0.7368, while those for the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively.
Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences falling in the high and very high landslide susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.
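The AUC-based validation described above can be sketched with the rank (Mann-Whitney) formulation of the area under the ROC curve: the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell. The scores and labels below are illustrative only, not the study's data.

```python
# Minimal ROC AUC via the rank (Mann-Whitney) formulation:
# AUC = P(random positive cell outranks random negative cell).
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count as half a win
    return wins / (len(pos) * len(neg))

# Made-up susceptibility scores for landslide (1) and non-landslide (0) cells
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 1, 0]
print(roc_auc(scores, labels))  # → 0.75
```

An AUC of 0.5 corresponds to a random map and 1.0 to a perfect one, which is why the reported success-rate values around 0.70-0.74 are read as satisfactory.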

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 104
241 Discrete Element Simulations of Composite Ceramic Powders

Authors: Julia Cristina Bonaldo, Christophe L. Martin, Severine Romero Baivier, Stephane Mazerat

Abstract:

Alumina refractories are commonly used in steel and foundry industries. These refractories are prepared through a powder metallurgy route. They are a mixture of hard alumina particles and graphite platelets embedded into a soft carbonic matrix (binder). The powder can be cold pressed isostatically or uniaxially, depending on the application. The compact is then fired to obtain the final product. The quality of the product is governed by the microstructure of the composite and by the process parameters. The compaction behavior and the mechanical properties of the fired product depend greatly on the amount of each phase, on their morphology and on the initial microstructure. In order to better understand the link between these parameters and the macroscopic behavior, we use the Discrete Element Method (DEM) to simulate the compaction process and the fracture behavior of the fired composite. These simulations are coupled with well-designed experiments. Four mixes with various amounts of Al₂O₃ and binder were tested both experimentally and numerically. In DEM, each particle is modelled and the interactions between particles are taken into account through appropriate contact or bonding laws. Here, we model a bimodal mixture of large Al₂O₃ and small Al₂O₃ covered with a soft binder. This composite is itself mixed with graphite platelets. X-ray tomography images are used to analyze the morphologies of the different components. Large Al₂O₃ particles and graphite platelets are modelled in DEM as sets of particles bonded together. The binder is modelled as a soft shell that covers both large and small Al₂O₃ particles. When two particles with binder indent each other, they first interact through this soft shell. Once a critical indentation is reached (towards the end of compaction), hard Al₂O₃ - Al₂O₃ contacts appear. In accordance with experimental data, DEM simulations show that the amount of Al₂O₃ and the amount of binder play a major role for the compaction behavior. 
The graphite platelets bend and break during the compaction, also contributing to the macroscopic stress. The firing step is modelled in DEM by ascribing bonds to particles that are in contact with each other after compaction. The fracture behavior of the compacted mixture is also simulated and compared with experimental data. Both diametrical tests (Brazilian tests) and triaxial tests are carried out. Again, the link between the amount of Al₂O₃ particles and the fracture behavior is investigated. The methodology described here can be generalized to other particulate materials that are used in the ceramic industry.
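The two-stage interaction described above (a compliant binder shell first, then a hard Al₂O₃-Al₂O₃ contact once a critical indentation is reached) can be sketched as a piecewise contact law. The stiffness values and critical indentation below are purely illustrative assumptions, not the study's calibrated DEM parameters.

```python
# Hypothetical piecewise contact law for two binder-coated Al2O3 particles:
# a soft "binder shell" response up to a critical indentation delta_c, then
# a much stiffer direct alumina-alumina contact.  k_soft, k_hard (N/m) and
# delta_c (m) are illustrative assumptions.
def contact_force(delta, k_soft=1.0e4, k_hard=1.0e7, delta_c=1.0e-5):
    if delta <= 0:
        return 0.0                           # particles not in contact
    if delta <= delta_c:
        return k_soft * delta                # binder shell only
    # beyond delta_c the hard Al2O3-Al2O3 contact takes over;
    # the force is continuous at the transition
    return k_soft * delta_c + k_hard * (delta - delta_c)
```

The law is continuous at delta_c, so the switch from binder-dominated to particle-dominated response introduces no force jump between DEM time steps.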

Keywords: cold compaction, composites, discrete element method, refractory materials, x-ray tomography

Procedia PDF Downloads 114
240 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, there is a set of hyperparameters available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e., AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to specify the effectiveness of the selected parameters using two implementing approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image size under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly related to datasets. For the batch size evaluation, it has shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size.
For example, selecting a batch size of 32 on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 lowers the accuracy rate to 86.5% at the 11th epoch, and to 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, the filter size 20 produces 70.4286%. The final experiment, on image size, shows that accuracy improves with larger images, although this performance gain is computationally expensive. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
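The interplay between convolutional filter size and input image size examined above can be made concrete with the standard convolution output-size formula. The sketch below sweeps the same image sizes (64-224) and kernel sizes (1-30) as the experiments; the formula's defaults (stride 1, no padding) are generic assumptions, not details taken from the study.

```python
# Spatial output size of a convolutional layer:
# out = floor((W - K + 2P) / S) + 1, for image width W, kernel K,
# padding P, stride S.
def conv_output_size(image, kernel, stride=1, padding=0):
    return (image - kernel + 2 * padding) // stride + 1

# Sweep the image sizes and kernel sizes used in the experiments:
# large kernels shrink small inputs far more aggressively than large ones.
for image in (64, 96, 128, 180, 224):
    row = {k: conv_output_size(image, k) for k in (1, 3, 5, 7, 10, 15, 20, 25, 30)}
    print(image, row)
```

For instance, a 30-pixel kernel leaves only a 35×35 map from a 64×64 image but a 195×195 map from a 224×224 image, which is one reason the useful filter size depends on the input resolution.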

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 145
239 Quality Care from the Perception of the Patient in Ambulatory Cancer Services: A Qualitative Study

Authors: Herlin Vallejo, Jhon Osorio

Abstract:

Quality is a concept that has gained importance in different scenarios over time, especially in the area of health. The nursing staff is one of the actors that contributes most to the care process and the satisfaction of users in the evaluation of quality. However, until now, there are few tools to measure the quality of care in specialized performance scenarios. Patients receiving ambulatory cancer treatments can face various problems, which can increase their level of distress, so improving the quality of outpatient care for cancer patients should be a priority for oncology nursing. The experience of the patient in relation to the care in these services has been little investigated. The purpose of this study was to understand the perception that patients have about quality care in outpatient chemotherapy services. A qualitative, exploratory, descriptive study was carried out in 9 patients older than 18 years, diagnosed with cancer, who were treated at the Institute of Cancerology in outpatient chemotherapy rooms, with a minimum of three months of treatment with curative intent, and who had given their informed consent. The number of participants was determined by theoretical saturation, and they were selected by convenience sampling. Unstructured interviews were conducted, recorded and transcribed. The analysis of the information was done using the technique of content analysis. Three categories emerged that reflect the perception that patients have regarding quality care: patient-centered care, care with love and effects of care. Patients highlighted situations showing that care is centered on them, incorporating institutional and infrastructure elements of patient-centered care, the qualities of care and, in contrast, what inappropriate care means to them.
Care with love as a perception of quality care means for patients that the nursing staff must have certain qualities; it encompasses perceiving care with love as a family affair, the limits of care with love, and the nurse-patient relationship. Quality care has effects on both the patient and the nursing staff. One of the most relevant effects was the confidence that the patient develops towards the nurse, as well as the transformation of unreal images about cancer treatment with chemotherapy. On the other hand, care with quality generates a commitment to self-care and is a facilitator in the transit of oncological disease and chemotherapeutic treatment, but from the perception of a healing transit. It is concluded that care with quality, from the perception of patients, is a construction that goes beyond structural issues and is related to an institutional culture of quality that is reflected in the attitude of the nursing staff and in the acts of care that have positive effects on the experience of chemotherapy and disease. These results contribute to a better understanding of how quality care is built from the perception of patients and open a range of possibilities for the future development of an individualized instrument to evaluate the quality of care from the perception of patients with cancer.

Keywords: nursing care, oncology service hospital, quality management, qualitative studies

Procedia PDF Downloads 113
238 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging

Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), finding 35 benign and 12 malignant. All MR images were acquired at 1.5T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features at each of 5 subsequent time points), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation subsets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: The Naive Bayes algorithm, working on 79 features selected by the TWIST system, proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78% and a global accuracy of 87% (average values of the two training-testing procedures, ab-ba). The results showed that in the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules that an expert radiologist could not identify.
Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors in cases where the radiologist is not able to identify the kind of lesion, reducing the necessity for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
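As a rough illustration of the classification step, the following is a minimal Gaussian Naive Bayes sketch on synthetic two-feature data. The feature selection, the TWIST system, and the actual radiomic features are all omitted, and none of the numbers correspond to the study's data.

```python
# Minimal Gaussian Naive Bayes, standing in for the Naive Bayes classifier
# in the pipeline above; the data below are synthetic, not radiomic features.
import math
from collections import defaultdict

class GaussianNB:
    def fit(self, X, y):
        by_class = defaultdict(list)
        for row, label in zip(X, y):
            by_class[label].append(row)
        n = len(X)
        self.prior, self.stats = {}, {}
        for label, rows in by_class.items():
            self.prior[label] = len(rows) / n
            feats = []
            for column in zip(*rows):          # per-feature mean and variance
                mu = sum(column) / len(column)
                var = max(sum((v - mu) ** 2 for v in column) / len(column), 1e-9)
                feats.append((mu, var))
            self.stats[label] = feats
        return self

    def predict(self, row):
        best, best_lp = None, float("-inf")
        for label, feats in self.stats.items():
            lp = math.log(self.prior[label])    # log prior
            for v, (mu, var) in zip(row, feats):
                # log Gaussian likelihood of this feature, given the class
                lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Toy data: two well-separated clusters ("benign" = 0, "malignant" = 1)
X = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [3.0, 3.0], [3.2, 2.9], [2.8, 3.1]]
y = [0, 0, 0, 1, 1, 1]
model = GaussianNB().fit(X, y)
print(model.predict([1.1, 0.95]), model.predict([2.9, 3.05]))  # → 0 1
```

The "naive" assumption is the per-feature independence in the likelihood product, which is what makes the model fast to train even on many radiomic features.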

Keywords: breast, machine learning, MRI, radiomics

Procedia PDF Downloads 246
237 Radiographic Evaluation of Odontogenic Keratocyst: A 14 Years Retrospective Study

Authors: Nor Hidayah Reduwan, Jira Chindasombatjaroen, Suchaya Pornprasersuk-Damrongsri, Sopee Pomsawat

Abstract:

INTRODUCTION: The odontogenic keratocyst (OKC) remains a controversial pathologic entity under the scrutiny of many researchers and maxillofacial surgeons alike. The high recurrence rate and relatively aggressive nature of this lesion demand a meticulous analysis of the radiographic characteristics of OKC, leading to the formulation of an accurate diagnosis. OBJECTIVE: This study aims to determine the radiographic characteristics of OKC using conventional radiographs and cone beam computed tomography (CBCT) images. MATERIALS AND METHODS: Patients histopathologically diagnosed with OKC from 2003 to 2016 by the Oral and Maxillofacial Pathology Department were retrospectively reviewed. Radiographs of these cases were retrieved from the archives of the Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University. Assessment of the location, shape, border, cortication, locularity, relationship of the lesion to any embedded tooth, displacement of adjacent teeth, root resorption and bony expansion of the lesion was conducted. RESULTS: Radiographs of 91 patients (44 males, 47 females) with a mean age of 31 years (range, 10 to 84 years) were analyzed. Among all patients, 5 were syndromic cases. Hence, a total of 103 OKCs were studied. The most common location was the ramus of the mandible (32%), followed by the posterior maxilla (29%). Most cases presented as a well-defined unilocular radiolucency with a smooth and corticated border. The lesion was associated with an embedded tooth in 48 cases (47%). Eighty-five percent of the embedded teeth were impacted third molars. Thirty-seven percent of the embedded teeth were entirely encapsulated in the lesion. The lesion was attached to the embedded tooth at the cementoenamel junction (CEJ) in 40% of cases and extended to part of the root in 23%. Teeth displacement and root resorption were found in 29% and 6% of cases, respectively. Bony expansion in the bucco-lingual dimension was seen in 63% of cases.
CONCLUSION: OKCs were predominant in the posterior region of the mandible, with radiographic features of a well-defined, unilocular radiolucency with a smooth and corticated margin. The lesions might relate to an embedded tooth by surrounding the entire tooth, attaching at the CEJ level or extending to part of the root. Bony expansion could be found, but teeth displacement and root resorption were not common. These features might help in giving the differential diagnosis.

Keywords: cone beam computed tomography, imaging dentistry, odontogenic keratocyst, radiographic features

Procedia PDF Downloads 107
236 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once a document has been digitized through the scanning system and binarized, skew correction is required before further image analysis. Research efforts in this area have produced algorithms to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria; the most important are the accuracy of skew angle detection, the range of skew angles detectable, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, its level of accuracy depends largely on how fine the angle step size is; increasing the accuracy consequently consumes more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough Transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts with an angle estimate accurate to zero decimal places using the standard Hough Transform, achieving minimal running time and memory use but limited accuracy.
To increase accuracy, suppose the angle estimated using the basic Hough algorithm is x degrees; the basic algorithm is then run again over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. Our skew estimation and correction procedure for text images is implemented in MATLAB. The memory space estimates and processing times are also tabulated, assuming skew angles between 0° and 45°. The simulation results in MATLAB show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
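A minimal sketch of this coarse-to-fine search, assuming a standard Hough voting scheme in which points are projected perpendicular to each candidate baseline angle; the synthetic "text baseline" and all parameter values below are illustrative, not the paper's MATLAB implementation.

```python
import math
from collections import Counter

def hough_peak(points, theta_deg, rho_res=1.0):
    # Project every point perpendicular to a baseline tilted by theta;
    # points lying along one text line then share a rho bin, so the height
    # of the tallest accumulator bin measures how well theta matches the skew.
    t = math.radians(theta_deg)
    votes = Counter(round((y * math.cos(t) - x * math.sin(t)) / rho_res)
                    for x, y in points)
    return max(votes.values())

def best_angle(points, candidates):
    return max(candidates, key=lambda a: hough_peak(points, a))

def estimate_skew(points):
    # Pass 1: whole 0-45 degree range at 1-degree steps (integer accuracy)
    coarse = best_angle(points, range(0, 46))
    # Pass 2: refine around the coarse estimate at 0.1-degree steps
    return best_angle(points, [coarse - 1 + 0.1 * i for i in range(21)])

# Synthetic "text baseline": 1000 points along a line skewed by 12.3 degrees
alpha = math.radians(12.3)
baseline = [(t * math.cos(alpha), t * math.sin(alpha)) for t in range(1000)]
print(estimate_skew(baseline))  # ≈ 12.3
```

The coarse pass votes over 46 angles and the fine pass over 21, instead of the 451 candidates a flat 0.1-degree sweep would need, which is the space/time saving the iteration is after.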

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 126
235 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically (as, for example, Compton scattering would) to totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: The earth travels at about 368 km/s (600 km/s) relative to the CMB. In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the radiation constant (α = 4σ/c, with σ the Stefan-Boltzmann constant). The observed CMB dipole, therefore, implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny.
But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
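The drag estimate sketched above can be reproduced numerically. The script below uses the abstract's dipole amplitude (0.35 mK) and mean temperature (2.725 K) together with standard values for the radiation constant and the Earth's radius, which are assumptions not stated in the abstract.

```python
# Order-of-magnitude estimate of the CMB dipole drag on the Earth: the
# pressure difference between the hot (T + dT) and cold (T - dT) sides
# times the Earth's cross-sectional area.
import math

a_rad = 7.5657e-16      # radiation constant, J m^-3 K^-4 (a = 4*sigma/c)
T = 2.725               # mean CMB temperature, K (from the abstract)
dT = 0.35e-3            # dipole amplitude, K (from the abstract)
R_earth = 6.371e6       # Earth radius, m (standard value)

def photon_gas_pressure(temp):
    # P = E/3 = a*T^4/3 for a thermalized photon gas
    return a_rad * temp ** 4 / 3.0

dP = photon_gas_pressure(T + dT) - photon_gas_pressure(T - dT)
drag = dP * math.pi * R_earth ** 2   # net force over the cross-section, N

print(f"pressure difference: {dP:.3e} Pa")
print(f"drag on the Earth:   {drag:.3e} N")  # on the order of 1e-3 N, i.e. tiny
```

The resulting force of roughly a millinewton on the entire planet illustrates the abstract's point that the drag on macroscopic bodies is negligible, while the model attributes a cumulative, significant effect to photons.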

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 102
234 Optical Assessment of Marginal Sealing Performance around Restorations Using Swept-Source Optical Coherence Tomography

Authors: Rima Zakzouk, Yasushi Shimada, Yasunori Sumi, Junji Tagami

Abstract:

Background and purpose: The resin composite has become the main material for the restoration of caries in recent years due to its aesthetic characteristics, especially with the development of adhesive techniques. The quality of adhesion to tooth structures depends on an exchange process between inorganic tooth material and synthetic resin, and on micromechanical retention promoted by resin infiltration into partially demineralized dentin. Optical coherence tomography (OCT) is a noninvasive diagnostic method for obtaining cross-sectional images that produces high-resolution images of biological tissue at the micron scale. The aim of this study was to evaluate gap formation at the adhesive/tooth interface of a two-step self-etch adhesive applied with or without phosphoric acid pre-etching in different regions of teeth, using SS-OCT. Materials and methods: Round tapered cavities (2×2 mm) were prepared in the cervical part of bovine incisor teeth and divided into two groups (n=10): in the SE group, a self-etch adhesive (Clearfil SE Bond) was applied; in the PA group, the cavities were treated with acid etching before applying the self-etch adhesive. Subsequently, both groups were restored with Estelite Flow Quick Flowable Composite Resin and observed under OCT. Following 5000 thermal cycles, the same section was obtained again for each cavity using OCT at a 1310-nm wavelength. Scanning was repeated after two months to monitor gap progression. The gap length was then measured using image analysis software, and statistical analysis between the two groups was performed using SPSS software. After that, the cavities were sectioned and observed under a Confocal Laser Scanning Microscope (CLSM) to confirm the OCT results. Results: Gaps formed at the bottom of the cavity were longer than those formed at the margin and the dento-enamel junction (DEJ) in both groups. On the other hand, pre-etching treatment damaged the DEJ regions, creating longer gaps.
After two months, the results showed significant progression of the gap length at the bottom regions in both groups. In conclusion, phosphoric acid etching treatment did not reduce the gap length in most regions of the cavity. Significance: The bottom region of the cavity was more prone to gap formation than the margin and DEJ regions, and the DEJ was damaged by the phosphoric acid treatment.

Keywords: optical coherence tomography, self-etch adhesives, bottom, dento-enamel junction

Procedia PDF Downloads 193
233 Overcoming Mistrusted Masculinity: Analyzing Muslim Men and Their Aspirations for Fatherhood in Denmark

Authors: Anne Hovgaard Jorgensen

Abstract:

This study investigates how Muslim fathers in Denmark struggle to overcome notions of mistrust from teachers and educators. Starting from school-home cooperation (parent conferences, school-home communication, etc.), the study finds that many Muslim fathers do not feel acknowledged as a resource in the upbringing of their children. To explain these experiences further, the study suggests the notion of 'mistrusted masculinity' to grasp the controlling image these fathers meet in various schools and childcare institutions in the Danish welfare state. The paper is based on 9 months of fieldwork in a Danish school, a social housing area and various 'father groups' in Denmark. Additionally, 50 interviews were conducted with fathers, children, mothers, schoolteachers, and educators. Using Connell's concepts of 'hegemonic' and 'marginalized' masculinity as steppingstones, the paper argues that these concepts might entail too static and dualistic a picture of gender. By applying the concepts of 'emergent masculinity' and 'emergent fatherhood', the paper brings along a long-needed discussion of how Muslim men in Denmark are struggling to overcome and change the controlling images of them as patriarchal and/or ignorant fathers regarding the upbringing of their children. As such, the paper shows how Muslim fathers are taking action to change this controlling image, e.g. through various 'father groups'. The paper is inspired by the phenomenological notion of 'experience', and in light of this notion, the paper tells the fathers' stories about the upbringing of their children and their aspirations for fatherhood. These stories shed light on how these fathers take care of their children in everyday life. The study also shows that the controlling image of these fathers has affected how some Muslim fathers actually practice fatherhood. The study shows that fear of family interventions from teachers or social workers, e.g.,
has left some Muslim fathers in limbo, afraid of scolding their children and confused about what good parenting in Denmark is. This seems to have led to a more laissez-faire upbringing than these fathers actually wanted. This study is important since anthropologists have generally underexposed the notion of fatherhood and how fathers engage in the upbringing of their children. Moreover, the vast majority of qualitative studies of fatherhood have been on white middle-class fathers living in nuclear families. In addition, this study is crucial at this very moment due to the major refugee crisis in Denmark and in the Western world in general, a crisis which has resulted in a vast number of scare campaigns against Islam from different nationalistic political parties, reinforcing the negative controlling image of Muslim fathers.

Keywords: fatherhood, Muslim fathers, mistrust, education

Procedia PDF Downloads 163
232 Automatic Identification of Aquatic Insects Based on Deep Learning and Computer Vision

Authors: Predrag Simović, Katarina Stojanović, Milena Radenković, Dimitrija Savić Zdravković, Aleksandar Milosavljević, Bratislav Predić, Milenka Božanić, Ana Petrović, Djuradj Milošević

Abstract:

Mayflies (Ephemeroptera), stoneflies (Plecoptera), and caddisflies (Trichoptera), collectively referred to as EPT, are key participants in most freshwater habitats and often exhibit high diversity. Their presence and relative abundance are widely used in freshwater ecological and biomonitoring studies. Current methods for freshwater ecosystem biomonitoring follow a traditional approach of taxa monitoring based on morphological characters, which is time-consuming and often generates data sets with low taxonomic resolution and unverifiable identification precision. To assist in solving these identification problems and to contribute to knowledge of the distribution of many species, alternative approaches to macroinvertebrate sample identification are needed. Here, we establish an automatic machine-based identification approach for EPT taxa (Insecta) using deep Convolutional Neural Networks (CNNs) and computer vision to increase the efficiency and taxonomic resolution of biomonitoring. A total of 5,550 specimens were collected from freshwater ecosystems of Serbia, and the deep model was built upon 90 EPT taxa. The protocol for obtaining images included the following stages: taxonomic identification by human experts with DNA barcoding validation, mounting the larvae, and photographing the dorsal side using a stereomicroscope and camera (16,650 individuals). The most informative image regions (the dorsal segments of individuals) for the decision-making process in the deep learning model were visualized using Gradient-weighted Class Activation Mapping (Grad-CAM). The trained CNN classified the 90 EPT taxa into their respective taxonomic categories automatically with 98.7% accuracy. Our approach offers a straightforward and efficient solution for routine monitoring programs focusing on key biotic descriptors, such as the EPT taxa.
In addition, this application provides a streamlined solution that not only saves time and reduces equipment and expert requirements but also significantly enhances reliability and information content. Identification of EPT larvae is difficult because morphological features vary even within a single genus and several species closely resemble one another; future research should therefore focus on increasing the number of species in the model.
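The Grad-CAM visualization used to locate the most informative dorsal segments reduces to a short computation once the activations and gradients of the last convolutional layer are available. The sketch below shows that computation in plain NumPy; the array shapes and toy data are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap.

    feature_maps: (K, H, W) activations of the last convolutional layer.
    gradients:    (K, H, W) gradients of the class score w.r.t. those activations.
    """
    # Channel weights: global-average-pool the gradients over the spatial axes
    weights = gradients.mean(axis=(1, 2))                              # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalise to [0, 1] for overlaying on the specimen image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 channels over an 8x8 spatial map (random stand-in data)
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
heatmap = grad_cam(fmaps, grads)   # (8, 8) map highlighting informative regions
```

In the study's setting, the heatmap would be upsampled to the photograph's resolution to show which dorsal segments drove the classification.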

Keywords: convolutional neural networks, DNA barcoding, EPT taxa, biomonitoring

Procedia PDF Downloads 50
231 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme based on computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive; a rapid classification method is therefore needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this non-destructive and near-instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model, coupled with the apparent electrical resistivity of soil (ρ), can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, the following three measurements were targeted for addition to the computer vision scheme: ρ, measured using a set of four probes arranged in a Wenner array; soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes.
Laboratory proof-of-concept was conducted through a series of seven tests on three types of soil: “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible complements to the computer vision system.
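The GLCM textural parameters at the heart of the image-analysis step can be sketched as follows. The single-pixel horizontal offset, the four grey levels, and the toy "soil image" below are illustrative choices, not the study's actual configuration.

```python
import numpy as np

def glcm_features(img: np.ndarray, levels: int = 8):
    """Grey Level Co-occurrence Matrix for a one-pixel horizontal offset,
    plus three common texture descriptors derived from it."""
    glcm = np.zeros((levels, levels), dtype=float)
    # Count co-occurrences of grey levels in horizontally adjacent pixels
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()                         # normalise to joint probabilities
    r, c = np.indices(glcm.shape)
    contrast = ((r - c) ** 2 * glcm).sum()     # local intensity variation
    homogeneity = (glcm / (1.0 + np.abs(r - c))).sum()
    energy = (glcm ** 2).sum()                 # textural uniformity
    return contrast, homogeneity, energy

# Toy 4-level "soil image": four uniform quadrants
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
features = glcm_features(img, levels=4)   # (contrast, homogeneity, energy)
```

In the scheme described above, feature vectors like this one, together with ρ, w, and CPT readings, would feed the ANN that makes the "Good Earth" vs. "Soft Clay" decision.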

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 210
230 Holographic Art as an Approach to Enhance Visual Communication in Egyptian Community: Experimental Study

Authors: Diaa Ahmed Mohamed Ahmedien

Abstract:

Many of the most important trends in interactive art have emerged from significant advances in the modern sciences, and holographic art is no exception: it is among the major contemporary interactive trends in the visual arts. The holographic technique originated in modern physics in the late 1940s, when Dennis Gabor developed it to improve the quality of electron microscope images; it later reached the art world through Margaret Benyon’s exhibitions, and over more than 70 years it has been refined technically and visually for artistic applications. As a modest extension of these efforts, this research enrolled a sample of laypeople from the Egyptian community in a holographic recording program to record objects or antiques they value, and thereby examined their ability to interact with modern techniques in the visual communication arts. The research addressed three main questions: can analog holographic techniques unleash new theoretical and practical knowledge in interactive arts for the public in the Egyptian community? To what extent can holographic art become familiar to the public and enable them to produce interactive artistic samples? Can an interactive holographic program for laypeople enhance their understanding of visual communication and their awareness of interactive art trends?
The first part of the research relied on experimental methods, conducted in the laser lab at Cairo University using a 532 nm Nd:YAG laser and a holographic optical layout; selected Egyptian participants were asked to record an object they value after learning the recording methods. The second part consisted of discussion panels on the results and on how participants felt about their holographic artistic products, supported by surveys, questionnaires, note-taking, and critiques of the holographic artworks. The experiments and final discussions indicate that most participants underwent a paradigm shift in their visual and conceptual experience toward greater interaction with contemporary visual art trends. This underscores the mature relationship between art, science, and technology, and its role in spreading interactive art through our communities, particularly among those who have never been enrolled in practical art programs before.

Keywords: Egyptian community, holographic art, laser art, visual art

Procedia PDF Downloads 453
229 The Functions of Spatial Structure in Supporting Socialization in Urban Parks

Authors: Navid Nasrolah Mazandarani, Faezeh Mohammadi Tahrodi, Jr., Norshida Ujang, Richard Jan Pech

Abstract:

Human evolution has made us dependent on social and natural settings, but the design of our modern cities often ignores this fact. High-rise buildings dominate most metropolitan city centers; as a result, urban parks are very limited and in many cases are not responsive to our social needs in these urban ‘jungles’. This paper emphasizes the functions of urban morphology in supporting socialization in Lake Garden, one of the main urban parks in Kuala Lumpur, Malaysia. It discusses two relevant theories: first, the concept of users’ experience coined by Kevin Lynch (1960), which states that way-finding is related to the process of forming mental maps of environmental surroundings; second, the concept of social activity coined by Jan Gehl (1987), which holds that urban public spaces become more attractive when they provide welcoming places in which people can walk around and spend time. Until recently, research on socio-spatial behavior mainly focused on social ties, place attachment, and human well-being, with less attention to the spatial dimension of social behavior. This paper examines socio-spatial behavior within the spatial structure of the urban park by exploring the relationship between way-finding and social activity. The urban structures defined by paths and nodes were analyzed as the fundamental topological structure of space to understand their effects on patterns of social engagement. The study uses a photo-questionnaire survey to inspect the spatial dimension in relation to social activities within paths and nodes. To understand the legibility of the park, spatial cognition was evaluated using sketch maps produced by 30 participants who visited the park. The results of the sketch mapping indicated that the spatial image has a strong interrelation with socio-spatial behavior. Moreover, an integrated spatial structure of the park generated integrated use and social activity.
People recognized and remembered the spaces where they engaged in social activities, and they experienced the park more thoroughly when they could find their way continuously through an integrated park structure. The perceptual and social benefits of planning and design therefore occurred simultaneously. The findings can assist urban planners and designers in redeveloping urban parks with a social quality of design that contributes to clear mental images of these places.

Keywords: spatial structure, social activities, sketch map, urban park, way-finding

Procedia PDF Downloads 271
228 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps can now provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on the CPU, issues GL commands to the GPU to render the images displayed by the currently running Web app, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more heavily demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already degraded by the delayed execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme specialized for the Web app runtime environment. This approach raises two technical challenges: identification of the bottleneck resource, and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to determine which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU; thus, it is on the critical path that determines the quality of the user experience, and it is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%; consequently, the proposed scheme decreases the performance level of the GPU by one step. On the contrary, when the utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the CPU. Even for the processing unit that is not on the critical path, an excessive performance drop can adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
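The policy described above can be sketched as a small governor loop, assuming the stated 90%/60% utilization bounds and the paper's principle of stepping down the resource that is not on the critical path. The smoothing factor, the number of frequency levels, and the mid-band initialization are illustrative assumptions, not values from the paper.

```python
class BottleneckGovernor:
    """Sketch of a bottleneck-aware policy: a weighted (exponential) moving
    average of Window-Manager CPU utilization picks the bottleneck, and the
    *other* resource is stepped down one frequency level at a time."""

    CPU_BOUND = 0.90   # WM utilization above this: CPU decides user experience
    GPU_BOUND = 0.60   # WM utilization below this: GPU is the bottleneck

    def __init__(self, levels: int = 5, alpha: float = 0.3):
        self.cpu_level = self.gpu_level = 0   # index 0 = highest frequency
        self.max_level = levels - 1
        self.alpha = alpha
        self.wm_util = 0.75                   # start mid-band: no cold-start steps

    def tick(self, sample: float) -> None:
        # Weighted moving average damps fluctuation in the WM utilization signal
        self.wm_util = self.alpha * sample + (1 - self.alpha) * self.wm_util
        if self.wm_util > self.CPU_BOUND:
            # CPU-bound: slowing the GPU should not hurt the user experience
            self.gpu_level = min(self.gpu_level + 1, self.max_level)
        elif self.wm_util < self.GPU_BOUND:
            # GPU-bound: the CPU can afford to step down
            self.cpu_level = min(self.cpu_level + 1, self.max_level)
        # Between the bounds: hold; the periodic check will re-evaluate

gov = BottleneckGovernor()
for util in [0.97, 0.96, 0.98, 0.97, 0.99]:   # sustained CPU-bound phase
    gov.tick(util)
# The GPU has been stepped down gradually; the CPU is untouched.
```

The gradual one-step-per-check descent mirrors the paper's approach of lowering frequency until an appropriate level is found rather than jumping straight to the minimum.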

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 165
227 Informational Habits and Ideology as Predictors for Political Efficacy: A Survey Study of the Brazilian Political Context

Authors: Pedro Cardoso Alves, Ana Lucia Galinkin, José Carlos Ribeiro

Abstract:

Political participation can be a tricky subject to define, in no small part due to constant changes in the concept, the fruit of efforts to include new forms of participatory behavior that go beyond traditional institutional channels. With the advent of the internet and mobile technologies, defining political participation has become an even more complicated endeavor, given the breadth of politicized behaviors expressed through these mediums, be it in the organization of social movements, in the propagation of politicized texts, videos, and images, or in the micropolitical behaviors expressed in daily interaction. In fact, the very frontiers that delimit physical and digital spaces have become ever more diluted by technological advancements, leading to a hybrid existence that is simultaneously physical and digital and no longer bound, as it once was, by the temporal limitations of classic communications. Moving away from the institutionalized actions of traditional political behavior, the paper discusses an idea of constant and fluid participation that occurs in our daily lives through conversations, posts, tweets, and other digital forms of expression. This discussion focuses on the factors that precede more direct forms of political participation, interpreting the relation between informational habits, ideology, and political efficacy. Though some authors consider certain informational habits to be political participation in themselves, a distinction is made here to establish a logical flow of behaviors leading to participation: one must gather and process information before acting on it.
To reach this objective, a quantitative survey is currently being administered via Brazilian social media, evaluating feelings of political efficacy, issue-based ideological stances on social and economic questions, and informational habits pertaining to the collection and fact-checking of information and to the diversity of sources and ideological positions present in the participant’s political information network. The measure used for informational habits draws on a mix of information literacy and political sophistication concepts, bringing a more up-to-date understanding of how information and knowledge are produced and processed in contemporary hybrid (physical-digital) environments. Though data are still being collected, preliminary analyses point towards a strong correlation between informational habits and political efficacy, while ideology shows a weaker influence over efficacy. Moreover, social ideology and economic ideology appear strongly correlated in the sample; such intermingling of social and economic ideals is generally considered a red flag for political polarization.
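The correlations reported in the preliminary analysis are of the kind sketched below: a Pearson correlation between two per-respondent survey indices. The scores here are invented for illustration and do not reflect the authors' instrument or data.

```python
import numpy as np

# Hypothetical per-respondent scores on a 1-5 scale (illustrative only):
# an informational-habits index and a political-efficacy index.
habits = np.array([2.1, 3.4, 4.0, 1.8, 3.9, 2.7, 4.5, 3.1])
efficacy = np.array([2.4, 3.1, 4.2, 1.5, 3.7, 2.9, 4.4, 3.3])

# Pearson's r: the off-diagonal entry of the 2x2 correlation matrix
r = np.corrcoef(habits, efficacy)[0, 1]
# With these made-up scores the two indices track closely, so r is near 1;
# a value like this would count as a "strong correlation" in the paper's terms.
```

A full analysis would of course add significance tests and control for the ideology measures, which this sketch omits.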

Keywords: political efficacy, ideology, information literacy, cyberpolitics

Procedia PDF Downloads 212
226 Biopolymers: A Solution for Replacing Polyethylene in Food Packaging

Authors: Sonia Amariei, Ionut Avramia, Florin Ursachi, Ancuta Chetrariu, Ancuta Petraru

Abstract:

The food industry is one of the major generators of plastic waste derived from conventional synthetic petroleum-based polymers, which are non-biodegradable and used especially for packaging. After the food is consumed, these packaging materials raise serious environmental concerns, owing both to the materials themselves and to the organic residues that adhere to them. Specialists and researchers are therefore working to eliminate non-biodegradable conventional materials and unnecessary plastic, replacing them with biodegradable and edible materials in the common effort to protect the environment. Even though environmental and health concerns will lead more consumers to switch to a plant-based diet, most people will continue to include meat in their diet. This paper presents the possibility of replacing the polyethylene film on the surface of trays for meat preparations with biodegradable packaging obtained from biopolymers. During storage, meat products may deteriorate through lipid oxidation and microbial spoilage, as well as through changes in organoleptic characteristics. For this reason, different polymer mixtures and film-forming conditions must be studied to choose the packaging material that best ensures food safety. The proposed packaging compositions are obtained from alginate, agar, and starch, with glycerol as plasticizer. The tensile strength, elasticity, modulus of elasticity, thickness, density, microscopic images of the samples, roughness, opacity, humidity, water activity, and the amount and rate of water transfer through these packaging materials were analyzed.
A total of 28 samples with various compositions were analyzed. The sample with the highest values of hardness, density, and opacity, as well as the lowest water vapor permeability (1.2903×10⁻⁴ ± 4.79×10⁻⁶), had an alginate:agar:glycerol ratio of 3:1.25:0.75. The water activity of the analyzed films varied between 0.2886 and 0.3428 (aw < 0.6), demonstrating that all the compositions ensure preservation of the products by precluding microbial growth. Together, the determined parameters characterize the quality of the packaging films in terms of mechanical resistance, protection against light, and water transfer through the packaging. Acknowledgments: This work was supported by a grant of the Ministry of Research, Innovation, and Digitization, CNCS/CCCDI – UEFISCDI, project number PN-III-P2-2.1-PED-2019-3863, within PNCDI III.
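Water vapor permeability of this kind is typically obtained gravimetrically (cup method): the water-vapor transmission rate through the film is scaled by film thickness and the vapor-pressure difference across it. The sketch below shows that calculation; all numeric inputs and the unit convention are chosen for illustration, since the abstract does not state its units or raw measurements.

```python
def water_vapor_permeability(mass_gain_g: float, area_m2: float,
                             time_h: float, thickness_mm: float,
                             dp_kpa: float) -> float:
    """Gravimetric water vapor permeability:
    WVP = (delta_m / (A * t)) * L / delta_p, here in g*mm / (m^2 * h * kPa)."""
    wvtr = mass_gain_g / (area_m2 * time_h)   # transmission rate, g/(m^2*h)
    return wvtr * thickness_mm / dp_kpa

# Illustrative numbers, not the authors' measurements:
# 0.42 g of water crossed a 50 cm^2 film in 24 h under a 2.34 kPa gradient.
wvp = water_vapor_permeability(mass_gain_g=0.42, area_m2=0.005,
                               time_h=24.0, thickness_mm=0.08, dp_kpa=2.34)
```

Comparing such WVP values across the 28 compositions is what identifies the 3:1.25:0.75 alginate:agar:glycerol film as the best barrier.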

Keywords: meat products, alginate, agar, starch, glycerol

Procedia PDF Downloads 141
225 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations

Authors: Priyanka Bharti

Abstract:

Decades ago, designs were based on common sense and tradition; with advances in visualization technology and research, we can now comprehend the cognitive processes involved in decoding visual information. Even so, many aspects of visual communication still require intensive research to be explained adequately. Visuals represent information through images, symbols, and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension, and analysis of a situation, and they enhance problem-solving capabilities by enabling the processing of more data without overloading the decision maker. Research has claimed that visuals improve the learning environment by a factor of 400 compared with textual information. Visual information engages learners at a cognitive level and triggers the imagination, enabling users to process information faster (visuals are reportedly processed 60,000 times faster in the brain than text). Appropriate information, its visualization, and its presentation are known to aid and intensify the decision-making process. However, most of the literature discusses the role of visual aids in comprehension and decision making under normal conditions alone. Unlike in emergencies, in a normal situation (e.g., day-to-day life) users are neither exposed to stringent time constraints nor faced with the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, potentially fatal real-life situation that may inflict serious harm on human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario with little or no preparation, yet still take swift and appropriate decisions to save lives or possessions.
The resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes, and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, naturalistic decision making by experts has been studied in far more depth than that of ordinary users. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension and appropriate decisions during an emergency situation.

Keywords: cognition, visual, decision making, graphics, recognition

Procedia PDF Downloads 242