Search results for: perceived image
Measuring Corporate Brand Loyalties in Business Markets: A Case for Caution
Authors: Niklas Bondesson
Abstract:
Purpose: This paper examines how different facets of attitudinal brand loyalty are determined by different brand image elements in business markets. Design/Methodology/Approach: Statistical analysis is applied to data from a web survey covering 226 professional packaging buyers in eight countries. Findings: The results reveal that different brand loyalty facets have different antecedents. Affective brand loyalties (or loyalty 'feelings') are mainly driven by customer associations with service relationships, whereas customers' loyalty intentions (to purchase and recommend a brand) are triggered by associations with the general reputation of the company. The findings also indicate that willingness to pay a price premium is a distinct form of loyalty, with unique determinants. Research implications: Theoretically, the paper suggests that corporate B2B brand loyalty needs to be conceptualised with more refinement than has been done in extant B2B branding work. Methodologically, the paper highlights that single-item approaches can be fruitful when measuring B2B brand loyalty, and that multi-item scales can conceal important nuances in terms of understanding why customers are loyal. Practical implications: A loyalty 'silver metric' is an attractive idea, but this study indicates that firms that rely too heavily on a single type of brand loyalty risk missing important building blocks. Originality/Value/Contribution: The major contribution is a more multi-faceted conceptualisation, and measurement, of corporate B2B brand loyalty and its brand image determinants than extant work has provided.
Keywords: brand equity, business-to-business branding, industrial marketing, buying behaviour
Change through Stillness: Mindfulness Meditation as an Intervention for Men with Self-Perceived Problematic Pornography Use
Authors: Luke Sniewski, Pante Farvid, Phil Carter, Rita Csako
Abstract:
Background and Aims: Self-Perceived Problematic Porn Use (SPPPU) refers to individuals who identify as or perceive themselves to be addicted to porn. These individuals feel they are unable to regulate their porn consumption and experience adverse consequences as a result of their use in everyday life. To the best of the authors' knowledge, this research represents the first study to address pornography use with mindfulness meditation, and it aims to investigate the experiences and challenges of men with SPPPU as they engage in a mindfulness meditation intervention. As meditation is commonly characterized by sitting and observing one's internal experience with non-reaction and acceptance, the study's principal hypothesis was that consistent practice of meditation would develop participants' capacity to respond to cravings, urges, and unwanted thoughts in less reactive, more productive ways. Method: This 12-week mixed-methods study utilised a Single Case Experimental Design (SCED) with a standard AB design. Each participant was randomly assigned to an initial baseline period of between 2 and 5 weeks before learning the meditation technique and practicing it for the remainder of the 12-week study. The pilot study included 3 participants, while the intervention study included 12. The meditation technique used for the study involved a 15-minute guided breathing exercise in the morning, along with a 15-minute guided concentration meditation in the evening. Results: At the time of submission, only pilot study results were available. Results from the pilot study indicate an improved capacity for self-awareness of the uncomfortable mental and emotional states that drove participants' pornography use. Statistically significant reductions were also observed in daily porn use and total weekly time spent viewing porn, as well as in Pornography Craving Questionnaire (PCQ) and Problematic Pornography Use Scale (PPUS) scores. Conclusion: Pilot study results suggest that meditation could serve as a complementary tool for health professionals to provide to clients in conjunction with therapeutic interventions. Study limitations, directions for future research, and clinical implications are also discussed.
Keywords: meditation, behavioural change, pornography, mindfulness
A Process of Forming a Single Competitive Factor in the Digital Camera Industry
Authors: Kiyohiro Yamazaki
Abstract:
This paper examines the process by which a single competitive factor forms in the digital camera industry from the viewpoint of the product platform. To make product development easier for companies and to increase product introduction ratios, development efforts concentrate on improving and strengthening certain product attributes, and it is in this process that the product platform is continuously formed. The formation of this product platform raises the product development efficiency of individual companies but, as a trade-off, causes the unification of competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, it covers all product specification data released in Japan from 1995 to 2003, analyzes the composition of image sensors and optical lenses, identifies product platforms shared by multiple products, and discusses their application. The analysis shows that product platforms emerged in the development of standard products for the major market segments. Every major company has built product platforms of image sensors and optical lenses, and as a result, competitive factors became unified across the entire industry through this platform formation. In other words, platform formation improved the product development efficiency of individual firms; however, it also caused competitive factors to become unified throughout the industry.
Keywords: digital camera industry, product evolution trajectory, product platform, unification of competitive factors
Automated Video Surveillance System for Detection of Suspicious Activities during Academic Offline Examination
Authors: G. Sandhya Devi, G. Suvarna Kumar, S. Chandini
Abstract:
This research work aims to develop a system that will analyze and identify students who indulge in malpractices/suspicious activities during the course of an academic offline examination. Automated Video Surveillance provides an optimal solution which helps in monitoring the students and identifying the malpractice event immediately. This work is organized into three modules. The first module deals with performing an impersonation check using a PCA-based face recognition method which is done by cross checking his profile with the database. The presence or absence of the student is even determined in this module by implementing an image registration technique wherein a grid is formed by considering all the images registered using the frontal camera at the determined positions. Second, detecting such facial malpractices in which a student gets involved in conversation with another, trying to obtain unauthorized information etc., based on the threshold range evaluated by considering his/her mouth state whether open or closed. The third module deals with identification of unauthorized material or gadgets used in the examination hall by training the positive samples of the object through various stages. Here, a top view camera feed is analyzed to detect the suspicious activities. The system automatically alerts the administration when any suspicious activities are identified, thereby reducing the error rate caused due to manual monitoring. This work is an improvement over our previous work published in identifying suspicious activities done by examinees in an offline examination.Keywords: impersonation, image registration, incrimination, object detection, threshold evaluation
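To illustrate the PCA-based face recognition used for the impersonation check in the first module, the following Python sketch projects enrolled student photographs into an eigenface space and matches a probe frame from the frontal camera against them. It is a minimal sketch under assumed data shapes; the component count, distance rule, and threshold are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def enroll(face_vectors, n_components=50):
    """Fit an eigenface space from flattened, aligned student photos (one row per student)."""
    pca = PCA(n_components=n_components).fit(face_vectors)
    gallery = pca.transform(face_vectors)          # eigenface coordinates of each student
    return pca, gallery

def impersonation_check(pca, gallery, probe_vector, threshold=50.0):
    """Match a face captured during the exam against the enrolled gallery.
    Returns (closest student index, match flag); the threshold is illustrative only."""
    probe = pca.transform(probe_vector.reshape(1, -1))
    distances = np.linalg.norm(gallery - probe, axis=1)
    idx = int(np.argmin(distances))
    return idx, bool(distances[idx] < threshold)
```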
Near-Infrared Hyperspectral Imaging Spectroscopy to Detect Microplastics and Pieces of Plastic in Almond Flour
Authors: H. Apaza, L. Chévez, H. Loro
Abstract:
Plastic and microplastic pollution in the human food chain is a serious problem for human health that requires more elaborate techniques capable of identifying their presence in different kinds of food. Hyperspectral imaging is an optical technique that can detect the presence of different materials in an image and can be used to detect plastics and microplastics in a scene. To do this, statistical techniques are required that need to be evaluated and compared in order to find the most efficient ones. In this work, two problems related to the presence of plastics are addressed: the first is to detect and identify pieces of plastic immersed in almond seeds, and the second is to detect and quantify microplastic in almond flour. To do this, we analyse hyperspectral images taken in the range of 900 to 1700 nm using four unmixing techniques: least squares unmixing (LSU), non-negatively constrained least squares unmixing (NCLSU), fully constrained least squares unmixing (FCLSU), and scaled constrained least squares unmixing (SCLSU). The NCLSU, FCLSU, and SCLSU techniques locate the region where the plastic is found and also quantify the amount of microplastic contained in the almond flour. The SCLSU technique estimated an abundance of 13.03% microplastics and 86.97% almond flour, compared with the 16.66% microplastics and 83.33% almond flour prepared for the experiment. Results show the feasibility of applying near-infrared hyperspectral image analysis for the detection of plastic contaminants in food.
Keywords: food, plastic, microplastic, NIR hyperspectral imaging, unmixing
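The fully constrained least squares (FCLSU) step named above can be sketched for a single pixel spectrum as follows. This is a minimal sketch, not the study's implementation: the endmember spectra, band count, and mixing fractions are toy values, and the sum-to-one constraint is enforced approximately via the common weighted row-of-ones trick on top of non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

def fclsu(pixel_spectrum, endmembers, delta=1e3):
    """Fully constrained least squares unmixing of one pixel.

    endmembers: (n_bands, n_endmembers) matrix of reference spectra.
    Returns abundance fractions that are non-negative and (approximately)
    sum to one, thanks to the heavily weighted row of ones appended below.
    """
    n_end = endmembers.shape[1]
    A = np.vstack([endmembers, delta * np.ones((1, n_end))])
    b = np.concatenate([pixel_spectrum, [delta]])
    abundances, _ = nnls(A, b)
    return abundances

# Toy example with two hypothetical endmembers over five bands.
E = np.array([[0.2, 0.8],
              [0.3, 0.7],
              [0.5, 0.5],
              [0.6, 0.4],
              [0.9, 0.1]])
mixed = 0.85 * E[:, 0] + 0.15 * E[:, 1]   # e.g. 85% flour, 15% plastic
print(fclsu(mixed, E))                     # approximately [0.85, 0.15]
```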
Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8 and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
Two Fold Dimensional Analysis of Post-Employment Dissonance in Employer Branding Framework of IT SMEs
Authors: J. Janani, S. Gomathi
Abstract:
Although the new economy offers an ample talent pool, the corporate world faces hardship in the mismatch between talent demand and supply. Employer branding is relevant as a way to combat this crisis. Employer branding is gaining popularity in large companies, especially IT companies, but awareness of it remains low among IT SMEs (Small and Medium-sized Enterprises). A range of analyses has been carried out on employer branding from different perspectives and in different industries, yet the hidden factor behind employer branding, namely post-employment dissonance, has received little attention in the research. The present study examines employer branding as employer image and organizational identity. It focuses on two-fold dimensional branding initiatives, namely job offer attributes and organizational attractiveness. The study will depict the dissonance level and its variation across these initiatives among former employees and the post-employment dissonance among present employees in IT SMEs, and it will also examine prospective employees' perception of the employer with regard to the stated branding initiatives. Demographic factors such as generational factors (Gen X and Gen Y) and career stages are a major focus of the study. The study will encourage IT SMEs to strengthen their employer branding effectively and efficiently by implementing varied strategies, helping them to enhance their talent pool, which will eventually result in talent attraction and talent retention.
Keywords: employer image, organizational identity, post-employment dissonance, job offer attributes, organizational attractiveness, talent pool, career stages, generational factors, information technology, SMEs
A Comparative Study on Deep Learning Models for Pneumonia Detection
Authors: Hichem Sassi
Abstract:
Pneumonia, being a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing mortality associated with pneumonia. Presently, X-ray diagnosis stands out as a reasonably effective method. However, the manual scrutiny of a patient's X-ray chest radiograph by a proficient practitioner usually requires 5 to 15 minutes. In situations where cases are concentrated, this places immense pressure on clinicians for timely diagnosis. Relying solely on the visual acumen of imaging doctors proves to be inefficient, particularly given the low speed of manual analysis. Therefore, the integration of artificial intelligence into the clinical image diagnosis of pneumonia becomes imperative. Additionally, AI recognition is notably rapid, with convolutional neural networks (CNNs) demonstrating superior performance compared to human counterparts in image identification tasks. To conduct our study, we utilized a dataset comprising chest X-ray images obtained from Kaggle, encompassing a total of 5216 training images and 624 test images, categorized into two classes: normal and pneumonia. Employing five mainstream network algorithms, we undertook a comprehensive analysis to classify these diseases within the dataset, subsequently comparing the results. The integration of artificial intelligence, particularly through improved network architectures, stands as a transformative step towards more efficient and accurate clinical diagnoses across various medical domains.Keywords: deep learning, computer vision, pneumonia, models, comparative study
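A training loop of the kind involved can be sketched in PyTorch as below. The paper benchmarks five mainstream architectures; this snippet shows a single illustrative setup, and the dataset path, network choice, and hyperparameters are assumptions rather than the study's configuration.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Illustrative two-class chest X-ray classifier (normal vs. pneumonia).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("chest_xray/train", transform=tfm)  # placeholder path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)   # one of many possible CNN backbones
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```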
Challenges and Recommendations for Medical Device Tracking and Traceability in Singapore: A Focus on Nursing Practices
Authors: Zhuang Yiwen
Abstract:
The paper examines the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. One of the major challenges identified is the lack of a standard coding system for medical devices, which makes it difficult to track them effectively. The paper suggests the use of the Unique Device Identifier (UDI) as a single standard for medical devices to improve tracking and reduce errors. The paper also explores the use of barcoding and image recognition to identify and document medical devices in nursing practices. In nursing practices, the use of barcodes for identifying medical devices is common. However, the information contained in these barcodes is often inconsistent, making it challenging to identify which segment contains the model identifier. Moreover, the use of barcodes may be improved with the use of UDI, but many subsidized accessories may still lack barcodes. The paper suggests that the readiness for UDI and barcode standardization requires standardized information, fields, and logic in electronic medical record (EMR), operating theatre (OT), and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. Nursing workflow and data flow also need to be taken into account. The paper also explores the use of image recognition, specifically the Tesseract OCR engine, to identify and document implants in public hospitals due to limitations in barcode scanning. The study found that the solution requires an implant information database and checking output against the database. The solution also requires customization of the algorithm, cropping out objects affecting text recognition, and applying adjustments. The solution requires additional resources and costs for a mobile/hardware device, which may pose space constraints and require maintenance of sterile criteria. The integration with EMR is also necessary, and the solution require changes in the user's workflow. The paper suggests that the long-term use of Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) as a supporting terminology to improve clinical documentation and data exchange in healthcare. SNOMED CT provides a standardized way of documenting and sharing clinical information with respect to procedure, patient and device documentation, which can facilitate interoperability and data exchange. In conclusion, the paper highlights the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. The paper suggests the use of UDI and barcode standardization to improve tracking and reduce errors. It also explores the use of image recognition to identify and document medical devices in nursing practices. The paper emphasizes the importance of standardized information, fields, and logic in EMR, OT, and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. These recommendations could help the Singapore healthcare system to improve tracking and traceability of medical devices and ultimately enhance patient safety.Keywords: medical device tracking, unique device identifier, barcoding and image recognition, systematized nomenclature of medicine clinical terms
Predicting Halal Food Consumption for Muslim Turkish Immigrants Living in Germany
Authors: Elif Eroglu Hall, Nurdan Sevim
Abstract:
The purpose of this research is to clarify the determinants of halal food consumption among Muslim immigrants using the components of the Theory of Planned Behavior. The study was conducted by surveying Turkish immigrants living in Cologne, Germany. The results show that the intention of Muslim Turkish immigrants to consume halal food is influenced by attitude, subjective norms, and perceived behavioral control.
Keywords: halal food, immigrants, religion, theory of planned behavior
Investigating Kinetics and Mathematical Modeling of Batch Clarification Process for Non-Centrifugal Sugar Production
Authors: Divya Vats, Sanjay Mahajani
Abstract:
The clarification of sugarcane juice plays a pivotal role in the production of non-centrifugal sugar (NCS), profoundly influencing the quality of the final NCS product. In this study, we have investigated the kinetics and mathematical modeling of the batch clarification process. The turbidity of the clarified cane juice (NTU) emerges as the determinant of the end product's color. Moreover, this parameter underscores the significance of considering other variables as performance indicators for assessing the efficacy of the clarification process. Temperature-controlled experiments were meticulously conducted in a laboratory-scale batch mode. The primary objective was to discern the essential and optimized parameters crucial for augmenting the clarity of cane juice. Additionally, we explored the impact of pH and flocculant loading on the kinetics. Particle Image Velocimetry (PIV) was employed to understand the particle-particle and fluid-particle interactions. This technique facilitated a comprehensive understanding, paving the way for subsequent multiphase computational fluid dynamics (CFD) simulations using the Eulerian-Lagrangian approach in Ansys Fluent. These simulations accurately reproduced comparable velocity profiles. The mechanism identified in this study underpins a mathematical model and presents a valuable framework for transitioning from the traditional batch process to a continuous process. The ultimate aim is to attain heightened productivity and unwavering consistency in product quality.
Keywords: non-centrifugal sugar, particle image velocimetry, computational fluid dynamics, mathematical modeling, turbidity
Substitutional Inference in Poetry: Word Choice Substitutions Craft Multiple Meanings by Inference
Authors: J. Marie Hicks
Abstract:
The art of the poetic conjoins meaning and symbolism with imagery and rhythm. Perhaps the reader might read this opening sentence as 'The art of the poetic combines meaning and symbolism with imagery and rhythm,' which holds a similar message, but is not quite the same. The reader understands that these factors are combined in this literary form, but to gain a sense of the conjoining of these factors, the reader is forced to consider that these aspects of poetry are not simply combined, but actually adjoin, abut, skirt, or touch in the poetic form. This alternative word choice is an example of substitutional inference. Poetry is, ostensibly, a literary form where language is used precisely or creatively to evoke specific images or emotions for the reader. Often, the reader can predict a coming rhyme or descriptive word choice in a poem, based on previous rhyming pattern or earlier imagery in the poem. However, there are instances when the poet uses an unexpected word choice to create multiple meanings and connections. In these cases, the reader is presented with an unusual phrase or image, requiring that they think about what that image is meant to suggest, and their mind also suggests the word they expected, creating a second, overlying image or meaning. This is what is meant by the term 'substitutional inference.' This is different than simply using a double entendre, a word or phrase that has two meanings, often one complementary and the other disparaging, or one that is innocuous and the other suggestive. In substitutional inference, the poet utilizes an unanticipated word that is either visually or phonetically similar to the expected word, provoking the reader to work to understand the poetic phrase as written, while unconsciously incorporating the meaning of the line as anticipated. In other words, by virtue of a word substitution, an inference of the logical word choice is imparted to the reader, while they are seeking to rationalize the word that was actually used. There is a substitutional inference of meaning created by the alternate word choice. For example, Louise Bogan, 4th Poet Laureate of the United States, used substitutional inference in the form of homonyms, malapropisms, and other unusual word choices in a number of her poems, lending depth and greater complexity, while actively engaging her readers intellectually with her poetry. Substitutional inference not only adds complexity to the potential interpretations of Bogan’s poetry, as well as the poetry of others, but provided a method for writers to infuse additional meanings into their work, thus expressing more information in a compact format. Additionally, this nuancing enriches the poetic experience for the reader, who can enjoy the poem superficially as written, or on a deeper level exploring gradations of meaning.Keywords: poetic inference, poetic word play, substitutional inference, word substitution
Moderating and Mediating Effects of Business Model Innovation Barriers during Crises: A Structural Equation Model Tested on German Chemical Start-Ups
Authors: Sarah Mueller-Saegebrecht, André Brendler
Abstract:
Business model innovation (BMI) as an intentional change of an existing business model (BM) or the design of a new BM is essential to a firm's development in dynamic markets. The relevance of BMI is also evident in the ongoing COVID-19 pandemic, in which start-ups, in particular, are affected by limited access to resources. However, first studies also show that they react faster to the pandemic than established firms. A strategy to successfully handle such threatening dynamic changes represents BMI. Entrepreneurship literature shows how and when firms should utilize BMI in times of crisis and which barriers one can expect during the BMI process. Nevertheless, research merging BMI barriers and crises is still underexplored. Specifically, further knowledge about antecedents and the effect of moderators on the BMI process is necessary for advancing BMI research. The addressed research gap of this study is two-folded: First, foundations to the subject on how different crises impact BM change intention exist, yet their analysis lacks the inclusion of barriers. Especially, entrepreneurship literature lacks knowledge about the individual perception of BMI barriers, which is essential to predict managerial reactions. Moreover, internal BMI barriers have been the focal point of current research, while external BMI barriers remain virtually understudied. Second, to date, BMI research is based on qualitative methodologies. Thus, a lack of quantitative work can specify and confirm these qualitative findings. By focusing on the crisis context, this study contributes to BMI literature by offering a first quantitative attempt to embed BMI barriers into a structural equation model. It measures managers' perception of BMI development and implementation barriers in the BMI process, asking the following research question: How does a manager's perception of BMI barriers influence BMI development and implementation in times of crisis? Two distinct research streams in economic literature explain how individuals react when perceiving a threat. "Prospect Theory" claims that managers demonstrate risk-seeking tendencies when facing a potential loss, and opposing "Threat-Rigidity Theory" suggests that managers demonstrate risk-averse behavior when facing a potential loss. This study quantitively tests which theory can best predict managers' BM reaction to a perceived crisis. Out of three in-depth interviews in the German chemical industry, 60 past BMIs were identified. The participating start-up managers gave insights into their start-up's strategic and operational functioning. After, each interviewee described crises that had already affected their BM. The participants explained how they conducted BMI to overcome these crises, which development and implementation barriers they faced, and how severe they perceived them, assessed on a 5-point Likert scale. In contrast to current research, results reveal that a higher perceived threat level of a crisis harms BM experimentation. Managers seem to conduct less BMI in times of crisis, whereby BMI development barriers dampen this relation. The structural equation model unveils a mediating role of BMI implementation barriers on the link between the intention to change a BM and the concrete BMI implementation. In conclusion, this study confirms the threat-rigidity theory.Keywords: barrier perception, business model innovation, business model innovation barriers, crises, prospect theory, start-ups, structural equation model, threat-rigidity theory
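As a rough illustration of the mixed-effects step mentioned above (not the full structural equation model, and using synthetic stand-in data rather than the 60 interviewed BMI cases), a random-intercept model with the start-up as the grouping factor could be fitted as follows; the variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 60 BMI cases nested in 15 hypothetical start-ups.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "startup_id":   rng.integers(0, 15, 60),
    "threat_level": rng.integers(1, 6, 60),    # 5-point Likert rating
    "dev_barriers": rng.integers(1, 6, 60),    # perceived development barriers
    "bmi_extent":   rng.normal(3.0, 1.0, 60),
})

# The random intercept per start-up captures observation-specific heterogeneity.
model = smf.mixedlm("bmi_extent ~ threat_level + dev_barriers",
                    data=df, groups=df["startup_id"]).fit()
print(model.summary())
```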
From the Local to the Global: New Terrorism
Authors: Shamila Ahmed
Abstract:
The paper examines how the fluidity between the local level and the global level is an intrinsic feature of new terrorism. Through using cosmopolitanism, the narratives of the two opposing sides of ISIS and the ‘war on terrorism’ response are explored. It is demonstrated how the fluidity between these levels facilitates the radicalisation process through exploring how groups such as ISIS highlight the perceived injustices against Muslims locally and globally and therefore exploit the globalisation process which has reduced the space between these levels. Similarly, it is argued that the ‘war on terror’ involves the intersection of fear, security, threat, risk and social control as features of both the international ‘war on terror’ and intra state policies.Keywords: terrorism, war on terror, cosmopolitanism, global level terrorism
Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques
Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri
Abstract:
Digitization of urban features is a tedious and time-consuming process when done manually. In addition to this problem, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper makes an attempt to classify urban objects in the satellite image using object-oriented classification techniques in which various classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. Building layer obtained as a result of object-oriented classification along with already available building layers was used. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery, spatial technology along with logical reasoning and mathematical considerations. The results clearly depict the ability of Remote Sensing and GIS to solve complex problems in urban scenarios like studying urban sprawl and identification of more complex features in an urban area like high-rise buildings and multi-dwelling units. Object-Oriented Technique has been proven to be effective and has yielded an overall efficiency of 80 percent in the classification of high-rise buildings.Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology
Exploratory Case Study: Judicial Discretion and Political Statements Transforming the Actions of the Commissioner for the South African Revenue Service
Authors: Werner Roux Uys
Abstract:
The Commissioner for the South African Revenue Service (SARS) holds a high position of trust in South African society and a lack of trust by taxpayers in the Commissioner’s actions or conduct could compromise SARS’ management of public finances. Tax morality – which is implicit in the social contract between taxpayers and the state – includes distinct phenomena that can cause a breakdown if there is a perceived lack of action on the part of the Commissioner to ensure public finances are kept safe. To promote tax morality, the Commissioner must support the judiciary in the exercise of its discretion to punish fraudulent tax activities and corrupt tax practices. For several years the political meddling in the Commissioner’s actions and conduct have caused perceived abuse of power at SARS, and taxpayers believed their hard-earned income paid over to SARS would be fruitless and wasteful expenditure. The purpose of this article is to identify and analyse previous decisions held by the South African judiciary regarding the Commissioner’s actions and conduct in tax matters, as well as consider important political statements and newspaper bulletins for the purpose of this research. The study applies a qualitative research approach and exploratory case study technique. Keywords were selected and inserted in the LexisNexis electronic database to systematically identify applicable case law where the ratio decidendi of the court referred to the actions and/or conduct of the Commissioner. Specific real-life statements, including political statements and newspaper bulletins, were selected to support the topic at hand. The purpose of the study is to educate the public about the perceptions that have transformed taxpayers’ behaviour towards the Commissioner for SARS since South Africa’s fledgling constitutional democracy was inaugurated in 1994. The study adds to the literature by identifying key characteristics or distinct phenomena regarding the actions and conduct of the Commissioner affecting taxpayers’ behaviour, including discretionary decision-making. From the findings, it emerged that SARS must abide by its (own) laws and that there is a need to educate not only South African taxpayers about tax morality, but also the public in general.Keywords: commissioner, SARS, action and conduct, judiciary, discretionry, decsion-making
Disaggregating Communities and the Making of Factional States: Evidence from Joint Forest Management in Sundarban, India
Authors: Amrita Sen
Abstract:
In the face of a growing insurgent movement and the perceived failure of the state and the market towards sustainable resource management, a range of decentralized forest management policies was formulated in the last two decades, which recognized the need for community representations within the statutory methods of forest management. The recognition conceded on the virtues of ecological sustainability and traditional environmental knowledge, which were considered to be the principal repositories of the forest dependent communities. The present study, in the light of empirical insights, reflects on the contemporary disjunctions between the preconceived communitarian ethic in environmentalism and the lived reality of forest based life-worlds. Many of the popular as well as dominant ideologies, which have historically shaped the conceptual and theoretical understanding of sociology, needs further perusal in the context of the emerging contours of empirical knowledge, which lends opportunities for substantive reworking and analysis. The image of the community appears to be one of those concepts, an identity which has for long defined perspectives and processes associated with people living together harmoniously in small physical spaces. Through an ethnographic account of the implementation of Joint Forest Management (JFM) in a forest fringe village in Sundarban, the study explores the ways in which the idea of ‘community’ gets transformed through the process of state-making, rendering the necessity of its departure from the standard, conventional definition of homogeneity and internal equity. The study necessitates an attention towards the anthropology of micro-politics, disaggregating an essentially constructivist anthropology of ‘collective identities’, which can render the visibility of political mobilizations plausible within the seemingly culturalist production of communities. The two critical questions that the paper seeks to ask in this context are: how the ‘local’ is constituted within community based conservation practices? Within the efforts of collaborative forest management, how accurately does the depiction of ‘indigenous environmental knowledge’, subscribe to its role of sustainable conservation practices? Reflecting on the execution of JFM in Sundarban, the study critically explores the ways in which the state ceases to be ‘trans-national’ and interacts with the rural life-worlds through its local factions. Simultaneously, the study attempts to articulate the scope of constructing a competing representation of community, shaped by increasing political negotiations and bureaucratic alignments which strains against the usual preoccupations with tradition primordiality and non material culture as well as the amorous construction of indigeneity.Keywords: community, environmentalism, JFM, state-making, identities, indigenous
Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit
Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana
Abstract:
Internet of Things (IoT) devices and edge computing have become some of the most significant and widely discussed innovations, with the potential to improve and disrupt traditional business and industry alike. Pressing new challenges such as the COVID-19 pandemic have endangered workforces and business processes. Alongside the drastically changed business landscape left in the aftermath of the global pandemic loom the threats of a global energy crisis, global warming, and increasingly heated global politics that risk becoming a new Cold War. Emerging technologies such as edge computing and specially designed visual processing units therefore present great opportunities for business. The literature is reviewed on how the Internet of Things and this disruptive wave will affect business: how these new events impact current business, how businesses need to adapt to changes in the market and the world, and how benchmark testing of newer consumer-marketed devices, such as IoT devices equipped with edge computing hardware, can increase efficiency and reduce the risk posed by current and looming crises. Throughout the paper, we explain the technologies behind present practice and why they will be innovations that change traditional practice, through brief introductions to cloud computing, edge computing, and the Internet of Things, and how they will lead into the future.
Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification
Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images
Authors: U. Datta
Abstract:
The main objective of this study is to find a suitable approach to monitor the land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection method is unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from series of multispectral image data over a long period of time, assuming there is no considerable change during that time period and then compare it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. Statistical composite hypothesis technique is used for estimating pixel based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect the changed pixel from probabilistic estimated model of the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, 8-neighborhood pixels around the pixel under test are also considered. The multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. First and foremost challenge is to get quite a large number of datasets for multivariate distribution modelling. A large number of images are always discarded due to cloud coverage. Due to imperfect modelling there will be high probability of false alarm. Overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection
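A simplified pixel-wise version of this test can be sketched as follows: a multivariate Gaussian is fitted per pixel to the series assumed change-free, and a likelihood-ratio (Mahalanobis-type) statistic for the later acquisition is thresholded against a chi-square quantile. The 8-neighbourhood refinement and the Sentinel-2/Landsat-8 data handling are omitted, and the exact GLRT statistic used in the study may differ.

```python
import numpy as np
from scipy.stats import chi2

def change_map(series, new_image, alpha=0.01):
    """Pixel-wise test of a later acquisition against a 'no-change' model.

    series:    (T, H, W, B) co-registered multispectral stack, assumed change-free
    new_image: (H, W, B) later acquisition
    Returns a boolean (H, W) change mask.
    """
    T, H, W, B = series.shape
    mean = series.mean(axis=0)                              # per-pixel mean spectrum
    diff = series - mean
    cov = np.einsum("thwi,thwj->hwij", diff, diff) / (T - 1)
    cov += 1e-6 * np.eye(B)                                 # regularise ill-conditioned pixels
    resid = new_image - mean
    inv = np.linalg.inv(cov)                                # batched (H, W, B, B) inverse
    stat = np.einsum("hwi,hwij,hwj->hw", resid, inv, resid)
    return stat > chi2.ppf(1.0 - alpha, df=B)               # chi-square threshold, B d.o.f.
```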
Determination of ILSS of Composite Materials Using Micromechanical FEA Analysis
Authors: K. Rana, H.A.Saeed, S. Zahir
Abstract:
Inter Laminar Shear Stress (ILSS) is a main key parameter which quantify the properties of composite materials. These properties can ascertain the use of material for a specific purpose like aerospace, automotive etc. A modelling approach for determination of ILSS is presented in this paper. Geometric modelling of composite material is performed in TEXGEN software where reinforcement, cured matrix and their interfaces are modelled separately as per actual geometry. Mechanical properties of matrix and reinforcements are modelled separately which incorporated anisotropy in the real world composite material. ASTM D2344 is modelled in ANSYS for ILSS. In macroscopic analysis model approximates the anisotropy of the material and uses orthotropic properties by applying homogenization techniques. Shear Stress analysis in that case does not show the actual real world scenario and rather approximates it. In this paper actual geometry and properties of reinforcement and matrix are modelled to capture the actual stress state during the testing of samples as per ASTM standards. Testing of samples is also performed in order to validate the results. Fibre volume fraction of yarn is determined by image analysis of manufactured samples. Fibre volume fraction data is incorporated into the numerical model for correction of transversely isotropic properties of yarn. A comparison between experimental and simulated results is presented.Keywords: ILSS, FEA, micromechanical, fibre volume fraction, image analysis
Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network
Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa łucja Stępień, Faranak Tayefi, Paweł Moskal
Abstract:
This paper develops the transformation of non-image data into 2-dimensional matrices, as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body, as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format and this fact opens a question on whether they can be used for training CNN. In this contribution, the main focus of this paper is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography
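One naive way to carry out such a vector-to-matrix transformation is zero-padding followed by reshaping, sketched below; this is only an illustration and not necessarily the encoding developed in the paper, and the feature values are hypothetical.

```python
import numpy as np

def vector_to_matrix(features, size=16):
    """Map a short feature vector (e.g. one PET coincidence-event record)
    onto a fixed-size 2-D matrix so it can be fed to a CNN."""
    padded = np.zeros(size * size, dtype=np.float32)
    padded[: len(features)] = features
    return padded.reshape(size, size)

event = np.array([12.4, 3.1, 0.7, 118.0, 42.5])   # hypothetical event features
image = vector_to_matrix(event)                    # (16, 16) CNN-ready input
```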
Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. Within the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features into text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives to which the witness can use to hopefully identify a suspect from. This reduces subjectivity in decision making both by the eyewitness and the artist while giving an opportunity for the witness to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus increasing response times and mitigating potentially malicious human intervention. With publically available 'CelebFaces Attributes Dataset' (CelebA) and additionally providing verbal description as training data, the proposed architecture is able to effectively produce facial structures from given text. Word Embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network with the intent of achieving optimal hyperparameters in a fraction the time of a typical brute force approach. With the exception of the ‘CelebA’ training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: The Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). A high percentile output from this performance matrix should attribute to demonstrating the accuracy, in hope of proving that the proposed approach can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has potential to increase the ratio of criminal cases that can be ultimately resolved using eyewitness information gathering.Keywords: RNN, GAN, NLP, facial composition, criminal investigation
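The two evaluation factors named above can be computed directly with scikit-image, as sketched below; the snippet assumes floating-point images scaled to [0, 1] and a recent scikit-image version (the channel_axis argument).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def composite_scores(generated, ground_truth):
    """Score a generated facial composite against the true photograph.
    Inputs are H x W or H x W x C arrays in [0, 1]; higher is a closer match."""
    psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=1.0)
    ssim = structural_similarity(
        ground_truth, generated, data_range=1.0,
        channel_axis=-1 if generated.ndim == 3 else None,
    )
    return psnr, ssim
```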
The Influence of Culture on Manifestations of Animus
Authors: Anahit Khananyan
Abstract:
The article summarises the results of long-term Jungian analysis with female clients from Eastern and Asian countries, which belong to collectivist cultures. The goal of the paper is to describe the cultural complex that the author found in the analysis of women of collectivist culture, which she named 'the repression of the Animus'. C.G. Jung himself and the post-Jungians generally studied conditions caused by possession by the Animus; conditions and cases of a repressed Animus, depending on the type of culture and its cultural complexes, have not been widely described. C.G. Jung recognized the Animus as one component of a pair of opposites in the psyche of women: femininity and the Animus. On the path of individuation, awareness of the manifestations of the Animus plays an important role: understanding the differences between the negative and the positive Animus, as well as between the Animus and the Shadow; withstanding the tension of the presence of the pair of opposites, femininity and the Animus; accepting that tension; finding a balance between them; and reconciling these opposites. All of the above are steps towards the realization of the Animus, its release, and the healing of the psyche. In the paper, the author shares her experience of analyzing women from different collectivist cultures and of recognizing the repressed Animus during analysis. She also describes some peculiarities of upbringing and cultural traditions that reflect the cultural complex of the repression of the Animus. This complex is manifested in traditions of girls' upbringing according to which the image of a woman with overly developed femininity and an absent or underdeveloped Animus is idealized and encouraged, along with an evaluative attitude towards females, who must correspond to this image and fulfill the role thus prescribed in the family and society.
Keywords: analysis, cultural complex, animus, manifestation, culture
Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce radiation dose for chest CT examination by including Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The estimation of effective dose (ED) in both scans with and without mAs modulation was done via multiplication of Dose Length Product (DLP) to a conversion factor. Results were compared to those measured with a CT-Expo software. The size specific dose estimation (SSDE) values were obtained by multiplication of the volume CT dose index (CTDIvol) with a conversion size factor related to the phantom’s effective diameter. Objective assessment of image quality was performed with Signal to Noise Ratio (SNR) measurements in phantom. SPSS software was used for data analysis. Results showed including CARE Dose 4D; ED was lowered by 48.35% and 51.51% using DLP and CT-expo, respectively. In addition, ED ranges between 7.01 mSv and 6.6 mSv in case of standard protocol, while it ranges between 3.62 mSv and 3.2 mSv with TCM. Similar results are found for SSDE; dose was higher without TCM of 16.25 mGy and was lower by 48.8% including TCM. The SNR values calculated were significantly different (p=0.03<0.05). The highest one is measured on images acquired with TCM and reconstructed with Filtered back projection (FBP). In conclusion, this study proves the potential of TCM technique in SSDE and ED reduction and in conserving image quality with high diagnostic reference level for thoracic CT examinations.Keywords: anthropomorphic phantom, computed tomography, CT-expo, radiation dose
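The dose estimates described above reduce to two multiplications, sketched below. The chest conversion coefficient of roughly 0.014 mSv per mGy·cm is a commonly quoted adult value, and the size factor would be looked up (e.g. from AAPM Report 204) for the phantom's effective diameter; the numbers in the example are illustrative, not the study's measurements.

```python
def effective_dose(dlp_mGy_cm, k=0.014):
    """ED (mSv) = DLP (mGy.cm) x region-specific conversion factor k."""
    return dlp_mGy_cm * k

def ssde(ctdi_vol_mGy, size_factor):
    """SSDE (mGy) = CTDIvol (mGy) x size-dependent conversion factor."""
    return ctdi_vol_mGy * size_factor

# Illustrative numbers only:
print(effective_dose(480.0))   # ~6.7 mSv for an assumed DLP of 480 mGy.cm
print(ssde(11.0, 1.45))        # SSDE for an assumed CTDIvol and size factor
```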
A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah
Abstract:
Hyperspectral imagery (HSI) typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Therefore, the semantic interpretation is a challenging task of HSI analysis. We focused in this paper on object classification as HSI semantic interpretation. However, HSI classification still faces some issues, among which are the following: The spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. Therefore, the high number of spectral bands and the low number of training samples pose the problem of the curse of dimensionality. In order to resolve this problem, we propose to introduce the process of dimensionality reduction trying to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on spatial hypergraph embedding model to represent higher order relationships with different weights of the spatial neighbors corresponding to the centroid of pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The presented approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph
Soil Salinity from Wastewater Irrigation in Urban Greenery
Authors: H. Nouri, S. Chavoshi Borujeni, S. Anderson, S. Beecham, P. Sutton
Abstract:
The potential risk of salt leaching through wastewater irrigation is of concern for most local governments and city councils. Despite the necessity of salinity monitoring and management in urban greenery, most attention has been on agricultural fields. This study was defined to investigate the capability and feasibility of monitoring and predicting soil salinity using near sensing and remote sensing approaches using EM38 surveys, and high-resolution multispectral image of WorldView3. Veale Gardens within the Adelaide Parklands was selected as the experimental site. The results of the near sensing investigation were validated by testing soil salinity samples in the laboratory. Over 30 band combinations forming salinity indices were tested using image processing techniques. The outcomes of the remote sensing and near sensing approaches were compared to examine whether remotely sensed salinity indicators could map and predict the spatial variation of soil salinity through a potential statistical model. Statistical analysis was undertaken using the Stata 13 statistical package on over 52,000 points. Several regression models were fitted to the data, and the mixed effect modelling was selected the most appropriate one as it takes to account the systematic observation-specific unobserved heterogeneity. Results showed that SAVI (Soil Adjusted Vegetation Index) was the only salinity index that could be considered as a predictor for soil salinity but further investigation is needed. However, near sensing was found as a rapid, practical and realistically accurate approach for salinity mapping of heterogeneous urban vegetation.Keywords: WorldView3, remote sensing, EM38, near sensing, urban green spaces, green smart cities
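For reference, the SAVI predictor singled out above is computed band-wise as follows; the band inputs and the soil-brightness factor L = 0.5 are the conventional choices rather than study-specific values.

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index: ((NIR - Red) / (NIR + Red + L)) * (1 + L)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + L) * (1.0 + L)
```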
A Cross-Sectional Study on Clinical Self-Efficacy of Final Year School of Nursing Students among Universities of Tigray Region, Northern Ethiopia
Authors: Awole Seid, Yosef Zenebe, Hadgu Gerensea, Kebede Haile Misgina
Abstract:
Background: Clinical competence is one of the ultimate goals of nursing education. Clinical skill is more than successfully performing tasks; it incorporates client assessment, identification of deficits, and the ability to think critically to provide solutions. Assessing clinical competence, particularly identifying gaps that need improvement and determining the educational needs of nursing students, is of great importance in nursing education. Thus, this study aims to determine the clinical self-efficacy of final-year school of nursing students at three universities of Tigray Region. Methods: A cross-sectional study was conducted on 224 final-year school of nursing students from the departments of nursing, psychiatric nursing, and midwifery at three universities of Tigray region. An anonymous self-administered questionnaire was used to collect data in June 2017. The data were analyzed using SPSS version 20, and the results are described using tables and charts as required. Logistic regression was employed to test associations. Result: The mean age of students was 22.94 ± 1.44 years. Overall, 21% of students graduated from a department in which they were not interested. The study demonstrated that 28.6% had poor and 71.4% had good perceived clinical self-efficacy. In addition, 43.8% of psychiatric nursing and 32.6% of comprehensive nursing students had poor clinical self-efficacy. Among the four domains, 39.3% and 37.9% had poor clinical self-efficacy with regard to 'Professional development' and 'Management of care', respectively. Place of the institution [AOR=3.480 (1.333 - 9.088), p=0.011], interest during department selection [AOR=2.202 (1.045 - 4.642), p=0.038], and theory-practice gap [AOR=0.224 (0.110 - 0.457), p=0.000] were significantly associated with perceived clinical self-efficacy. Conclusion: The proportion of students with poor clinical self-efficacy was high. Place of institution, the theory-practice gap, and students' interest in the discipline were the significant predictors of clinical self-efficacy. Students from the youngest universities had good clinical self-efficacy. During department selection, students' interest should be respected. The universities and other stakeholders should improve the capacity of surrounding affiliated teaching hospitals to set and improve care standards in order to narrow the theory-practice gap. School faculties should provide training to hospital staff and monitor standards of clinical procedures.
Keywords: clinical self-efficacy, nursing students, Tigray, northern Ethiopia
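The adjusted odds ratios reported above come from a multivariable logistic regression (the study itself used SPSS version 20); an equivalent computation is sketched below on synthetic stand-in data, since the survey data are not reproduced here, and the variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 224 survey responses.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "poor_efficacy":       rng.integers(0, 2, 224),
    "not_interested":      rng.integers(0, 2, 224),   # assigned to a disliked department
    "theory_practice_gap": rng.integers(0, 2, 224),
})

fit = smf.logit("poor_efficacy ~ not_interested + theory_practice_gap", data=df).fit()
aor = pd.DataFrame({
    "AOR":      np.exp(fit.params),
    "CI 2.5%":  np.exp(fit.conf_int()[0]),
    "CI 97.5%": np.exp(fit.conf_int()[1]),
})
print(aor)
```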
Procedia PDF Downloads 1712842 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is a widely used imaging modality for diagnosing breast cancer, even in asymptomatic women. Owing to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve radiologists’ workload by providing a preliminary opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. Therefore, a pre-processing step is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of the images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method, whose seed was placed on the boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods in relation to the area (mm²) of the segmented pectoral muscle, showing data within the 95% confidence interval and supporting the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for subsequent quantification of the fibroglandular tissue volume present in the breast.Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle
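For illustration, the pipeline described above (thresholding, a Hough-transform estimate of the pectoral-muscle boundary, an active contour seeded on that line, and a Jaccard comparison against a manual reference) can be sketched as follows. The authors implemented their algorithm in MATLAB; this minimal Python/scikit-image sketch uses a synthetic image, and all parameter values are illustrative assumptions rather than the authors' settings.

```python
# A minimal sketch, assuming a synthetic image: thresholding, a Hough-transform
# estimate of the pectoral-muscle boundary, an active contour seeded on that
# line, and a Jaccard comparison against a reference mask.
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks
from skimage.segmentation import active_contour

# Synthetic MLO-like image: a bright triangular "pectoral muscle" in one corner.
rows, cols = 256, 256
rr, cc = np.mgrid[0:rows, 0:cols]
muscle_ref = cc < (100 - 0.5 * rr)                    # reference (ground-truth) mask
img = np.where(muscle_ref, 0.9, 0.2)
img = gaussian(img + 0.05 * np.random.default_rng(0).random((rows, cols)), sigma=2)

# 1) Thresholding to suppress non-biological (background) information.
foreground = img > threshold_otsu(img)

# 2) Hough transform on edges to find the dominant straight boundary.
edges = canny(np.where(foreground, img, 0.0), sigma=2)
hspace, angles, dists = hough_line(edges)
_, peak_angles, peak_dists = hough_line_peaks(hspace, angles, dists, num_peaks=1)
theta, rho = peak_angles[0], peak_dists[0]

# 3) Seed an active contour (snake) along the Hough line and let it refine.
r_seed = np.linspace(0, rows - 1, 200)
c_seed = np.clip((rho - r_seed * np.sin(theta)) / np.cos(theta), 0, cols - 1)
snake = active_contour(img, np.column_stack([r_seed, c_seed]),
                       alpha=0.01, beta=0.1, gamma=0.01,
                       boundary_condition="fixed")

# 4) Pixels left of the refined boundary form the segmented muscle; compare it
#    with the reference mask using the Jaccard index.
order = np.argsort(snake[:, 0])
boundary_col = np.interp(np.arange(rows), snake[order, 0], snake[order, 1])
muscle_seg = cc < boundary_col[:, None]
jaccard = (muscle_seg & muscle_ref).sum() / (muscle_seg | muscle_ref).sum()
print(f"Jaccard similarity vs. reference mask: {jaccard:.3f}")
```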
Procedia PDF Downloads 3502841 American Slang: Perception and Connotations – Issues of Translation
Authors: Lison Carlier
Abstract:
The English language that is taught in school or used in media nowadays is defined as 'standard English,' although unstandardized Englishes, or 'parallel' Englishes, are practiced throughout the world. The existence of these 'parallel' Englishes has challenged standardization by imposing their own specific vocabulary or grammar. These non-standard languages tend to be regarded as inferior and, therefore, pose a problem regarding their translation. In the USA, 'slanguage', or slang, is a good example of a 'parallel' language. It consists of a particular set of vocabulary, used mostly in speech and rarely in writing. Labelled as vulgar and often reduced to an urban language spoken by young people from lower classes, slanguage – often the language first spoken between youths – remains the most common language used in the English-speaking world. Moreover, it appears that the prime meaning of 'informal' (as in an informal language) – a language that is spoken with persons the speaker knows – has been set aside and replaced in the general mind by the idea of vulgarity and inappropriateness, when in fact informality is a sign of intimacy, not of vulgarity. When it comes to translating American slang, the main problem a translator encounters is the image and the cultural background usually associated with this 'parallel' language. Indeed, one will unwittingly be predisposed to categorize a speaker of a 'parallel' language as belonging to a particular group of people. The way one sees a speaker using it is paramount, and needs to be transposed into the target language. This paper conducts an analysis of American slang – its use, perception and the image it gives of its speakers – and its translation into French, using the book Is Everyone Hanging Out Without Me? (and other concerns) by way of example. In her autobiography/personal essay book, comedy writer, actress and author Mindy Kaling writes in a very familiar English, including slang, which contributes to the construction of her own voice and style and enables a deeper connection with her readers.Keywords: translation, English, slang, French
Procedia PDF Downloads 3182840 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarized, skew correction is required before further image analysis. Research effort has been devoted to this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on the basis of performance criteria; the most important are the accuracy of skew angle detection, the range of detectable skew angles, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, its level of accuracy depends largely on how fine the angular step size is, and finer steps consume more time and memory, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the conflict between memory space, running time and accuracy. Our algorithm starts with an angle estimate accurate to zero decimal places (whole degrees) obtained with the standard Hough Transform, achieving minimal running time and memory but limited accuracy. Then, to increase accuracy, if the angle estimated by the basic algorithm is x degrees, the basic algorithm is rerun over a narrow range around x degrees at an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction algorithm for text images is implemented in MATLAB. The memory space and processing time are also tabulated under the assumption of skew angles between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
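For illustration, the coarse-to-fine idea described above can be sketched as follows: a whole-degree Hough sweep estimates the skew angle, and the transform is then rerun over a narrow window around that estimate at one-decimal-place resolution, and so on. The authors worked in MATLAB; this minimal Python/scikit-image sketch uses a synthetic 'document' of sloped baselines, and the scoring rule (peak accumulator count) and all parameter values are illustrative assumptions.

```python
# A minimal sketch, assuming a synthetic document: estimate the skew angle with
# a whole-degree Hough sweep, then rerun the transform over a narrow window
# around that estimate at one-decimal-place resolution (and so on).
import numpy as np
from skimage.transform import hough_line

def hough_peak_theta(binary_edges, center_deg, span_deg, step_deg):
    """Return the Hough normal angle (degrees) with the highest accumulator peak."""
    thetas = np.deg2rad(np.arange(center_deg - span_deg,
                                  center_deg + span_deg + step_deg, step_deg))
    hspace, angles, _ = hough_line(binary_edges, theta=thetas)
    peak = np.unravel_index(np.argmax(hspace), hspace.shape)
    return np.rad2deg(angles[peak[1]])

def coarse_to_fine_skew(binary_edges, decimals=1):
    """Coarse-to-fine skew estimate: whole degrees first, then one finer pass per decimal."""
    # Text baselines are near-horizontal, so their Hough normal lies near 90 degrees.
    center, span, step = 90.0, 45.0, 1.0
    for _ in range(decimals + 1):
        center = hough_peak_theta(binary_edges, center, span, step)
        span, step = step, step / 10.0        # narrow the search window each pass
    return 90.0 - center                      # convert normal angle to skew angle

# Synthetic "text baselines" drawn with a known skew (rows decrease as columns grow).
true_skew = 3.7
doc = np.zeros((300, 300), dtype=bool)
col_idx = np.arange(40, 260)
for baseline in range(60, 260, 20):
    row_idx = np.round(baseline - np.tan(np.deg2rad(true_skew)) * (col_idx - 40)).astype(int)
    keep = (row_idx >= 0) & (row_idx < 300)
    doc[row_idx[keep], col_idx[keep]] = True

print("estimated skew:", coarse_to_fine_skew(doc), "degrees; true skew:", true_skew)
```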
Procedia PDF Downloads 159