Search results for: quantum image encryption
1868 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis
Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal
Abstract:
Background subtraction is a widely used technique for detecting moving objects in video surveillance: it extracts the foreground objects from a reference background image. A good background subtraction algorithm faces many challenges, such as changes in illumination, dynamic backgrounds (swinging leaves, rain, snow), and changes in the background itself, for example vehicles that move and then stop. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a developed fast independent component analysis (fast-ICA) algorithm to separate background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted through a matrix calculation and a search for an optimum non-quadratic function so that it becomes faster and more robust. Moreover, in order to estimate the de-mixing matrix and the denoising de-mixing matrix parameters, we propose to convert all images to the YCrCb color space, where the luma component Y (the brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available datasets CDnet 2012 and CDnet 2014, and experimental results show that our algorithm detects moving objects competently and accurately in challenging conditions, compared to other methods in the literature, in terms of quantitative and qualitative evaluations at a real-time frame rate. Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix
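A minimal sketch of the general idea described in this abstract: separating moving objects from the background with FastICA applied to the luma (Y) channel of YCrCb frames. It uses scikit-learn's FastICA and OpenCV rather than the authors' adapted algorithm; the number of components, the contrast function, and the Otsu thresholding are illustrative assumptions.

```python
# Illustrative sketch (not the authors' adapted algorithm): separating a static
# background from moving foreground with FastICA on the luma (Y) channel.
import cv2
import numpy as np
from sklearn.decomposition import FastICA

def foreground_masks(frames_bgr, n_components=3, fun="logcosh"):
    # Convert each frame to YCrCb and keep only the Y (luma) channel.
    luma = [cv2.cvtColor(f, cv2.COLOR_BGR2YCrCb)[:, :, 0] for f in frames_bgr]
    h, w = luma[0].shape
    X = np.stack([y.ravel() for y in luma]).astype(np.float64)   # (frames, pixels)

    # FastICA separates the mixed observations into independent sources;
    # 'fun' selects the non-quadratic contrast function (logcosh, exp, cube).
    ica = FastICA(n_components=n_components, fun=fun, random_state=0)
    sources = ica.fit_transform(X.T)                              # (pixels, components)

    masks = []
    for c in range(n_components):
        comp = np.abs(sources[:, c]).reshape(h, w)
        comp = cv2.normalize(comp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, mask = cv2.threshold(comp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        masks.append(mask)
    return masks   # one component typically isolates the moving objects
```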
Procedia PDF Downloads 95
1867 Measuring Corporate Brand Loyalties in Business Markets: A Case for Caution
Authors: Niklas Bondesson
Abstract:
Purpose: This paper attempts to examine how different facets of attitudinal brand loyalty are determined by different brand image elements in business markets. Design/Methodology/Approach: Statistical analysis is employed to data from a web survey, covering 226 professional packaging buyers in eight countries. Findings: The results reveal that different brand loyalty facets have different antecedents. Affective brand loyalties (or loyalty 'feelings') are mainly driven by customer associations to service relationships, whereas customers’ loyalty intentions (to purchase and recommend a brand) are triggered by associations to the general reputation of the company. The findings also indicate that willingness to pay a price premium is a distinct form of loyalty, with unique determinants. Research implications: Theoretically, the paper suggests that corporate B2B brand loyalty needs to be conceptualised with more refinement than has been done in extant B2B branding work. Methodologically, the paper highlights that single-item approaches can be fruitful when measuring B2B brand loyalty, and that multi-item scales can conceal important nuances in terms of understanding why customers are loyal. Practical implications: The idea of a loyalty 'silver metric' is an attractive idea, but this study indicates that firms who rely too much on one single type of brand loyalty risk to miss important building blocks. Originality/Value/Contribution: The major contribution is a more multi-faceted conceptualisation, and measurement, of corporate B2B brand loyalty and its brand image determinants than extant work has provided.Keywords: brand equity, business-to-business branding, industrial marketing, buying behaviour
Procedia PDF Downloads 410
1866 A Process of Forming a Single Competitive Factor in the Digital Camera Industry
Authors: Kiyohiro Yamazaki
Abstract:
This paper considers a forming process of a single competitive factor in the digital camera industry from the viewpoint of product platform. To make product development easier for companies and to increase product introduction ratios, development efforts concentrate on improving and strengthening certain product attributes, and it is born in the process that the product platform is formed continuously. It is pointed out that the formation of this product platform raises product development efficiency of individual companies, but on the other hand, it has a trade-off relationship of causing unification of competitive factors in the whole industry. This research tries to analyze product specification data which were collected from the web page of digital camera companies. Specifically, this research collected all product specification data released in Japan from 1995 to 2003 and analyzed the composition of image sensor and optical lens; and it identified product platforms shared by multiple products and discussed their application. As a result, this research found that the product platformation was born in the development of the standard product for major market segmentation. Every major company has made product platforms of image sensors and optical lenses, and as a result, this research found that the competitive factors were unified in the entire industry throughout product platformation. In other words, this product platformation brought product development efficiency of individual firms; however, it also caused industrial competition factors to be unified in the industry.Keywords: digital camera industry, product evolution trajectory, product platform, unification of competitive factors
Procedia PDF Downloads 156
1865 Spin Coherent States Without Squeezing
Authors: A. Dehghani, S. Shirin
Abstract:
We propose in this article a new configuration of quantum states, |α, β> := |α>×|β>. Which are composed of vector products of two different copies of spin coherent states, |α> and |β>. Some mathematical as well as physical properties of such states are discussed. For instance, it has been shown that the cross products of two coherent vectors remain coherent again. They admit a resolution of the identity through positive definite measures on the complex plane. They represent packets similar to the true coherent states, in other words we would not expect to take spin squeezing in any of the field quadratures Lˆx, Lˆy and Lˆz. Depending on the particular choice of parameters in the above scenarios, they can be converted into the so-called Dicke states which minimize the uncertainty relations of each pair of the angular momentum components.Keywords: vector (Cross-)products, minimum uncertainty, angular momentum, measurement, Dicke states
Procedia PDF Downloads 411
1864 Automated Video Surveillance System for Detection of Suspicious Activities during Academic Offline Examination
Authors: G. Sandhya Devi, G. Suvarna Kumar, S. Chandini
Abstract:
This research work aims to develop a system that will analyze and identify students who indulge in malpractices/suspicious activities during the course of an academic offline examination. Automated Video Surveillance provides an optimal solution which helps in monitoring the students and identifying the malpractice event immediately. This work is organized into three modules. The first module deals with performing an impersonation check using a PCA-based face recognition method which is done by cross checking his profile with the database. The presence or absence of the student is even determined in this module by implementing an image registration technique wherein a grid is formed by considering all the images registered using the frontal camera at the determined positions. Second, detecting such facial malpractices in which a student gets involved in conversation with another, trying to obtain unauthorized information etc., based on the threshold range evaluated by considering his/her mouth state whether open or closed. The third module deals with identification of unauthorized material or gadgets used in the examination hall by training the positive samples of the object through various stages. Here, a top view camera feed is analyzed to detect the suspicious activities. The system automatically alerts the administration when any suspicious activities are identified, thereby reducing the error rate caused due to manual monitoring. This work is an improvement over our previous work published in identifying suspicious activities done by examinees in an offline examination.Keywords: impersonation, image registration, incrimination, object detection, threshold evaluation
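A hedged sketch of the impersonation-check idea (eigenfaces-style PCA matching against an enrolled profile database). The function names, the number of components, and the distance threshold are assumptions for illustration, not the paper's implementation.

```python
# Minimal eigenfaces-style sketch for the impersonation check: project enrolled
# and probe face crops onto a PCA subspace and match by nearest neighbour.
import numpy as np
from sklearn.decomposition import PCA

def enroll(face_images, n_components=50):
    # face_images: list of equally sized grayscale face crops (one per student).
    X = np.stack([img.ravel() for img in face_images]).astype(np.float64)
    pca = PCA(n_components=n_components, whiten=True).fit(X)
    return pca, pca.transform(X)                       # projected gallery

def verify(probe_image, pca, gallery_proj, gallery_ids, threshold=12.0):
    p = pca.transform(probe_image.ravel()[None, :].astype(np.float64))
    dists = np.linalg.norm(gallery_proj - p, axis=1)
    best = int(np.argmin(dists))
    # A below-threshold distance means the examinee matches the enrolled profile.
    return (gallery_ids[best], True) if dists[best] < threshold else (None, False)
```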
Procedia PDF Downloads 228
1863 A Proposal to Tackle Security Challenges of Distributed Systems in the Healthcare Sector
Authors: Ang Chia Hong, Julian Khoo Xubin, Burra Venkata Durga Kumar
Abstract:
Distributed systems offer many benefits to the healthcare industry. From big data analysis to business intelligence, the increased computational power and efficiency from distributed systems serve as an invaluable resource in the healthcare sector to utilize. However, as the usage of these distributed systems increases, many issues arise. The main focus of this paper will be on security issues. Many security issues stem from distributed systems in the healthcare industry, particularly information security. The data of people is especially sensitive in the healthcare industry. If important information gets leaked (Eg. IC, credit card number, address, etc.), a person’s identity, financial status, and safety might get compromised. This results in the responsible organization losing a lot of money in compensating these people and even more resources expended trying to fix the fault. Therefore, a framework for a blockchain-based healthcare data management system for healthcare was proposed. In this framework, the usage of a blockchain network is explored to store the encryption key of the patient’s data. As for the actual data, it is encrypted and its encrypted data, called ciphertext, is stored in a cloud storage platform. Furthermore, there are some issues that have to be emphasized and tackled for future improvements, such as a multi-user scheme that could be proposed, authentication issues that have to be tackled or migrating the backend processes into the blockchain network. Due to the nature of blockchain technology, the data will be tamper-proof, and its read-only function can only be accessed by authorized users such as doctors and nurses. This guarantees the confidentiality and immutability of the patient’s data.Keywords: distributed, healthcare, efficiency, security, blockchain, confidentiality and immutability
Procedia PDF Downloads 183
1862 Preparation of Wireless Networks and Security; Challenges in Efficient Accession of Encrypted Data in Healthcare
Authors: M. Zayoud, S. Oueida, S. Ionescu, P. AbiChar
Abstract:
Background: Wireless sensor network is encompassed of diversified tools of information technology, which is widely applied in a range of domains, including military surveillance, weather forecasting, and earthquake forecasting. Strengthened grounds are always developed for wireless sensor networks, which usually emerges security issues during professional application. Thus, essential technological tools are necessary to be assessed for secure aggregation of data. Moreover, such practices have to be incorporated in the healthcare practices that shall be serving in the best of the mutual interest Objective: Aggregation of encrypted data has been assessed through homomorphic stream cipher to assure its effectiveness along with providing the optimum solutions to the field of healthcare. Methods: An experimental design has been incorporated, which utilized newly developed cipher along with CPU-constrained devices. Modular additions have also been employed to evaluate the nature of aggregated data. The processes of homomorphic stream cipher have been highlighted through different sensors and modular additions. Results: Homomorphic stream cipher has been recognized as simple and secure process, which has allowed efficient aggregation of encrypted data. In addition, the application has led its way to the improvisation of the healthcare practices. Statistical values can be easily computed through the aggregation on the basis of selected cipher. Sensed data in accordance with variance, mean, and standard deviation has also been computed through the selected tool. Conclusion: It can be concluded that homomorphic stream cipher can be an ideal tool for appropriate aggregation of data. Alongside, it shall also provide the best solutions to the healthcare sector.Keywords: aggregation, cipher, homomorphic stream, encryption
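The aggregation idea can be illustrated with a toy additively homomorphic stream cipher in the spirit of the approach described: each reading is masked with a keystream value modulo M, ciphertexts are summed, and the aggregate is unmasked with the summed keys. The modulus, key generation, and data are illustrative assumptions; sums of squares can be aggregated the same way to obtain variance and standard deviation.

```python
# Sketch of additively homomorphic aggregation with a modular stream cipher.
import secrets

M = 2**32          # modulus chosen large enough to hold the aggregate sum

def keystream(n):
    return [secrets.randbelow(M) for _ in range(n)]

def encrypt(readings, keys):
    return [(m + k) % M for m, k in zip(readings, keys)]

def aggregate(ciphertexts):
    # Addition of ciphertexts corresponds to addition of the plaintexts.
    return sum(ciphertexts) % M

def decrypt_sum(agg, keys):
    return (agg - sum(keys)) % M

readings = [72, 75, 71, 78]             # e.g. sensed heart rates from four nodes
keys = keystream(len(readings))
agg = aggregate(encrypt(readings, keys))
total = decrypt_sum(agg, keys)
print(total, total / len(readings))     # sum and mean recovered from ciphertexts only
```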
Procedia PDF Downloads 258
1861 A Functional Analysis of a Political Leader in Terms of Marketing
Authors: Aşina Gülerarslan, M. Faik Özdengül
Abstract:
The new economic, social and political world order has led to the emergence of a wide range of persuasion strategies and practices based on an ever expanding marketing axis that involves organizations, ideas and persons as well as products and services. It is seen that since the 1990's, a wide variety of competitive marketing ideas have been offered systematically to target audiences in the field of politics as in other fields. When the components of marketing are taken into consideration, all kinds of communication efforts involving “political leaders”, who are conceptualized as products in terms of political marketing, serve a process of social persuasion, which cannot be restricted to election periods only, and a manageable “image”. In this context, image, which is concerned with how the political product is perceived, involves not only the political discourses shared with the public but also all kinds of biographical information about the leader, the leader’s specific way of living and routines and his/her attitudes and behaviors in their private lives, and all these are regarded as components of the “product image”. While on the one hand the leader’s verbal or supra-verbal references serve the way the “spirit of the product” is perceived –just as in brand positioning- they also show their self-esteem levels, in other words how they perceive themselves on the other hand. Indeed, their self-esteem levels are evaluated in three fundamental categories in the “Functional Analysis”, namely parent, child and adult, and it is revealed that the words, tone of voice and body language a person uses makes it easy to understand at what self-esteem level that person is. In this context, words, tone of voice and body language, which provide important clues as to the “self” of the person, are also an indication of how political leaders evaluate both “themselves” and “the mass/audience” in the communication they establish with their audiences. When the matter is taken from the perspective of Turkey, the levels of self-esteem in the relationships that the political leaders establish with the masses are also important in revealing how our society is seen from the perspective of a specific leader. Since the leader is a part of the marketing strategy of a political party as a product, this evaluation is significant in terms of the forms of relationships between political institutions in our country with the society. In this study, the self-esteem level in the documentary entitled “Master’s Story”, where Recep Tayyip Erdoğan’s life history is told, is analyzed in the context of words, tone of voice and body language. Within the scope of the study, at what level of self-esteem Recep Tayyip Erdoğan was in the “Master’s Story”, a documentary broadcast on Beyaz TV, was investigated using the content analysis method. First, based on the Functional Analysis Literature, a transactional approach scale was created regarding parent, adult and child self-esteem levels. On the basis of this scale, the prime minister’s self-esteem level was determined in three basic groups, namely “tone of voice”, “the words he used” and “body language”. Descriptive analyses were made to the data within the framework of these criteria and at what self-esteem level the prime minister spoke throughout the documentary was revealed.Keywords: political marketing, leader image, level of self-esteem, transactional approach
Procedia PDF Downloads 334
1860 Near-Infrared Hyperspectral Imaging Spectroscopy to Detect Microplastics and Pieces of Plastic in Almond Flour
Authors: H. Apaza, L. Chévez, H. Loro
Abstract:
Plastic and microplastic pollution in the human food chain is a serious problem for human health that requires more elaborate techniques capable of identifying their presence in different kinds of food. Hyperspectral imaging is an optical technique that can detect the presence of different elements in an image and can therefore be used to detect plastics and microplastics in a scene. Doing so requires statistical techniques that need to be evaluated and compared in order to find the most efficient ones. In this work, two problems related to the presence of plastics are addressed: the first is to detect and identify pieces of plastic immersed in almond seeds, and the second is to detect and quantify microplastic in almond flour. To do this, we analyze hyperspectral images taken in the range of 900 to 1700 nm using four hyperspectral unmixing techniques: least squares unmixing (LSU), non-negatively constrained least squares unmixing (NCLSU), fully constrained least squares unmixing (FCLSU), and scaled constrained least squares unmixing (SCLSU). The NCLSU, FCLSU, and SCLSU techniques manage to find the region where the plastic is located and also quantify the amount of microplastic contained in the almond flour. The SCLSU technique estimated a 13.03% abundance of microplastics and 86.97% of almond flour, compared to the 16.66% microplastics and 83.33% almond flour prepared for the experiment. The results show the feasibility of applying near-infrared hyperspectral image analysis to the detection of plastic contaminants in food. Keywords: food, plastic, microplastic, NIR hyperspectral imaging, unmixing
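A small sketch of the constrained least-squares unmixing step, assuming the endmember spectra (e.g. almond flour and microplastic references) are already known. SciPy's nnls gives the non-negativity-constrained solution (NCLSU); renormalising the abundances to sum to one is used here as a simple stand-in for the fully constrained variant.

```python
# Non-negativity-constrained linear unmixing of one pixel spectrum.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, endmembers):
    """spectrum: (bands,); endmembers: (bands, n_endmembers), one column per
    reference material (illustrative names, not the paper's data)."""
    abundances, _ = nnls(endmembers, spectrum)       # non-negative least squares
    s = abundances.sum()
    return abundances / s if s > 0 else abundances   # sum-to-one approximation

# Toy example with two synthetic 4-band endmembers.
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.3, 0.7],
              [0.2, 0.8]])
pixel = 0.87 * E[:, 0] + 0.13 * E[:, 1]
print(unmix_pixel(pixel, E))   # roughly [0.87, 0.13]
```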
Procedia PDF Downloads 127
1859 Two Fold Dimensional Analysis of Post-Employment Dissonance in Employer Branding Framework of IT SMEs
Authors: J. Janani, S. Gomathi
Abstract:
Although the new economy is endowed with an ample talent pool, the corporate world faces a persistent mismatch between talent demand and supply, and employer branding is relevant as a way to combat this crisis. Employer branding is gaining popularity in large companies, especially IT companies, but awareness of it remains low among IT SMEs (small and medium-sized enterprises). A wide range of analyses has been carried out on employer branding from different perspectives and in different industries, yet the hidden factor behind employer branding, namely post-employment dissonance, has been given little importance in the research picture. The present study examines employer branding in terms of employer image and organizational identity. It focuses on two-fold dimensional branding initiatives, namely job offer attributes and organizational attractiveness. The study depicts the dissonance levels, and their variation across these initiatives, reported by former employees, the post-employment dissonance of present employees in IT SMEs, and the perception of the employer that prospective employees hold toward the stated branding initiatives. Demographic factors such as generational factors (Gen X and Gen Y) and career stages are a major focus of the study. The study encourages IT SMEs to strengthen their employer branding effectively and efficiently by implementing varied strategies, which will help them enhance their talent pool and eventually result in talent attraction and retention. Keywords: employer image, organizational identity, post-employment dissonance, job offer attributes, organizational attractiveness, talent pool, career stages, generational factors, information technology, SMEs
Procedia PDF Downloads 496
1858 A Comparative Study on Deep Learning Models for Pneumonia Detection
Authors: Hichem Sassi
Abstract:
Pneumonia, being a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing mortality associated with pneumonia. Presently, X-ray diagnosis stands out as a reasonably effective method. However, the manual scrutiny of a patient's X-ray chest radiograph by a proficient practitioner usually requires 5 to 15 minutes. In situations where cases are concentrated, this places immense pressure on clinicians for timely diagnosis. Relying solely on the visual acumen of imaging doctors proves to be inefficient, particularly given the low speed of manual analysis. Therefore, the integration of artificial intelligence into the clinical image diagnosis of pneumonia becomes imperative. Additionally, AI recognition is notably rapid, with convolutional neural networks (CNNs) demonstrating superior performance compared to human counterparts in image identification tasks. To conduct our study, we utilized a dataset comprising chest X-ray images obtained from Kaggle, encompassing a total of 5216 training images and 624 test images, categorized into two classes: normal and pneumonia. Employing five mainstream network algorithms, we undertook a comprehensive analysis to classify these diseases within the dataset, subsequently comparing the results. The integration of artificial intelligence, particularly through improved network architectures, stands as a transformative step towards more efficient and accurate clinical diagnoses across various medical domains.Keywords: deep learning, computer vision, pneumonia, models, comparative study
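For orientation, a minimal baseline CNN for the two-class chest X-ray task might look like the sketch below (Keras). This is not one of the five networks compared in the paper; the directory layout, input size, and architecture are assumptions.

```python
# Illustrative baseline CNN for normal-vs-pneumonia chest X-ray classification.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=(150, 150), batch_size=32)   # hypothetical path
test_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/test", image_size=(150, 150), batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(150, 150, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # 0 = normal, 1 = pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=5)
```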
Procedia PDF Downloads 64
1857 Challenges and Recommendations for Medical Device Tracking and Traceability in Singapore: A Focus on Nursing Practices
Authors: Zhuang Yiwen
Abstract:
The paper examines the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. One of the major challenges identified is the lack of a standard coding system for medical devices, which makes it difficult to track them effectively. The paper suggests the use of the Unique Device Identifier (UDI) as a single standard for medical devices to improve tracking and reduce errors. The paper also explores the use of barcoding and image recognition to identify and document medical devices in nursing practices. In nursing practices, the use of barcodes for identifying medical devices is common. However, the information contained in these barcodes is often inconsistent, making it challenging to identify which segment contains the model identifier. Moreover, the use of barcodes may be improved with the use of UDI, but many subsidized accessories may still lack barcodes. The paper suggests that the readiness for UDI and barcode standardization requires standardized information, fields, and logic in electronic medical record (EMR), operating theatre (OT), and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. Nursing workflow and data flow also need to be taken into account. The paper also explores the use of image recognition, specifically the Tesseract OCR engine, to identify and document implants in public hospitals due to limitations in barcode scanning. The study found that the solution requires an implant information database and checking output against the database. The solution also requires customization of the algorithm, cropping out objects affecting text recognition, and applying adjustments. The solution requires additional resources and costs for a mobile/hardware device, which may pose space constraints and require maintenance of sterile criteria. The integration with EMR is also necessary, and the solution require changes in the user's workflow. The paper suggests that the long-term use of Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) as a supporting terminology to improve clinical documentation and data exchange in healthcare. SNOMED CT provides a standardized way of documenting and sharing clinical information with respect to procedure, patient and device documentation, which can facilitate interoperability and data exchange. In conclusion, the paper highlights the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. The paper suggests the use of UDI and barcode standardization to improve tracking and reduce errors. It also explores the use of image recognition to identify and document medical devices in nursing practices. The paper emphasizes the importance of standardized information, fields, and logic in EMR, OT, and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. These recommendations could help the Singapore healthcare system to improve tracking and traceability of medical devices and ultimately enhance patient safety.Keywords: medical device tracking, unique device identifier, barcoding and image recognition, systematized nomenclature of medicine clinical terms
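A simplified sketch of parsing a GS1-style UDI barcode payload into its segments, which is the kind of selective parsing the abstract calls for. Only a few common Application Identifiers are handled, and FNC1 handling is reduced to a group-separator character; a production parser would need the full GS1 specification, so treat the table and example as assumptions.

```python
# Simplified GS1-128-style UDI parsing (illustrative subset of Application Identifiers).
AI_TABLE = {
    "01": ("gtin", 14),           # device identifier (GTIN-14), fixed length
    "17": ("expiry_yymmdd", 6),   # expiration date, fixed length
    "10": ("lot", None),          # variable length, GS/FNC1 terminated
    "21": ("serial", None),       # variable length
}
GS = "\x1d"   # group separator standing in for FNC1

def parse_udi(data: str) -> dict:
    out, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        name, length = AI_TABLE[ai]
        i += 2
        if length is not None:
            out[name], i = data[i:i + length], i + length
        else:
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
            out[name], i = data[i:end], end + 1
    return out

print(parse_udi("01095011015300031714070410A213B1"))
# -> {'gtin': '09501101530003', 'expiry_yymmdd': '140704', 'lot': 'A213B1'}
```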
Procedia PDF Downloads 76
1856 Investigating Kinetics and Mathematical Modeling of Batch Clarification Process for Non-Centrifugal Sugar Production
Authors: Divya Vats, Sanjay Mahajani
Abstract:
The clarification of sugarcane juice plays a pivotal role in the production of non-centrifugal sugar (NCS), profoundly influencing the quality of the final NCS product. In this study, we have investigated the kinetics and mathematical modeling of the batch clarification process. The turbidity of the clarified cane juice (NTU) emerges as the determinant of the end product’s color. Moreover, this parameter underscores the significance of considering other variables as performance indicators for accessing the efficacy of the clarification process. Temperature-controlled experiments were meticulously conducted in a laboratory-scale batch mode. The primary objective was to discern the essential and optimized parameters crucial for augmenting the clarity of cane juice. Additionally, we explored the impact of pH and flocculant loading on the kinetics. Particle Image Velocimetry (PIV) is employed to comprehend the particle-particle and fluid-particle interaction. This technique facilitated a comprehensive understanding, paving the way for the subsequent multiphase computational fluid dynamics (CFD) simulations using the Eulerian-Lagrangian approach in the Ansys fluent. Impressively, these simulations accurately replicated comparable velocity profiles. The final mechanism of this study helps to make a mathematical model and presents a valuable framework for transitioning from the traditional batch process to a continuous process. The ultimate aim is to attain heightened productivity and unwavering consistency in product quality.Keywords: non-centrifugal sugar, particle image velocimetry, computational fluid dynamics, mathematical modeling, turbidity
Procedia PDF Downloads 70
1855 Substitutional Inference in Poetry: Word Choice Substitutions Craft Multiple Meanings by Inference
Authors: J. Marie Hicks
Abstract:
The art of the poetic conjoins meaning and symbolism with imagery and rhythm. Perhaps the reader might read this opening sentence as 'The art of the poetic combines meaning and symbolism with imagery and rhythm,' which holds a similar message, but is not quite the same. The reader understands that these factors are combined in this literary form, but to gain a sense of the conjoining of these factors, the reader is forced to consider that these aspects of poetry are not simply combined, but actually adjoin, abut, skirt, or touch in the poetic form. This alternative word choice is an example of substitutional inference. Poetry is, ostensibly, a literary form where language is used precisely or creatively to evoke specific images or emotions for the reader. Often, the reader can predict a coming rhyme or descriptive word choice in a poem, based on previous rhyming pattern or earlier imagery in the poem. However, there are instances when the poet uses an unexpected word choice to create multiple meanings and connections. In these cases, the reader is presented with an unusual phrase or image, requiring that they think about what that image is meant to suggest, and their mind also suggests the word they expected, creating a second, overlying image or meaning. This is what is meant by the term 'substitutional inference.' This is different than simply using a double entendre, a word or phrase that has two meanings, often one complementary and the other disparaging, or one that is innocuous and the other suggestive. In substitutional inference, the poet utilizes an unanticipated word that is either visually or phonetically similar to the expected word, provoking the reader to work to understand the poetic phrase as written, while unconsciously incorporating the meaning of the line as anticipated. In other words, by virtue of a word substitution, an inference of the logical word choice is imparted to the reader, while they are seeking to rationalize the word that was actually used. There is a substitutional inference of meaning created by the alternate word choice. For example, Louise Bogan, 4th Poet Laureate of the United States, used substitutional inference in the form of homonyms, malapropisms, and other unusual word choices in a number of her poems, lending depth and greater complexity, while actively engaging her readers intellectually with her poetry. Substitutional inference not only adds complexity to the potential interpretations of Bogan’s poetry, as well as the poetry of others, but provided a method for writers to infuse additional meanings into their work, thus expressing more information in a compact format. Additionally, this nuancing enriches the poetic experience for the reader, who can enjoy the poem superficially as written, or on a deeper level exploring gradations of meaning.Keywords: poetic inference, poetic word play, substitutional inference, word substitution
Procedia PDF Downloads 236
1854 Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques
Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri
Abstract:
Digitization of urban features is a tedious and time-consuming process when done manually. In addition to this problem, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper makes an attempt to classify urban objects in the satellite image using object-oriented classification techniques in which various classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. Building layer obtained as a result of object-oriented classification along with already available building layers was used. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery, spatial technology along with logical reasoning and mathematical considerations. The results clearly depict the ability of Remote Sensing and GIS to solve complex problems in urban scenarios like studying urban sprawl and identification of more complex features in an urban area like high-rise buildings and multi-dwelling units. Object-Oriented Technique has been proven to be effective and has yielded an overall efficiency of 80 percent in the classification of high-rise buildings.Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology
Procedia PDF Downloads 154
1853 Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit
Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana
Abstract:
Internet of Things (IoT) devices and edge computing have become some of the most significant and widely discussed innovations, with the potential to improve and disrupt traditional business and industry alike. New cliff-hanging challenges such as the COVID-19 pandemic have endangered the workforce and business processes, and the drastically changed business landscape left in the pandemic's aftermath is compounded by the looming threats of a global energy crisis, global warming, and increasingly heated global politics that risk becoming a new Cold War. Emerging technologies such as edge computing, together with specially designed visual processing units, can therefore represent great opportunities for business. The literature is reviewed on how the Internet of Things and this disruptive wave will affect business, explaining how these new events impact current operations and how businesses need to adapt to changes in the market and the world. An example benchmarking test is presented for newer consumer-marketed devices, namely Internet of Things devices equipped with edge computing hardware, showing how they can increase efficiency and reduce the risks posed by current and looming crises. Throughout the paper, we explain the technologies that lead the present landscape and why, in the current situation, they will be innovations that change traditional practice, through brief introductions to cloud computing, edge computing, and the Internet of Things and how they will lead into the future. Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification
Procedia PDF Downloads 153
1852 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images
Authors: U. Datta
Abstract:
The main objective of this study is to find a suitable approach to monitor the land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection method is unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from series of multispectral image data over a long period of time, assuming there is no considerable change during that time period and then compare it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. Statistical composite hypothesis technique is used for estimating pixel based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect the changed pixel from probabilistic estimated model of the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, 8-neighborhood pixels around the pixel under test are also considered. The multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. First and foremost challenge is to get quite a large number of datasets for multivariate distribution modelling. A large number of images are always discarded due to cloud coverage. Due to imperfect modelling there will be high probability of false alarm. Overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection
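A much-simplified, single-band version of the per-pixel testing idea: fit a Gaussian to each pixel's reference time series and flag pixels in a later acquisition whose GLRT-style statistic exceeds a threshold. The multivariate (multispectral) case and the 8-neighbourhood handling described above are omitted; data and threshold are illustrative.

```python
# Simplified per-pixel GLRT-style change test on co-registered images.
import numpy as np

def change_map(reference_stack, new_image, threshold=9.0):
    """reference_stack: (T, H, W) 'no change' images; new_image: (H, W)."""
    mu = reference_stack.mean(axis=0)
    var = reference_stack.var(axis=0) + 1e-6       # avoid division by zero
    # Under a Gaussian model the GLRT statistic reduces to the squared
    # standardised residual; the threshold plays the role of a chi-square quantile.
    glrt = (new_image - mu) ** 2 / var
    return glrt > threshold

rng = np.random.default_rng(0)
ref = rng.normal(100, 5, size=(12, 64, 64))
new = rng.normal(100, 5, size=(64, 64))
new[20:30, 20:30] += 40                            # simulated new structure
print(change_map(ref, new).sum(), "pixels flagged as changed")
```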
Procedia PDF Downloads 133
1851 Determination of ILSS of Composite Materials Using Micromechanical FEA Analysis
Authors: K. Rana, H.A.Saeed, S. Zahir
Abstract:
Interlaminar shear stress (ILSS) is a key parameter that quantifies the properties of composite materials. These properties determine whether a material can be used for a specific purpose, such as aerospace or automotive applications. A modelling approach for the determination of ILSS is presented in this paper. Geometric modelling of the composite material is performed in the TEXGEN software, where the reinforcement, the cured matrix, and their interfaces are modelled separately according to the actual geometry. The mechanical properties of the matrix and reinforcements are also modelled separately, which incorporates the anisotropy of the real-world composite material. The ASTM D2344 test is modelled in ANSYS to obtain the ILSS. In a macroscopic analysis, the model approximates the anisotropy of the material and uses orthotropic properties obtained by applying homogenization techniques; the shear stress analysis in that case does not reflect the actual real-world scenario but rather approximates it. In this paper, the actual geometry and properties of the reinforcement and matrix are modelled to capture the true stress state during the testing of samples as per the ASTM standard. Testing of samples is also performed in order to validate the results. The fibre volume fraction of the yarn is determined by image analysis of the manufactured samples, and this fibre volume fraction data is incorporated into the numerical model to correct the transversely isotropic properties of the yarn. A comparison between experimental and simulated results is presented. Keywords: ILSS, FEA, micromechanical, fibre volume fraction, image analysis
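For reference, ASTM D2344 reduces the short-beam test to an apparent interlaminar shear strength via F_sbs = 0.75·P_m/(b·h), with P_m the maximum load and b, h the specimen width and thickness; a trivial helper is sketched below with illustrative specimen values (the paper itself extracts the stress state from the micromechanical FEA model rather than from this data-reduction formula).

```python
# Short-beam strength (commonly reported as ILSS) from an ASTM D2344 test.
def short_beam_strength(p_max_newton: float, width_mm: float, thickness_mm: float) -> float:
    """Apparent interlaminar shear strength in MPa (N/mm^2)."""
    return 0.75 * p_max_newton / (width_mm * thickness_mm)

# Illustrative specimen: 1850 N maximum load, 6.4 mm wide, 3.2 mm thick.
print(short_beam_strength(p_max_newton=1850.0, width_mm=6.4, thickness_mm=3.2))
```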
Procedia PDF Downloads 371
1850 Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network
Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa łucja Stępień, Faranak Tayefi, Paweł Moskal
Abstract:
This paper develops the transformation of non-image data into 2-dimensional matrices, as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body, as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format and this fact opens a question on whether they can be used for training CNN. In this contribution, the main focus of this paper is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography
Procedia PDF Downloads 141
1849 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. Within the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features into text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives to which the witness can use to hopefully identify a suspect from. This reduces subjectivity in decision making both by the eyewitness and the artist while giving an opportunity for the witness to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus increasing response times and mitigating potentially malicious human intervention. With publically available 'CelebFaces Attributes Dataset' (CelebA) and additionally providing verbal description as training data, the proposed architecture is able to effectively produce facial structures from given text. Word Embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network with the intent of achieving optimal hyperparameters in a fraction the time of a typical brute force approach. With the exception of the ‘CelebA’ training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: The Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). A high percentile output from this performance matrix should attribute to demonstrating the accuracy, in hope of proving that the proposed approach can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has potential to increase the ratio of criminal cases that can be ultimately resolved using eyewitness information gathering.Keywords: RNN, GAN, NLP, facial composition, criminal investigation
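The two evaluation factors named above, SSIM and PSNR, can be computed with scikit-image as sketched below; the placeholder arrays stand in for a generated composite and its ground-truth photograph.

```python
# SSIM and PSNR between a generated facial image and the ground-truth photo.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_pair(generated: np.ndarray, ground_truth: np.ndarray):
    """Both images as uint8 arrays of identical shape (H, W, 3)."""
    ssim = structural_similarity(generated, ground_truth, channel_axis=-1)
    psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=255)
    return ssim, psnr

# Placeholder data standing in for a real composite/photo pair.
gt = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
gen = np.clip(gt.astype(int) + np.random.randint(-20, 20, gt.shape), 0, 255).astype(np.uint8)
print(evaluate_pair(gen, gt))
```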
Procedia PDF Downloads 159
1848 The Influence of Culture on Manifestations of Animus
Authors: Anahit Khananyan
Abstract:
The article summarises the results of long-term Jungian analysis with female clients from Eastern and Asian countries, which belong to collectivist cultures. The goal of the paper is to describe the cultural complex that the author found in the analysis of women of collectivist culture; it was named "the repression of Animus". Generally, C.G. Jung himself and the post-Jungians studied conditions caused by possession by the Animus, whereas the conditions and cases of a repressed Animus, depending on the type of culture and its cultural complexes, have not been widely described. C.G. Jung discovered and recognized the Animus as the second component of a pair of opposites in the psyche of women: femininity and Animus. On the way to individuation, awareness of the manifestations of the Animus plays an important role: understanding the differences between the negative and positive Animus as well as between the Animus and the Shadow, withstanding the tension of the presence of the pair of opposites (femininity and Animus), accepting that tension, finding the balance between them, and reconciling these opposites. All of the above are steps towards the realization of the Animus, its release, and the healing of the psyche. In the paper, the author shares her experience of analyzing women of different collectivist cultures and of recognizing the repressed Animus during analysis. She also describes some peculiarities of upbringing and cultural traditions that reflect the cultural complex of the repression of Animus. This complex is manifested in traditions of girls' upbringing in which an image of a woman with overly developed femininity and an absent or underdeveloped Animus is idealized and encouraged, along with an evaluating attitude towards females who have to correspond to this image and fulfil the role it prescribes in the family and society. Keywords: analysis, cultural complex, animus, manifestation, culture
Procedia PDF Downloads 82
1847 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce radiation dose for chest CT examination by including Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The estimation of effective dose (ED) in both scans with and without mAs modulation was done via multiplication of Dose Length Product (DLP) to a conversion factor. Results were compared to those measured with a CT-Expo software. The size specific dose estimation (SSDE) values were obtained by multiplication of the volume CT dose index (CTDIvol) with a conversion size factor related to the phantom’s effective diameter. Objective assessment of image quality was performed with Signal to Noise Ratio (SNR) measurements in phantom. SPSS software was used for data analysis. Results showed including CARE Dose 4D; ED was lowered by 48.35% and 51.51% using DLP and CT-expo, respectively. In addition, ED ranges between 7.01 mSv and 6.6 mSv in case of standard protocol, while it ranges between 3.62 mSv and 3.2 mSv with TCM. Similar results are found for SSDE; dose was higher without TCM of 16.25 mGy and was lower by 48.8% including TCM. The SNR values calculated were significantly different (p=0.03<0.05). The highest one is measured on images acquired with TCM and reconstructed with Filtered back projection (FBP). In conclusion, this study proves the potential of TCM technique in SSDE and ED reduction and in conserving image quality with high diagnostic reference level for thoracic CT examinations.Keywords: anthropomorphic phantom, computed tomography, CT-expo, radiation dose
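The two dose conversions used in the study can be sketched as below. The chest conversion coefficient k ≈ 0.014 mSv/(mGy·cm) and the size-specific factor are illustrative values; in practice they come from published chest-CT tables and the AAPM Report 204 lookup for the phantom's effective diameter.

```python
# Sketch of the two dose conversions: ED = DLP * k and SSDE = CTDIvol * f_size.
def effective_dose_msv(dlp_mgy_cm: float, k_msv_per_mgy_cm: float = 0.014) -> float:
    """Effective dose from dose-length product and a region-specific k factor."""
    return dlp_mgy_cm * k_msv_per_mgy_cm

def ssde_mgy(ctdi_vol_mgy: float, f_size: float) -> float:
    """Size-specific dose estimate from CTDIvol and the size conversion factor."""
    return ctdi_vol_mgy * f_size

# Illustrative numbers only (not taken from the study's measurements).
print(effective_dose_msv(dlp_mgy_cm=470.0))        # ~6.6 mSv
print(ssde_mgy(ctdi_vol_mgy=11.0, f_size=1.5))     # ~16.5 mGy
```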
Procedia PDF Downloads 219
1846 The Quantum Theory of Music and Human Languages
Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer
Abstract:
The main hypotheses proposed around the definition of the syllable and of music, of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and there lies its interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original, and innovative research thesis. A contribution to the theoretical, musicological, ethno musicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folksong of a world-tone language, you do not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also the exact speaking of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. The experimentation confirming the theorization, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and anatonal music, and deterministic and random music). To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation ( volume 2 of the book). The translation is done (from writing to writing, from writing to speech, and from writing to music). Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you command the machine a melody of blues, jazz, and world music or variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.Keywords: language, music, sciences, quantum entenglement
Procedia PDF Downloads 77
1845 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah
Abstract:
Hyperspectral imagery (HSI) typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Therefore, the semantic interpretation is a challenging task of HSI analysis. We focused in this paper on object classification as HSI semantic interpretation. However, HSI classification still faces some issues, among which are the following: The spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. Therefore, the high number of spectral bands and the low number of training samples pose the problem of the curse of dimensionality. In order to resolve this problem, we propose to introduce the process of dimensionality reduction trying to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on spatial hypergraph embedding model to represent higher order relationships with different weights of the spatial neighbors corresponding to the centroid of pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The presented approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph
Procedia PDF Downloads 305
1844 Soil Salinity from Wastewater Irrigation in Urban Greenery
Authors: H. Nouri, S. Chavoshi Borujeni, S. Anderson, S. Beecham, P. Sutton
Abstract:
The potential risk of salt leaching through wastewater irrigation is of concern for most local governments and city councils. Despite the necessity of salinity monitoring and management in urban greenery, most attention has been on agricultural fields. This study was defined to investigate the capability and feasibility of monitoring and predicting soil salinity using near sensing and remote sensing approaches using EM38 surveys, and high-resolution multispectral image of WorldView3. Veale Gardens within the Adelaide Parklands was selected as the experimental site. The results of the near sensing investigation were validated by testing soil salinity samples in the laboratory. Over 30 band combinations forming salinity indices were tested using image processing techniques. The outcomes of the remote sensing and near sensing approaches were compared to examine whether remotely sensed salinity indicators could map and predict the spatial variation of soil salinity through a potential statistical model. Statistical analysis was undertaken using the Stata 13 statistical package on over 52,000 points. Several regression models were fitted to the data, and the mixed effect modelling was selected the most appropriate one as it takes to account the systematic observation-specific unobserved heterogeneity. Results showed that SAVI (Soil Adjusted Vegetation Index) was the only salinity index that could be considered as a predictor for soil salinity but further investigation is needed. However, near sensing was found as a rapid, practical and realistically accurate approach for salinity mapping of heterogeneous urban vegetation.Keywords: WorldView3, remote sensing, EM38, near sensing, urban green spaces, green smart cities
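For reference, the SAVI index singled out above is computed as SAVI = (NIR − Red)(1 + L)/(NIR + Red + L); a short sketch follows, with the soil-adjustment factor L = 0.5 as a common default and the choice of WorldView-3 bands left as an assumption.

```python
# Soil Adjusted Vegetation Index (SAVI) from red and near-infrared band arrays.
import numpy as np

def savi(nir: np.ndarray, red: np.ndarray, L: float = 0.5) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) * (1.0 + L) / (nir + red + L)

# With a WorldView-3 scene, 'red' and 'nir' would be the corresponding band
# arrays after radiometric correction (band selection here is an assumption).
```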
Procedia PDF Downloads 161
1843 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is a worldwide image modality used to diagnose breast cancer, even in asymptomatic women. Due to its large availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter and intraindividual variability. An automatic objective method to measure breast density could relieve radiologist’s workload by providing a first aid opinion. However, pectoral muscle is a high density tissue, with similar characteristics of fibroglandular tissues. It is consequently hard to automatically quantify mammographic breast density. Therefore, a pre-processing is needed to segment the pectoral muscle which may erroneously be quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique incidence digital mammography from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed, in Matlab® platform, for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle of mammograms. Firstly, it was applied thresholding technique to remove non-biological information from image. Then, the Hough transform is applied, to find the limit of the pectoral muscle, followed by active contour method. Seed of active contour is applied in the limit of pectoral muscle found by Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of segmentation of the proposed method. The Bland-Altman statistics compared both methods in relation to area (mm²) of segmented pectoral muscle. The statistic showed data within the 95% confidence interval, enhancing the accuracy of segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and freely from intra and inter-observer variability. It is concluded that the proposed method may be used reliably to segment pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantifications of fibroglandular tissue volume present in the breast.Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle
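The Jaccard comparison between the automatic and manual segmentations can be sketched as below; the toy masks are placeholders for the real pectoral-muscle masks.

```python
# Jaccard similarity between the automatic and manual pectoral-muscle masks.
import numpy as np

def jaccard_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union else 1.0

auto = np.zeros((100, 100), bool); auto[:40, :30] = True       # toy masks
manual = np.zeros((100, 100), bool); manual[:42, :29] = True
print(jaccard_index(auto, manual))   # > 0.9 would match the paper's reported agreement
```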
Procedia PDF Downloads 350
1842 American Slang: Perception and Connotations – Issues of Translation
Authors: Lison Carlier
Abstract:
The English language that is taught in school or used in media nowadays is defined as 'standard English,' although unstandardized Englishes, or 'parallel' Englishes, are practiced throughout the world. The existence of these 'parallel' Englishes has challenged standardization by imposing its own specific vocabulary or grammar. These non-standard languages tend to be regarded as inferior and, therefore, pose a problem regarding their translation. In the USA, 'slanguage', or slang, is a good example of a 'parallel' language. It consists of a particular set of vocabulary, used mostly in speech, and rarely in writing. Qualified as vulgar, often reduced to an urban language spoken by young people from lower classes, slanguage – or the language that is often first spoken between youths – is still the most common language used in the English-speaking world. Moreover, it appears that the prime meaning of 'informal' (as in an informal language) – a language that is spoken with persons the speaker knows – has been put aside and replaced in the general mind by the idea of vulgarity and non-appropriateness, when in fact informality is a sign of intimacy, not of vulgarity. When it comes to translating American slang, the main problem a translator encounters is the image and the cultural background usually associated with this 'parallel' language. Indeed, one will have, unwillingly, a predisposition to categorize a speaker of a 'parallel' language as being part of a particular group of people. The way one sees a speaker using it is paramount, and needs to be transposed into the target language. This paper will conduct an analysis of American slang – its use, perception and the image it gives of its speakers – and its translation into French, using the novel Is Everyone Hanging Out Without Me? (and other concerns) by way of example. In her autobiography/personal essay book, comedy writer, actress and author Mindy Kaling speaks with a very familiar English, including slang, which participates in the construction of her own voice and style, and enables a deeper connection with her readers.Keywords: translation, English, slang, French
Procedia PDF Downloads 3171841 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once a document has been digitized through the scanning system and binarized, skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria; the most important are the accuracy of skew angle detection, the range of detectable skew angles, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been applied to text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angular step size is; increasing accuracy therefore consumes more time and memory space, especially when the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the conflict between memory space, running time and accuracy. Our algorithm starts by estimating the angle to zero decimal places (whole degrees) using the standard Hough Transform, which achieves minimal running time and space but limited accuracy. Then, to increase accuracy, supposing the angle estimated by the basic Hough algorithm is x degrees, the basic algorithm is run again over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented in MATLAB. The memory space estimates and processing times are also tabulated, under the assumption that the skew angle lies between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
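A minimal Python sketch of the coarse-to-fine idea just described: a first Hough pass with a 1° step, then repeated passes with a tenfold finer step over a narrow window around the previous estimate. The peak-sharpness voting criterion and the window width are assumptions, not the authors' exact (MATLAB) implementation.

```python
import numpy as np

def hough_skew(binary_doc, angles_deg):
    """Return the candidate angle (degrees) whose Hough accumulator is sharpest."""
    ys, xs = np.nonzero(binary_doc)                 # foreground (text) pixels
    diag = int(np.hypot(*binary_doc.shape)) + 1
    best_angle, best_score = angles_deg[0], -np.inf
    for a in angles_deg:
        t = np.deg2rad(a)
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        counts = np.bincount(rho, minlength=2 * diag)
        score = np.sum(counts.astype(float) ** 2)   # text lines -> few heavily filled rho bins
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle

def coarse_to_fine_skew(binary_doc, max_angle=45.0, decimals=2):
    # Coarse pass: whole degrees over the assumed 0-45 degree working range (both signs)
    step = 1.0
    estimate = hough_skew(binary_doc, np.arange(-max_angle, max_angle + 1.0, 1.0))
    # Refinement passes: one extra decimal place per iteration, around the last estimate
    for _ in range(decimals):
        step /= 10.0
        window = np.arange(estimate - 10 * step, estimate + 10 * step + step, step)
        estimate = hough_skew(binary_doc, window)
    return estimate
```

Because each refinement pass only evaluates a handful of angles near the previous estimate, the accumulator never has to be built at fine resolution over the whole angular range, which is where the memory and time savings come from.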
Procedia PDF Downloads 1541840 QSAR Study and Haptotropic Rearrangement in Estradiol Derivatives
Authors: Mohamed Abd Esselem Dems, Souhila Laib, Nadjia Latelli, Nadia Ouddai
Abstract:
In this work, we have developed a QSAR model for the Relative Binding Affinity (RBA) of a large, diverse set of estradiol derivatives, among them organometallic derivatives, by dividing the dataset into a training set of 24 compounds and a test set of 6 compounds. The DFT method was used to calculate quantum chemical descriptors, and physicochemical descriptors (MR and MLOGP) were computed using E-Dragon. All the validations indicated that the QSAR model built was robust and satisfactory (R² = 90.12, Q²LOO = 86.61, RMSE = 0.272, F = 60.6473, Q²ext = 86.07). We therefore applied this model to predict the RBA of the two isomers, β and α, in which Mn(CO)₃ is complexed with the aromatic ring of estradiol; both isomers show little affinity for the estrogen receptor (RBAβ = 1.812 and RBAα = 1.741).Keywords: DFT, estradiol, haptotropic rearrangement, QSAR, relative binding affinity
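An illustrative sketch, in Python with scikit-learn rather than the authors' tooling, of how such a model could be fit and validated: a multiple linear regression on the descriptor matrix, with training R², leave-one-out Q², an external-set Q², and RMSE. The regression form and the placeholder names X_train/y_train/X_test/y_test are assumptions; the abstract does not state the exact model equation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def build_qsar(X_train, y_train, X_test, y_test):
    # Fit the regression on the 24-compound training set (descriptors -> RBA)
    model = LinearRegression().fit(X_train, y_train)
    r2 = model.score(X_train, y_train)

    # Q2_LOO: leave-one-out predictions on the training set
    y_loo = cross_val_predict(LinearRegression(), X_train, y_train, cv=LeaveOneOut())
    press = np.sum((y_train - y_loo) ** 2)
    tss = np.sum((y_train - y_train.mean()) ** 2)
    q2_loo = 1.0 - press / tss

    # Q2_ext and RMSE on the external 6-compound test set
    y_pred = model.predict(X_test)
    q2_ext = 1.0 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_train.mean()) ** 2)
    rmse = np.sqrt(np.mean((y_test - y_pred) ** 2))
    return model, dict(R2=r2, Q2_LOO=q2_loo, Q2_ext=q2_ext, RMSE=rmse)
```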
Procedia PDF Downloads 2921839 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red blood cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental to analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break up in two-phase flow systems in order to find flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to determine RBC aggregation in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.Keywords: red blood cell, rouleaux, microfluidics, image processing, population balance modeling
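A minimal Python/scikit-image sketch of the extraction step just described: each video frame is thresholded, aggregates are labelled and their sizes recorded, and the sedimentation velocity is estimated from the drift of the aggregate centroids over time. The Otsu thresholding, frame rate, and pixel pitch are illustrative assumptions rather than the authors' actual acquisition parameters.

```python
import numpy as np
from skimage import filters, measure

def analyse_sedimentation(frames, frame_rate_hz=1000.0, um_per_pixel=0.5):
    sizes_per_frame, mean_rows = [], []
    for frame in frames:                                  # frames: list of 2-D grayscale arrays
        mask = frame < filters.threshold_otsu(frame)      # RBCs assumed darker than plasma
        labels = measure.label(mask)
        props = measure.regionprops(labels)
        sizes_per_frame.append([p.area for p in props])   # aggregate sizes (pixels)
        if props:
            mean_rows.append(np.mean([p.centroid[0] for p in props]))

    # Sedimentation velocity from the slope of the mean centroid row versus time
    mean_rows = np.asarray(mean_rows)
    t = np.arange(len(mean_rows)) / frame_rate_hz
    slope_px_per_s = np.polyfit(t, mean_rows, 1)[0]
    velocity_um_per_s = slope_px_per_s * um_per_pixel
    return sizes_per_frame, velocity_um_per_s
```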
Procedia PDF Downloads 353