Search results for: 2D particle image velocimetry
2729 Silver Nanoparticle Application in Food Packaging and Impacts on Food Safety and Consumer’s Health
Authors: Worku Dejene Bekele, András Marczika, Csilla Sörös
Abstract:
Silver nanoparticles are particles of metallic silver with a size of 1–100 nm, most commonly derived from inorganic silver salts. They can be ingested with our food as nanoparticles or silver ions, whether as an additive, as migrants from packaging, or, in some cases, as a pollutant. Silver nanoparticles are among the most widely applied engineered nanomaterials, especially for their antimicrobial function. They offer advantages for food safety, quality, and overall acceptability; however, they can affect human and animal health, putting consumers at risk of health problems, and contribute to environmental pollution. Silver nanoparticles have been used widely in food packaging technologies, especially in water treatment, meat and meat products, fruit, and many other food products, for the bio-preservation of food. The primary goals of this review are to assess the safety and health impact of silver nanoparticle applications in food packaging, to identify the human organs most affected by this preservation technology, to assess the implications of nanoparticles for food safety, to determine the effects of nanoparticles on consumer health, and to determine the impact of nanotechnology on product acceptability. Much recent research has demonstrated that silver nanoparticles may have toxicological effects on biological organs and systems: they affect DNA expression, gastrointestinal barriers, the lungs, and other respiratory organs. Silver particles and ions can be very toxic. In food packaging applications, the food industry uses the thinnest particles, which can potentially affect the gastrointestinal tract (including mucus production), DNA, the lungs, and other respiratory organs.
This review aims to demonstrate the knowledge gap in the industrial application of silver nanoparticles in food packaging and preservation and its health effects on the consumer.
Keywords: food preservatives, health impact, nanoparticle, silver nanoparticle
Procedia PDF Downloads 67
2728 Neural Network Approaches for Sea Surface Height Predictability Using Sea Surface Temperature
Authors: Luther Ollier, Sylvie Thiria, Anastase Charantonis, Carlos E. Mejia, Michel Crépon
Abstract:
Sea Surface Height Anomaly (SLA) is a signature of the sub-mesoscale dynamics of the upper ocean. Sea Surface Temperature (SST) is driven by these dynamics and can be used to improve the spatial interpolation of SLA fields. In this study, we focus on the temporal evolution of SLA fields and explore the capacity of deep learning (DL) methods to predict short-term SLA fields from SST fields. We used simulated daily SLA and SST data from the Mercator Global Analysis and Forecasting System, at a resolution of (1/12)° in the North Atlantic Ocean (26.5–44.42°N, 64.25–41.83°W), covering the period from 1993 to 2019. Using a slightly modified image-to-image convolutional DL architecture, we demonstrate that SST is a relevant variable for controlling the SLA prediction. With a learning process inspired by the teacher-forcing method, we improved the five-day SLA forecast by using the SST fields as additional information, obtaining errors of 12 cm (20 cm) in the SLA evolution for scales smaller than mesoscales at time horizons of 5 days (20 days), respectively. Moreover, the information provided by the SST allows us to limit the SLA error to 16 cm at 20 days when learning the trajectory.
Keywords: deep learning, altimetry, sea surface temperature, forecast
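The forecast-skill figures above (12 cm at 5 days, 20 cm at 20 days) are root-mean-square errors of the predicted SLA field against the simulated truth. A minimal sketch of scoring a forecast this way, using a trivial persistence baseline in place of the convolutional model (all field values here are illustrative toy numbers, not Mercator data):

```python
import math

def rmse(pred, truth):
    """Root-mean-square error between two equally sized, flattened SLA fields."""
    n = len(pred)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / n)

def persistence_forecast(sla_today):
    """Trivial baseline: the future SLA field equals today's field."""
    return list(sla_today)

# Toy 1-D 'fields' in metres (illustrative values only).
sla_day0 = [0.10, 0.12, 0.08, 0.11]
sla_day5 = [0.14, 0.15, 0.05, 0.13]

baseline = persistence_forecast(sla_day0)
err_m = rmse(baseline, sla_day5)
print(round(err_m * 100, 1), "cm")  # → 3.1 cm (error expressed in cm, as in the abstract)
```

A learned model is judged useful exactly when its RMSE beats such baselines at the target lead time.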
Procedia PDF Downloads 89
2727 Understanding the Influence of Social Media on Individual’s Quality of Life Perceptions
Authors: Biljana Marković
Abstract:
Social networks are an integral part of our everyday lives and have become an indispensable medium for communication in personal and business environments. New forms and ways of communication change the general mindset and significantly affect the quality of life of individuals. Quality of life is often perceived as an abstract term, and people are frequently unaware that they directly affect the quality of their own lives through minor but significant everyday choices and decisions. Quality of life can be defined broadly, but in the widest sense it involves a subjective sense of satisfaction with one's life. Scientific knowledge about the impact of social networks on individuals' self-assessment of their quality of life is only just beginning to accumulate; available research indicates potential benefits as well as a number of disadvantages. Against this background, the study conducted by the authors of this paper focuses on the impact of social networks on individuals' self-assessment of quality of life and on the correlation between time spent on social networks and the content individuals choose to share in order to present themselves. It also aims to explain how much, and in what ways, they critically judge the lives of others online. The research seeks to show the positive as well as the negative effects that social networks, primarily Facebook and Instagram, have on the image individuals create of themselves and on how they compare themselves with others. The paper is based on quantitative research conducted on a representative sample. An analysis of the results of the online survey supports the hypothesis that content shared by individuals on social networks influences the image they create about themselves.
A comparative analysis of these results with those of similar research leads to a conclusion about the synergistic influence of social networks on respondents' perceived quality of life. The originality of this work lies in its approach: examining attitudes about individuals' life satisfaction, the way they create an image of themselves through social networks, the extent to which they compare themselves with others, and which social media applications they use. At the cognitive level, scientific contributions are made through the development of information concepts on quality of life; at the methodological level, through the development of an original methodology for qualitative alignment of respondents' attitudes using statistical analysis; and at the practical level, through the application of these concepts in assessing how self-image and the image of others are created through social networks.
Keywords: quality of life, social media, self-image, influence of social media
Procedia PDF Downloads 127
2726 Preparation and Characterization of Chitosan Nanoparticles for Delivery of Oligonucleotides
Authors: Gyati Shilakari Asthana, Abhay Asthana, Dharm Veer Kohli, Suresh Prasad Vyas
Abstract:
Purpose: The therapeutic potential of oligonucleotides (ODNs) depends primarily on their safe and efficient delivery to specific cells, overcoming degradation and maximizing cellular uptake in vivo. The present study focuses on designing low-molecular-weight (LMW) chitosan nanoconstructs to meet the requirements of safe and effective delivery of ODNs. LMW chitosan is a biodegradable, water-soluble, biocompatible polymer that is useful as a non-viral vector for gene delivery owing to its good stability in water. Methods: LMW chitosan–ODN nanoparticles (CHODN NPs) were formulated by a self-assembly method using various N/P ratios (mole ratio of amine groups of chitosan to phosphate moieties of ODNs: 0.5:1, 1:1, 3:1, 5:1, and 7:1). The developed CHODN NPs were evaluated with respect to gel retardation, particle size, zeta potential, cytotoxicity, and transfection efficiency. Results: Complete complexation of CH/ODN was achieved at a charge ratio of 0.5:1 or above, and the CHODN NPs displayed resistance against DNase I. On increasing the N/P ratio, the particle size of the NPs decreased whereas the zeta potential increased. No significant toxicity was observed at any chitosan concentration. Transfection efficiency increased as the N/P ratio rose from 1:1 to 3:1 and decreased with further increases up to 7:1; maximum transfection of CHODN NPs in both cell lines (RAW 264.7 cells and HeLa cells) was achieved at an N/P ratio of 3:1. These results suggest that the transfection efficiency of CHODN NPs is dependent on the N/P ratio. Conclusion: LMW chitosan nanoparticulate carriers are therefore an acceptable choice for improving transfection efficiency in vitro as well as for in vivo delivery of oligonucleotides.
Keywords: LMW chitosan, chitosan nanoparticles, biocompatibility, cytotoxicity study, transfection efficiency, oligonucleotide
Procedia PDF Downloads 848
2725 The Effects of 2016 Rio Olympics as Nation's Soft Power Strategy
Authors: Keunsu Han
Abstract:
Sport has been used as a valuable tool for countries to enhance their brand image and to pursue broader political interests, and the Olympic Games are one of the best examples of a mega sport event serving such purposes. The term "soft power," coined by Nye, refers to a country's ability to persuade and attract foreign audiences through non-coercive means such as cultural, diplomatic, and economic ones. This concept of soft power offers significant answers to why countries are willing to host a mega sport event such as the Olympics. This paper reviews Nye's concept of soft power as the theoretical framework of the study, in order to understand the critical motivation for countries to host the Olympics, and examines the effects of the 2016 Rio Olympics as a state soft-power strategy. Through analysis of data including media, government, and private-sector documents, this research analyzes both the negative and positive aspects of the nation's image created during the Rio Olympics and discusses their effects as Brazil's chance to showcase its soft power by highlighting the best the country has to offer.
Keywords: country brand, olympics, soft power, sport diplomacy, mega sport event
Procedia PDF Downloads 458
2724 Spectral Mixture Model Applied to Cannabis Parcel Determination
Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara
Abstract:
Many research projects require accurate delineation of the different land cover types of an agricultural area, and this is especially critical for identifying specific plants such as cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transition between different secondary succession stages make vegetation classification difficult with traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees, annuals, or grain, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixed distribution model approach is applied to classify pure and mixed cannabis parcels using WorldView-2 imagery in the Lakes region of Turkey. Five different land use types (e.g., sunflower, maize, bare soil, and cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from the spectral signatures of seven bands (Blue, Green, Yellow, Red, Red Edge, NIR1, NIR2) were randomly split into 80% training and 20% test data. The Gaussian mixed distribution model approach proved to be an effective and convenient way to use very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall classification accuracies, the Gaussian mixed distribution model was found to be very successful at image classification tasks. The approach is sensitive enough to capture illegal cannabis planting areas in the large plain and can also be used for monitoring and identifying such areas through their spectral reflectance.
Keywords: Gaussian mixture discriminant analysis, spectral mixture model, Worldview-2, land parcels
Procedia PDF Downloads 194
2723 ARABEX: Automated Dotted Arabic Expiration Date Extraction using Optimized Convolutional Autoencoder and Custom Convolutional Recurrent Neural Network
Authors: Hozaifa Zaki, Ghada Soliman
Abstract:
In this paper, we introduce ARABEX, an approach for automated dotted Arabic expiration date extraction using an optimized convolutional autoencoder with bidirectional LSTM, which translates Arabic dot-matrix expiration dates into their corresponding filled-in dates. A custom lightweight Convolutional Recurrent Neural Network (CRNN) model is then employed to extract the expiration dates. Because no dataset of Arabic dot-matrix expiration date images was available, we generated synthetic images by creating an Arabic dot-matrix TrueType Font (TTF) to address this limitation. Our model was trained on a realistic synthetic dataset of 3287 images covering the period from 2019 to 2027, in the format yyyy/mm/dd. We then trained our custom CRNN model on the generated synthetic images to assess the performance of ARABEX by extracting expiration dates from the translated images. Our proposed approach achieved an accuracy of 99.4% on the test dataset of 658 images, as well as a Structural Similarity Index (SSIM) of 0.46 for image translation on our dataset. The ARABEX approach can be applied to various downstream learning tasks, including image translation and reconstruction. Moreover, the pipeline (ARABEX+CRNN) can be seamlessly integrated into automated sorting systems to extract expiry dates and sort products accordingly during the manufacturing stage. By eliminating the need for manual entry of expiration dates, which is time-consuming and inefficient for merchants, our approach offers significant gains in efficiency and accuracy for Arabic dot-matrix expiration date recognition.
Keywords: computer vision, deep learning, image processing, character recognition
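The synthetic-dataset construction above starts from date strings in the yyyy/mm/dd format over 2019–2027, which are then rendered with the dot-matrix font. A minimal sketch of the string-generation step (the rendering into images is not shown; the function name and sampling scheme are illustrative, not the authors' code):

```python
import random
from datetime import date, timedelta

def random_expiry_dates(n, start=date(2019, 1, 1), end=date(2027, 12, 31), seed=0):
    """Sample n expiry-date strings in yyyy/mm/dd format, as in the ARABEX dataset."""
    rng = random.Random(seed)
    span = (end - start).days
    out = []
    for _ in range(n):
        d = start + timedelta(days=rng.randrange(span + 1))
        out.append(d.strftime("%Y/%m/%d"))
    return out

dates = random_expiry_dates(5)
print(dates)  # five strings such as '2023/07/14'
```

Each string would then be typeset with the custom dot-matrix TTF to produce a training image, with the string itself as the label.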
Procedia PDF Downloads 81
2722 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA
Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell
Abstract:
Hedgerows play an important role in a wide range of ecological habitats, in the landscape, and in agricultural management, carbon sequestration, and wood production. Accurate hedgerow detection from satellite imagery is a challenging remote sensing problem because, spatially, a hedge closely resembles a linear object such as a road, while, spectrally, it is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled automatic hedge detection by acquiring images with sufficient spectral and spatial resolution. Indeed, recent VHR remote sensing data provide the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characteristics at the landscape scale. This research uses TerraSAR-X Spotlight and Staring mode data at 3–5 m resolution, acquired in the wet and dry seasons of 2014–2015 over a test site near Fermoy, Ireland, to detect hedgerows. Both polarizations of the dual-polarized (HH/VV) Spotlight data are used. A variety of SAR image techniques, explored by trial and error and integrating classification algorithms such as texture analysis, support vector machines, k-means, and random forests, are applied to detect hedgerows and characterize them. We apply Shannon entropy (ShE) and backscattering analysis of single and double bounce in a polarimetric analysis to drive an object-oriented classification and, finally, to extract the hedgerow network. The work is still in progress, and other methods will also be applied to find the best approach for the study area. The preliminary results presented here indicate that polarimetric TSX imagery can potentially detect hedgerows.
Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis
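Shannon entropy (ShE), mentioned above as part of the polarimetric analysis, measures the disorder of values in a local window; a generic histogram-based sketch is shown here (this is the textbook definition applied to quantized backscatter values, not the TerraSAR-X-specific polarimetric formulation):

```python
import math
from collections import Counter

def shannon_entropy(values, bins=8, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of values histogrammed into equal-width bins."""
    counts = Counter(min(int((v - lo) / (hi - lo) * bins), bins - 1) for v in values)
    n = len(values)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h if h > 0 else 0.0

uniform = [i / 8 + 0.01 for i in range(8)]  # one value per bin
print(round(shannon_entropy(uniform), 2))   # → 3.0 (maximum for 8 bins)
print(shannon_entropy([0.5] * 10))          # → 0.0 (no variability)
```

High-entropy windows (mixed scattering, as over hedgerow edges) then separate from low-entropy homogeneous fields in the object-oriented classification.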
Procedia PDF Downloads 229
2721 Generation of Charged Nanoparticles and Their Contribution to the Thin Film and Nanowire Growth during Chemical Vapour Deposition
Authors: Seung-Min Yang, Seong-Han Park, Sang-Hoon Lee, Seung-Wan Yoo, Chan-Soo Kim, Nong-Moon Hwang
Abstract:
The theory of charged nanoparticles suggests that in many Chemical Vapour Deposition (CVD) processes, Charged Nanoparticles (CNPs) are generated in the gas phase and become the building blocks of thin films and nanowires. Recently, nanoparticle-based crystallization has become a major topic, since the growth of nanorods and crystals from nanoparticle building blocks has been directly observed by transmission electron microscopy in liquid cells. In an effort to confirm charged gas-phase nuclei, which might be generated under conventional processing conditions for thin films and nanowires during CVD, we performed in-situ measurements using a differential mobility analyser and a particle beam mass spectrometer. The size distribution and number density of CNPs were affected by process parameters such as precursor flow rate and working temperature. It was shown that many films and nanostructures, which have been believed to grow from individual atoms or molecules, actually grow from the building blocks of such charged nuclei. The electrostatic interaction between CNPs and the growing surface induces their self-assembly into films and nanowires. In addition, charge-enhanced atomic diffusion makes CNPs a liquid-like quasi-solid. As a result, CNPs tend to land epitaxially on the growing surface, which results in the growth of single-crystalline nanowires with smooth surfaces.
Keywords: chemical vapour deposition, charged nanoparticle, electrostatic force, nanostructure evolution, differential mobility analyser, particle beam mass spectrometer
Procedia PDF Downloads 450
2720 A Sociological Exploration of How Chinese Highly Educated Women Respond to the Gender Stereotype in China
Authors: Qian Wang
Abstract:
In this study, Chinese highly educated women refers to women who are currently pursuing Ph.D. studies and those who already hold Ph.D. degrees. In ancient Chinese society, women were subordinate to men; the only gender role of a woman was to be a wife and a mother. With the rapid development of China, women are now encouraged to pursue higher education, and as a result the number of highly educated women is growing very quickly. However, people, especially men, believe that highly educated women challenge the traditional image of Chinese women. Highly educated women are thus seen as very different from traditional women, demonstrating an image of independent and confident women with promising careers. Reinforced by the mass media, highly educated women are regarded as non-traditional women and are stigmatized as a 'third gender' alongside male and female; this 'third gender' has become a gender stereotype of highly educated women. In this study, 20 participants were interviewed to explore their perceptions of self and how these highly educated women respond to the stereotype. The study finds that Chinese highly educated women face a variety of problems and difficulties in their daily lives and believe that one of the leading causes is the contradiction between patriarchal values and views of gender equality in contemporary China. This study provides rich qualitative data for research on Chinese women and will help to extend current Chinese gender studies.
Keywords: Chinese highly educated women, gender stereotype, self, the 'third gender'
Procedia PDF Downloads 193
2719 Re-Presenting the Egyptian Informal Urbanism in Films between 1994 and 2014
Authors: R. Mofeed, N. Elgendy
Abstract:
Cinema constructs mind-spaces that reflect inherent human thoughts and emotions. As a representational art, cinema introduces comprehensive images of life phenomena in different ways. The term 'represent' suggests a variety of meanings: to bring into presence, to replace, or to typify. In that sense, cinema may present a phenomenon through direct embodiment, introduce a substitute image that replaces the original phenomenon, or typify it by relating the produced image to a more general category through a process of abstraction. This research questions the type of images that Egyptian cinema presents of informal urbanism and how these images were conditioned and reshaped over the last twenty years. The informalities/slums phenomenon first appeared in Egypt, and particularly Cairo, in the early sixties; however, it was completely ignored by the state and society until the eighties, and its evident representation in cinema came only in the mid-nineties. The informal city comprises illegal housing developments and is a fast-growing form of urbanization in Cairo. Yet this expanding phenomenon is still depicted as minor, exceptional, and marginal through the cinematic lens. This paper aims to trace the forms of representation of urban informalities in Egyptian cinema between 1994 and 2014, and how they affected the popular mind and its perception of these areas. The paper follows two main lines of inquiry. The first traces the phenomenon through a chronological and geographical mapping of how informal urbanism has been portrayed in films, based on academic research work at Cairo University in fall 2014. Visual tracing through maps and timelines allowed a reading of the phases of ignorance, presence, typifying, and repetition in the representation of this huge sector of the city across the more than 50 films investigated.
This analysis clearly revealed the 'portrayed image' of informality in cinema over the examined period. The second part of the paper explores the 'perceived image': a questionnaire was designed to highlight the main features of that image as perceived both by inhabitants of informal areas and by other Cairenes, based on watching selected films. The questionnaire covers the different images of informality proposed in cinema, whether against a comic or melodramatic background, and highlights the descriptive terms used, to see which of them resonate with mass perceptions and affect mental images. The two images, 'portrayed' and 'perceived', are then compared to reflect on issues of repetition, stereotyping, and reality. The resulting stereotype of informal urbanism is finally outlined and justified in relation to both the production-consumption mechanisms of films and the state's official vision of informality.
Keywords: cinema, informal urbanism, popular mind, representation
Procedia PDF Downloads 294
2718 Multi Universe Existence Based-On Quantum Relativity using DJV Circuit Experiment Interpretation
Authors: Muhammad Arif Jalil, Somchat Sonasang, Preecha Yupapin
Abstract:
This study hypothesizes that the universe lies at the center among white and black holes, which form entangled pairs; the coupling between them, in terms of spacetime, forms the universe and the things within it. The birth of things is based on energy exchange between the white and black sides. The transition from the white side to the black side, called wave-matter, has a speed faster than light with positive gravity; the transition from the black side to the white side, called wave-particle, has a speed faster than light with negative gravity. Where the speed equals that of light, particle rest mass is formed, and things can take shape there; the gravity is zero because it is the center. The gravitational force belongs to the Earth itself because it sits in a position twisted towards the white hole, and is therefore negative. The coupling of black and white holes occurs directly on both sides; mass is formed at saturation and creates universes and other things. There can therefore be hundreds of thousands of universes on both sides of the black and white holes before the saturation point of the multi-universe is reached. This work uses the DJV circuit built by the research team, an entangled (two-level system) circuit that has been experimentally demonstrated, so this principle offers a possibility for interpretation. The work explains the emergence of multiple universes and can be applied as a practical guideline for searching for universes in the future. Moreover, the results indicate that the DJV circuit can create the elementary particles according to Feynman's diagram under rest-mass conditions, which will be discussed for fission and fusion applications.
Keywords: multi-universes, Feynman diagram, fission, fusion
Procedia PDF Downloads 62
2717 Feature Weighting Comparison Based on Clustering Centers in the Detection of Diabetic Retinopathy
Authors: Kemal Polat
Abstract:
In this paper, three feature weighting methods are used to improve the classification performance for diabetic retinopathy (DR). To classify DR, features extracted from the output of several retinal image processing algorithms, covering image-level, lesion-specific, and anatomical components, were fed into the classifier algorithms. The dataset used in this study was taken from the University of California, Irvine (UCI) machine learning repository. Feature weighting methods based on fuzzy c-means clustering, subtractive clustering, and Gaussian mixture clustering were used and compared with each other for DR classification. After feature weighting, five different classifier algorithms were applied: multi-layer perceptron (MLP), k-nearest neighbor (k-NN), decision tree, support vector machine (SVM), and Naïve Bayes. The hybrid method combining subtractive-clustering-based feature weighting with a decision tree classifier achieved a classification accuracy of 100% in DR screening. These results demonstrate that the proposed hybrid scheme is very promising for medical dataset classification.
Keywords: machine learning, data weighting, classification, data mining
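Cluster-center-based feature weighting, as used above, rescales each feature according to how its values relate to a cluster center before classification. A minimal sketch of one common variant of the idea — weighting each feature by the ratio of its grand mean to its cluster-center value — is shown below; this is an illustration of the general scheme, not Polat's exact FCM, subtractive, or Gaussian-mixture formulations, and the toy data and the stand-in center are invented:

```python
def feature_means(data):
    """Per-feature grand means over a list of feature vectors."""
    n = len(data)
    return [sum(row[j] for row in data) / n for j in range(len(data[0]))]

def cluster_center_weights(data, center):
    """Illustrative rule: weight_j = grand_mean_j / center_j."""
    means = feature_means(data)
    return [m / c for m, c in zip(means, center)]

def apply_weights(data, weights):
    """Rescale every feature vector by the per-feature weights."""
    return [[x * w for x, w in zip(row, weights)] for row in data]

# Toy two-feature dataset; 'center' stands in for a subtractive-clustering center.
data = [[1.0, 10.0], [3.0, 30.0], [2.0, 20.0]]
center = [2.0, 10.0]
w = cluster_center_weights(data, center)
print(w)            # → [1.0, 2.0]
weighted = apply_weights(data, w)
print(weighted[0])  # → [1.0, 20.0]
```

The weighted data, rather than the raw features, is then passed to MLP, k-NN, decision tree, SVM, or Naïve Bayes.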
Procedia PDF Downloads 324
2716 Semi-Automatic Segmentation of Mitochondria on Transmission Electron Microscopy Images Using Live-Wire and Surface Dragging Methods
Authors: Mahdieh Farzin Asanjan, Erkan Unal Mumcuoglu
Abstract:
Mitochondria are cytoplasmic organelles of the cell that play a significant role in a variety of cellular metabolic functions. Mitochondria act as the power plants of the cell and are surrounded by two membranes. Significant morphological alterations are often due to changes in mitochondrial function, and electron microscope tomography is a powerful technique for studying the three-dimensional (3D) structure of mitochondria and its alterations in disease states. Detecting mitochondria in electron microscopy images is challenging due to the presence of various subcellular structures and imaging artifacts, and each image typically contains more than one mitochondrion. Hand segmentation of mitochondria is tedious and time-consuming and also requires special knowledge about mitochondria, while fully automatic segmentation methods lead to over-segmentation, with mitochondria not segmented properly. Therefore, semi-automatic segmentation methods requiring minimal manual effort are needed to edit the results of fully automatic methods. Here, two editing tools were implemented, applying spline-based surface dragging and interactive live-wire segmentation, and applied separately to the results of fully automatic segmentation; 3D extensions of these tools were also studied and tested. Dice coefficients in 2D and 3D for surface dragging using splines were 0.93 and 0.92, and for the live-wire method 0.94 and 0.91, respectively. The root mean square symmetric surface distances in 2D and 3D for surface dragging were 0.69 and 0.93, and for the live-wire tool 0.60 and 2.11, respectively.
Comparing the results of these editing tools with those of the automatic segmentation method shows that the editing tools led to better results, more similar to the ground truth image, although the required time was higher than for hand segmentation.
Keywords: medical image segmentation, semi-automatic methods, transmission electron microscopy, surface dragging using splines, live-wire
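The Dice coefficients reported above (e.g. 0.93/0.94 in 2D) measure the overlap between a segmentation and the ground truth mask. A minimal sketch of the metric on flat binary masks (toy masks, not the mitochondria data):

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for flat binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree perfectly

auto = [1, 1, 1, 0, 0, 1]   # edited segmentation result
truth = [1, 1, 0, 0, 1, 1]  # ground truth mask
print(round(dice(auto, truth), 2))  # → 0.75
```

A value of 1.0 means perfect overlap; the 3D variant simply flattens the voxel volumes before the same computation.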
Procedia PDF Downloads 168
2715 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, since these processes can be realized through algorithms for counting, positioning, and recognition of objects measured by a single camera (or more). Companies also use assembly lines formed by conveyor systems with actuators that move pieces from one location to another in production; these devices must be programmed in advance with a logic routine for good performance. Nowadays production is the main target of every industry, together with quality and the fast elaboration of the different stages and processes in the chain of production of any product or service being offered. The principal goal of this project is to program a computer that recognizes geometric figures (circle, square, and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that sort the figures into cubicles, which are also distinguished by color. Because this project is based on artificial vision, the methodology must be strict; it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator linked with the OpenCV libraries; together, these tools are used to implement the program that identifies colors and shapes directly from the camera. 1.2 Image acquisition: to use the OpenCV libraries it is necessary to acquire images, which can be captured by a computer's web camera or a separate specialized camera.
1.3 RGB color recognition is performed in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes it is necessary to segment the images: the first step is converting the image from RGB to grayscale to work with the dark tones of the image; the image is then binarized, leaving the figure in white on a black background; finally, the contours of the figure are found and the number of edges counted to identify which figure it is. 1.5 After the color and figure have been identified, the program links to the conveyor systems, which, through the actuators, sort the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects requiring an interface between a computer and the environment, since the camera captures external characteristics for any process. With the program developed in this project, any type of assembly line can be optimized, because images of the environment can be acquired and the process made more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
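Steps 1.3–1.4 above (grayscale conversion, binarization, and shape identification from the contour's edge count) can be sketched in pure Python; in the actual pipeline these correspond to OpenCV calls such as cv2.cvtColor, cv2.threshold, cv2.findContours, and cv2.approxPolyDP. The tiny image and threshold value here are illustrative only:

```python
def to_gray(pixel_rgb):
    """Luminance grayscale conversion (what cv2.cvtColor RGB→GRAY computes)."""
    r, g, b = pixel_rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(gray_image, threshold=128):
    """White figure on a black background (cf. cv2.threshold, THRESH_BINARY)."""
    return [[255 if px > threshold else 0 for px in row] for row in gray_image]

def shape_from_vertices(n_vertices):
    """Map an approximated contour's vertex count to a figure name."""
    return {3: "triangle", 4: "square"}.get(n_vertices, "circle")

# One white pixel and one dark pixel, as a 1x2 toy 'image'.
gray = [[to_gray(p) for p in row] for row in [[(255, 255, 255), (10, 10, 10)]]]
print(binarize(gray))  # → [[255, 0]]
print(shape_from_vertices(3), shape_from_vertices(4), shape_from_vertices(8))
```

Any contour whose polygon approximation has many vertices falls through to "circle", which is the usual heuristic for this three-shape task.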
Procedia PDF Downloads 377
2714 Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork
Authors: Toni Maristela C. Estabillo, Michaela V. Matienzo, Mikaela L. Sabangan, Rosette M. Tienzo, Justine L. Bahinting
Abstract:
This study concerns blind watermarking of images of different categories and properties using two algorithms: the Discrete Wavelet Transform (DWT) and the Patchwork algorithm. A program was created to perform watermark embedding, extraction, and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency, and security. Image quality is measured by comparing the original properties with those of the processed image. Perceptual transparency is measured by visual inspection in a survey. Security is measured by applying geometrical and non-geometrical attacks in a pass-or-fail test. The values used to measure these criteria are based mostly on the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The results rest on statistical methods used to interpret and collect data, such as averaging, the z-test, and the survey. The study concluded that the combined DWT and Patchwork algorithms were less efficient and less capable of watermarking than the DWT algorithm alone. Keywords: blind watermarking, discrete wavelet transform algorithm, patchwork algorithm, digital watermark
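As a toy illustration of combining a wavelet transform with Patchwork-style embedding (a sketch in the spirit of the paper, not its implementation), the snippet below hand-codes a one-level Haar DWT, raises one pseudo-random patch of the LL subband and lowers another to embed a single bit, and extracts it blindly by comparing patch means. The choice of subband, the embedding strength, and the seed are all assumptions.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar transform: returns LL, LH, HL, HH subbands.
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def embed_bit(img, bit, delta=8.0, seed=0):
    # Patchwork in the wavelet domain: raise patch A and lower patch B of the
    # LL subband for a 1 bit (reversed for a 0 bit).
    ll, lh, hl, hh = haar_dwt2(img.astype(float))
    mask = np.random.default_rng(seed).random(ll.shape) < 0.5
    sign = 1.0 if bit else -1.0
    ll = ll + sign * np.where(mask, delta, -delta)
    return haar_idwt2(ll, lh, hl, hh)

def extract_bit(marked, seed=0):
    # Blind extraction: no cover image needed, only the shared seed.
    ll, _, _, _ = haar_dwt2(marked.astype(float))
    mask = np.random.default_rng(seed).random(ll.shape) < 0.5
    return int(ll[mask].mean() > ll[~mask].mean())

cover = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
print(extract_bit(embed_bit(cover, 1)))  # 1
```

A real evaluation would then compute MSE/PSNR between cover and marked images and repeat extraction after attacks.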
Procedia PDF Downloads 267
2713 Surface Defect-Engineered CeO₂₋ₓ by Ultrasound Treatment for Superior Photocatalytic H₂ Production and Water Treatment
Authors: Nabil Al-Zaqri
Abstract:
Semiconductor photocatalysts with surface defects display remarkable light absorption bandwidth, and these defects function as highly active sites for oxidation processes by interacting with the surface band structure. Accordingly, engineering the photocatalyst with surface oxygen vacancies will enhance the semiconductor nanostructure's photocatalytic efficiency. Herein, a CeO₂₋ₓ nanostructure is designed under the influence of low-frequency ultrasonic waves to create surface oxygen vacancies. This approach enhances the photocatalytic efficiency compared to many heterostructures while keeping the intrinsic crystal structure intact. Ultrasonic waves induce the acoustic cavitation effect, leading to the dissemination of active elements on the surface, which results in vacancy formation in conjunction with a larger surface area and smaller particle size. The structural analysis of CeO₂₋ₓ revealed higher crystallinity as well as morphological optimization, and the presence of oxygen vacancies is verified through Raman, X-ray photoelectron spectroscopy, temperature-programmed reduction, photoluminescence, and electron spin resonance analyses. Oxygen vacancies accelerate the redox cycle between Ce⁴⁺ and Ce³⁺ by prolonging photogenerated charge recombination. The ultrasound-treated pristine CeO₂ sample achieved excellent hydrogen production, showing a quantum efficiency of 1.125%, and efficient organic degradation. These promising findings demonstrate that ultrasonic treatment causes the formation of surface oxygen vacancies and improves photocatalytic hydrogen evolution and pollution degradation. Conclusion: Defect engineering of the ceria nanoparticles with oxygen vacancies was achieved for the first time using low-frequency ultrasound treatment. The U-CeO₂₋ₓ sample showed high crystallinity, and morphological changes were observed. Due to the acoustic cavitation effect, a larger surface area and smaller particle size were observed. 
The ultrasound treatment causes particle aggregation and surface defects, leading to oxygen vacancy formation. The XPS, Raman spectroscopy, PL spectroscopy, and ESR results confirm the presence of oxygen vacancies. The ultrasound-treated sample was also examined for pollutant degradation, where ¹O₂ was found to be the major active species. Hence, the ultrasound treatment yields efficient photocatalysts for superior hydrogen evolution and an excellent photocatalytic degradation of contaminants. The prepared nanostructure showed excellent stability and recyclability. This work could pave the way for a unique post-synthesis strategy intended for efficient photocatalytic nanostructures. Keywords: surface defect, CeO₂₋ₓ, photocatalytic, water treatment, H₂ production
Procedia PDF Downloads 139
2712 Effect of Organophilic Clay on the Stability and Rheological Behavior of Oil-Based Drilling Muds
Authors: Hammadi Larbi
Abstract:
The major problem with oil-based drilling muds (invert emulsions) is their thermodynamic instability and their high tendency to coalesce over time, leading irreversibly to destabilization. Water-in-oil invert-emulsion drilling muds are highly recommended when significant depths are reached. This study aimed to contribute experimentally to the knowledge of the structure (stability) and rheological behavior of drilling mud systems based on water/crude-oil invert emulsions through an investigation of the effect of organophilic clay. The chemical composition of an organophilic clay such as VG69 shows a strong presence of silicon oxide (SiO2), followed by aluminum oxide (Al2O3), so these two elements are considered the main constituents of organophilic clays. The study also shows that the SiO2/Al2O3 ratio is equal to 3.52, which can be explained by the high content of free silica in the organophilic clay used. Particle size analysis of the organophilic clays showed that the analysed particles lie in the range of 30 to 80 μm; this result confirms the particle size quality of the organophilic clays and allows these powders to be used in drilling mud systems. The experimental data from steady-state flow measurements are analyzed in the classic way with the Herschel-Bulkley model. Microscopic observation shows that adding quantities of organophilic clay of type VG69 less than or equal to 3 g leads to stable water/oil invert emulsions, whereas for quantities greater than 3 g the emulsions are destabilized. The results also showed that adding 3 g of organophilic clay to the crude-oil drilling mud improves its stability by 70%. Keywords: drilling muds, inverse emulsions, rheological behavior, yield stress, stability, organophilic clay
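The Herschel-Bulkley model referred to above relates shear stress to shear rate as tau = tau0 + K * rate^n (yield stress tau0, consistency K, flow index n). As a hedged sketch on synthetic data (not the paper's measurements), the snippet below recovers the three parameters by grid-searching tau0 and fitting K and n with a log-log linear regression:

```python
import numpy as np

def herschel_bulkley(rate, tau0, k, n):
    # Herschel-Bulkley law: shear stress = yield stress + consistency * rate^n
    return tau0 + k * rate ** n

def fit_hb(rate, stress):
    # Grid-search the yield stress, then solve for K and n by linear
    # regression of log(stress - tau0) against log(rate).
    best = None
    for tau0 in np.linspace(0.0, stress.min() * 0.999, 200):
        n, logk = np.polyfit(np.log(rate), np.log(stress - tau0), 1)
        resid = np.sum((herschel_bulkley(rate, tau0, np.exp(logk), n)
                        - stress) ** 2)
        if best is None or resid < best[0]:
            best = (resid, tau0, np.exp(logk), n)
    return best[1:]

# Synthetic flow curve for an invert-emulsion mud (assumed parameters):
# tau0 = 12 Pa, K = 0.8 Pa.s^n, n = 0.6.
rate = np.logspace(-1, 3, 30)          # shear rates, 0.1 to 1000 1/s
stress = herschel_bulkley(rate, 12.0, 0.8, 0.6)
tau0, k, n = fit_hb(rate, stress)
print(tau0, k, n)  # close to (12.0, 0.8, 0.6)
```

In practice the same fit would be run on the measured steady-state flow curves before and after clay addition to track the yield stress.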
Procedia PDF Downloads 7
2711 Molecular Dynamics Simulation of Cold Spray Process
Authors: Aneesh Joshi, Sagil James
Abstract:
The Cold Spray (CS) process deposits solid particles onto a substrate above a certain critical impact velocity. Unlike thermal spray processes, the CS process does not melt the particles, which thus retain their original physical and chemical properties. These characteristics make the CS process ideal for various engineering applications involving metals, polymers, ceramics, and composites. The bonding mechanism involved in the CS process is extremely complex given the dynamic nature of the process. Though the CS process offers great promise for several engineering applications, the realization of its full potential is limited by the lack of understanding of the complex mechanisms involved and of the effect of critical process parameters on deposition efficiency. The goal of this research is to understand the complex nanoscale mechanisms involved in the CS process. The study uses the Molecular Dynamics (MD) simulation technique to understand the material deposition phenomenon during the CS process. The impact of a single-crystalline copper nanoparticle on a copper substrate is modelled under varying process conditions. The quantitative results of impacts at different velocities, impact angles, and particle sizes are evaluated using the flattening ratio, the von Mises stress distribution, and the local shear strain. The study finds that the flattening ratio, and hence the quality of deposition, was highest for an impact velocity of 700 m/s, a particle size of 20 Å, and an impact angle of 90°. The stress and strain analysis revealed regions of shear instability in the periphery of the impact and plastic deformation of the particles after the impact. The results of this study can be used to augment our existing knowledge in the field of CS processes. Keywords: cold spray process, molecular dynamics simulation, nanoparticles, particle impact
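The von Mises measure used above can be computed per atom or per region from a (virial) stress tensor. The helpers below are a generic sketch, not tied to the authors' MD code; note that the exact definition of the flattening ratio varies between studies, so the one used here is an assumption.

```python
import numpy as np

def von_mises(s):
    # s: symmetric 3x3 Cauchy stress tensor (e.g. averaged per-atom virial
    # stress from an MD run). Returns the equivalent von Mises stress.
    dev = s - np.trace(s) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

def flattening_ratio(splat_height, initial_diameter):
    # One common deposition-quality metric: deformed splat height over the
    # initial particle diameter (definitions differ between studies).
    return splat_height / initial_diameter

# Sanity check: uniaxial tension of 300 MPa gives a von Mises stress of
# exactly the applied stress.
s = np.diag([300e6, 0.0, 0.0])
print(von_mises(s) / 1e6)  # 300.0
```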
Procedia PDF Downloads 365
2710 Extending the Theory of Planned Behaviour to Predict Intention to Commute by Bicycle: Case Study of Mexico City
Authors: Magda Cepeda, Frances Hodgson, Ann Jopson
Abstract:
There are different barriers people face when choosing to cycle for commuting purposes. This study examined the role of psycho-social factors in predicting the intention to cycle to commute in Mexico City. An extended version of the theory of planned behaviour was developed and applied to a simple random sample of 401 road users. We applied exploratory and confirmatory factor analysis and, after identifying five factors, estimated a structural equation model to find the relationships among the variables. The results indicated that cycling attributes, attitudes to cycling, social comparison, and social image and prestige were the most important factors influencing intention to cycle. Although the results from this study are specific to Mexico City, they indicate areas of interest to transportation planners in other regions, especially in cities where the intention to cycle is linked to its perceived image and there is political ambition to instigate positive cycling cultures. Moreover, this study contributes to the current literature by developing applications of the Theory of Planned Behaviour. Keywords: cycling, latent variable model, perception, theory of planned behaviour
Procedia PDF Downloads 353
2709 KCBA, A Method for Feature Extraction of Colonoscopy Images
Authors: Vahid Bayrami Rad
Abstract:
In recent years, the use of artificial intelligence techniques, tools, and methods for processing medical images and in health-related applications has attracted much attention, and a great deal of research has been done in this area. For example, colonoscopy and the diagnosis of colon lesions are cases in which the diagnostic process can be improved by image processing and artificial intelligence algorithms, which greatly help doctors. Due to the lack of accurate measurements and the variety of injuries visible in colonoscopy images, diagnosing the type of lesion is difficult even for expert doctors. By using suitable software and image processing, doctors can therefore be helped to increase the accuracy of their observations and ultimately improve their diagnoses, and automatic methods can further improve the identification of the disease type. In this paper, a deep learning framework called KCBA is proposed to classify colonoscopy lesions; it is composed of several methods, namely K-means clustering, a bag of features, and a deep auto-encoder. Finally, the experimental results depict the proposed method's performance in classifying colonoscopy images with respect to the accuracy criterion. Keywords: colorectal cancer, colonoscopy, region of interest, narrow band imaging, texture analysis, bag of feature
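The K-means plus bag-of-features stage of a pipeline like KCBA can be illustrated compactly. The sketch below is a generic, hedged illustration (synthetic descriptor vectors stand in for real patch descriptors from colonoscopy frames; the deterministic initialization is a convenience of this sketch): a codebook is clustered from training descriptors, and each image becomes a histogram of nearest-codeword assignments.

```python
import numpy as np

def kmeans(x, k, iters=20):
    # Minimal Lloyd's k-means used to build the visual codebook.
    # Deterministic spread-out initialization (a convenience for this sketch).
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers

def bag_of_features(desc, centers):
    # Histogram of nearest-codeword assignments: a fixed-length image feature
    # that a downstream classifier (or auto-encoder) can consume.
    labels = np.argmin(((desc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
# Stand-ins for local patch descriptors pooled from training frames:
# two well-separated descriptor populations.
train = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(5.0, 1.0, (50, 8))])
codebook = kmeans(train, k=2)
# Descriptors of one new image, drawn from the second population.
feat = bag_of_features(rng.normal(5.0, 1.0, (30, 8)), codebook)
print(feat)  # [0. 1.] -- all mass lands in the second codeword
```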
Procedia PDF Downloads 52
2708 Impact of Length of Straw by the Use of a Straw Mill on the Selective Feeding of Young Cattle and Their Effects for the Cattle
Authors: Heiko Scholz
Abstract:
When feeding high-quality silage to heifers from the age of two, there is a risk of energy oversupply. Depending on the feeding value or scarce availability of silage or corn silage, diets with high proportions of straw are often formulated. For an energetically standardized young cattle supply, the straw proportion can be more than 20% of dry matter. It was investigated whether grinding the straw with a straw mill significantly limits selective feeding. The investigation was carried out with young cattle in their second year. 78 animals were kept and fed under similar conditions in two groups. The experimental group (EG) consisted of cattle 12 to 15 months old, and in the control group (CG) the cattle were 15 to 20 months old. The experimental feeding took place over five days; distributed feed and residual feed were weighed. The ration of the EG contained straw ground with the straw mill, while the CG continued to be fed rotor-cut pressed straw. To determine selective feeding, samples of the distributed feed and the remaining feed were analysed with the particle separator box, and the crude protein and energy contents were determined. Grinding the straw increased the daily feed intake: in the EG, an increase in feed intake was observed after grinding of the straw. Feed intake on the day of the diet change from long to ground straw increased by more than 2.0 kg of DM per animal. In the following days, feed intake was increased by 0.9 kg DM per animal and day on average (7.4 vs. 8.3 kg DM per day). The results of the sieve distribution of the residual feed point to differentiated feeding behavior between the groups. In the EG, the particle length of the residual feed largely matched that of the offered ration. The acid-base balance (NSBA) values of the EG are within normal limits. If straw shares of 25% and more are fed to young cattle (heifers), the particle length of the straw has a significant impact on selective feeding behavior. 
A particle length of 1.5 cm, compared to 7.5 cm long straw, reliably prevented discarding of the straw at the feeding barn. Feed intake increases when short straw is mixed into the TMR. Keywords: straw mill, heifer, feed selection, dry matter intake
Procedia PDF Downloads 201
2707 Meta Mask Correction for Nuclei Segmentation in Histopathological Image
Authors: Jiangbo Shi, Zeyu Gao, Chen Li
Abstract:
Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep learning-based methods. However, the development of such an automated method requires a large amount of data with precisely annotated masks, which are hard to obtain. Training with weakly labeled data is a popular solution for reducing the annotation workload. In this paper, we propose a novel meta-learning-based nuclei segmentation method which follows the label-correction paradigm to leverage data with noisy masks. Specifically, we design a fully convolutional meta-model that corrects noisy masks by using a small amount of clean meta-data. The corrected masks are then used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model. Extensive experimental results on two nuclei segmentation datasets show that our method achieves state-of-the-art results. In particular, in some noise scenarios it even exceeds the performance of training on fully supervised data. Keywords: deep learning, histopathological image, meta-learning, nuclei segmentation, weak annotations
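The alternating bi-level update can be caricatured on scalars. The toy below is emphatically not the paper's architecture: the "segmentation model" is a single linear weight w, the "meta-model" a single scalar c that rescales systematically noisy targets, and the two are updated in alternation, with c fitted only on a small clean meta-set. All quantities, rates, and the noise model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y_true = 2.0 * x                       # ground-truth "masks"
y_noisy = 1.4 * y_true                 # systematic annotation noise
x_meta, y_meta = x[:10], y_true[:10]   # small, precisely annotated meta-set

w, c = 0.0, 1.0                        # main model weight, meta correction
lr_w, lr_c = 0.05, 0.01
for _ in range(300):
    # Inner step: fit the main model to the corrected labels c * y_noisy.
    grad_w = np.mean(2.0 * (w * x - c * y_noisy) * x)
    w -= lr_w * grad_w
    # Outer step: nudge the correction c so corrected labels match the
    # clean meta-targets (the label-correction objective).
    grad_c = np.mean(2.0 * (c * y_noisy[:10] - y_meta) * y_noisy[:10])
    c -= lr_c * grad_c

print(round(w, 2), round(c, 2))  # w -> 2.0, c -> 1/1.4 = 0.71
```

The real method replaces w by a segmentation network, c by a fully convolutional meta-model over masks, and the gradient steps by the bi-level scheme.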
Procedia PDF Downloads 138
2706 Gold Nanoparticle as a Colorimetric Sensor of HbA0 Glycation Products
Authors: Ranjita Ghoshmoulick, Aswathi Madhavan, Subhavna Juneja, Prasenjit Sen, Jaydeep Bhattacharya
Abstract:
Type 2 diabetes mellitus (T2DM) is a very complex and multifactorial metabolic disease in which the blood sugar level rises. One major consequence of this elevated blood sugar is the formation of AGEs (Advanced Glycation Endproducts) through a series of chemical and biochemical reactions. AGEs are detrimental because they lead to severe pathogenic complications. They are a group of structurally diverse chemical compounds formed by nonenzymatic reactions between the free amino groups (-NH2) of proteins and the carbonyl groups (>C=O) of reducing sugars. The reaction is known as the Maillard reaction. It starts with the formation of a reversible Schiff base linkage, which after some time rearranges itself to form Amadori products along with dicarbonyl compounds. Amadori products are very unstable, so rearrangement continues until stable products are formed. Over the course of the reaction, many chemically unidentified intermediates and reactive byproducts are formed that can be termed early glycation products, and when the reaction completes, structurally stable chemical compounds termed advanced glycation endproducts are formed. Though not all glycation products have been characterized, some fluorescent compounds, e.g., pentosidine, malondialdehyde (MDA), and carboxymethyllysine (CML), have been identified as AGEs, and α-dicarbonyls or oxoaldehydes such as 3-deoxyglucosone (3-DG) as intermediates. In this work, gold nanoparticles (GNPs) were used as an optical indicator of glycation products. To achieve faster glycation kinetics and high AGE accumulation, fructose was used instead of glucose. Hemoglobin A0 (HbA0) was fructosylated by an in vitro method. AGE formation was measured fluorimetrically by recording emission at 450 nm upon excitation at 350 nm. Thereafter, the fructosylated HbA0 was fractionated by column chromatography. Fractionation separated the proteinaceous substances from the AGEs. 
The presence of the protein part in the fractions was confirmed by measuring the intrinsic protein fluorescence and by the Bradford reaction. GNPs were synthesized using the templates of the chromatographically separated fractions of fructosylated HbA0. Each fraction gave rise to GNPs of a different color, indicating the presence of a distinct set of glycation products differing structurally and chemically. In some vials, clear solutions appeared due to the settling of particles. The reactive groups of the intermediates kept the GNP formation mechanism active and did not lead to stable particle formation even by day 10, whereas the SPR of the GNPs showed a uniform colour for the fractions collected from non-fructosylated HbA0. Our findings accentuate the use of GNPs as a simple colorimetric sensing platform for identifying intermediates of the glycation reaction, which could be implicated in the prognosis of the associated health risks due to T2DM and other conditions. Keywords: advance glycation endproducts, glycation, gold nano particle, sensor
Procedia PDF Downloads 302
2705 An Efficient Architecture for Dynamic Customization and Provisioning of Virtual Appliance in Cloud Environment
Authors: Rajendar Kandan, Mohammad Zakaria Alli, Hong Ong
Abstract:
Cloud computing is a business model which provides easier management of computing resources. Cloud users can request a virtual machine, install additional software, and configure it as needed. However, a user can also request a virtual appliance, which offers a better solution for deploying an application in much less time, as it is a ready-built operating system image with the necessary software installed and configured. Large numbers of virtual appliances are available in different image formats. Users can download available appliances from a public marketplace and start using them. However, the information published about a virtual appliance differs between providers, leading to difficulty in choosing the required virtual appliance, as each is composed of a specific OS with standard software versions. Moreover, even if the user chooses an appliance from a given provider, the user has no flexibility to choose their own set of software with the required OS and application. In this paper, we propose a reference architecture for dynamically customizing virtual appliances and provisioning them in an easier manner. We also report our experience in integrating the proposed architecture with a public marketplace and Mi-Cloud, a cloud management software. Keywords: cloud computing, marketplace, virtualization, virtual appliance
Procedia PDF Downloads 293
2704 Characterization of Fine Particles Emitted by the Inland and Maritime Shipping
Authors: Malika Souada, Juanita Rausch, Benjamin Guinot, Christine Bugajny
Abstract:
The growth of global commerce and tourism makes the shipping sector an important contributor to atmospheric pollution. Both airborne particles and gaseous pollutants have a negative impact on health and climate. This is especially the case in port cities, due to the proximity of the exposed population to the shipping emissions, in addition to the multiple other sources of pollution linked to the surrounding urban activity. The objective of this study is to determine the concentrations of fine particles (immission), specifically PM2.5, PM1, PM0.3, BC, and sulphates, in a context where maritime passenger traffic plays an important role (port area of Bordeaux centre). The methodology is based on high-temporal-resolution measurements of pollutants, correlated with meteorological and ship movement data. Particles and gaseous pollutants from seven maritime passenger ships were sampled and analysed during the docking, manoeuvring, and berthing phases. The particle mass measurements were supplemented by measurements of the number concentration of ultrafine particles (<300 nm diameter). The measurement points were chosen by taking into account the local meteorological conditions and by pre-modelling the dispersion of the smoke plumes. The results of the measurement campaign carried out during the summer of 2021 in the port of Bordeaux show that the particle concentrations emitted by ships were detected only occasionally and fleetingly: short-lived peaks of ultrafine particle number concentration (P#/m3) and BC (ng/m3) were measured during the docking phases of the ships, but the concentrations returned to their background level within minutes. However, it appears that the docking phases do not significantly affect the air quality of Bordeaux centre in terms of mass concentration. Additionally, no clear differences in PM2.5 concentrations between the periods with and without ships at berth were observed. 
The urban background pollution seems to be dominated mainly by exhaust and non-exhaust road traffic emissions. However, high-temporal-resolution measurements suggest a probable emission of gaseous precursors, related to the ship activities, responsible for the formation of secondary aerosols. This was evidenced by the high values of the PM1/BC and PN/BC ratios, tracers of non-primary particle formation, during periods of ship berthing versus periods without ships at berth. The findings from this study provide robust support for port-area air quality assessment and source apportionment. Keywords: characterization, fine particulate matter, harbour air quality, shipping impacts
Procedia PDF Downloads 104
2703 Integrated Design of Froth Flotation Process in Sludge Oil Recovery Using Cavitation Nanobubbles for Increase the Efficiency and High Viscose Compatibility
Authors: Yolla Miranda, Marini Altyra, Karina Kalmapuspita Imas
Abstract:
Oily sludge wastes accumulate throughout upstream and downstream petroleum industry processes. Sludge still contains oil that can be used for energy. Recycling is a way of handling sludge that reduces its toxicity, and recovery of the remaining oil, around 20% of the sludge volume, is very feasible. Froth flotation is a common chemical-unit method for separating fine solid particles from an aqueous suspension. The basis of froth flotation is the capture of oil droplets or small solids by air bubbles in an aqueous slurry, followed by their levitation and collection in a froth layer. The method is known for requiring little energy and being easy to apply, but its low efficiency and inability to treat high-viscosity feeds are the biggest problems of a froth flotation unit. This study presents a design that first manages the high viscosity of the sludge and then sends it to the froth flotation unit, which includes a cavitation tube that turns the bubbles into nano-scale bubbles. Recovery in flotation starts with the collision and adhesion of hydrophobic particles to the air bubbles, followed by transportation of the hydrophobic particle-bubble aggregate from the collection zone to the froth zone, drainage and enrichment of the froth, and finally its overflow removal from the top of the cell. Effective particle separation by froth flotation relies on the efficient capture of hydrophobic particles by air bubbles in three steps, of which the most important is collision: decreasing the bubble size increases the collision effect, which makes the process more efficient. The pre-treatment, froth flotation, and cavitation tube are integrated with each other, and the design shows the integrated unit and its process. Keywords: sludge oil recovery, froth flotation, cavitation tube, nanobubbles, high viscosity
Procedia PDF Downloads 377
2702 Classifications of Images for the Recognition of People’s Behaviors by SIFT and SVM
Authors: Henni Sid Ahmed, Belbachir Mohamed Faouzi, Jean Caelen
Abstract:
Behavior recognition has been studied for realizing driver-assistance systems and automated navigation, and it is an important field of study for the intelligent building. In this paper, a method for recognizing behavior from real images was studied. Images were divided into several categories according to the actual weather, distance, angle of view, etc. SIFT (Scale Invariant Feature Transform) was first used to detect key points and describe them, because SIFT features are invariant to image scale and rotation and robust to changes in viewpoint and illumination. My goal is to develop a robust and reliable system composed of two fixed cameras in every room of the intelligent building, connected to a computer for the acquisition of video sequences. A program takes these video sequences as inputs: SIFT is used to represent the different images of the video sequences, and SVM Light (a support vector machine implementation) is used as a programming tool for the classification of the images, in order to classify people's behaviors in the intelligent building and so provide maximum comfort with optimized energy consumption. Keywords: video analysis, people behavior, intelligent building, classification
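The classification stage of a SIFT-plus-SVM pipeline can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss. This is a hedged stand-in, not SVM Light itself, and the synthetic 16-dimensional vectors below stand in for per-image SIFT-derived features of two behavior classes:

```python
import numpy as np

def train_linear_svm(x, y, lam=0.01, lr=0.1, epochs=200):
    # Hinge-loss linear SVM trained by sub-gradient descent; labels in {-1, +1}.
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        viol = y * (x @ w + b) < 1.0                 # margin violations
        w -= lr * (lam * w - (y[viol, None] * x[viol]).sum(0) / len(x))
        b -= lr * (-y[viol].sum() / len(x))
    return w, b

def predict(x, w, b):
    return np.where(x @ w + b >= 0.0, 1, -1)

rng = np.random.default_rng(0)
# Two synthetic behavior classes, well separated in feature space.
pos = rng.normal(3.0, 1.0, (40, 16))
neg = rng.normal(0.0, 1.0, (40, 16))
x = np.vstack([pos, neg])
y = np.concatenate([np.ones(40), -np.ones(40)])
w, b = train_linear_svm(x, y)
print((predict(x, w, b) == y).mean())  # training accuracy, near 1.0
```

In the full system each image would first be converted to a fixed-length vector from its SIFT descriptors before being fed to the classifier.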
Procedia PDF Downloads 377
2701 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The under-use of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a 20% validation split. Both models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. 
The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy, and can derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used. Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
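The IOU figure reported for the segmentation model is intersection over union between the predicted and reference lung masks. A minimal, generic implementation (not the authors' evaluation code; the tiny example masks are assumptions):

```python
import numpy as np

def iou(pred, target):
    # Intersection over union between two binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

a = np.zeros((4, 4), int); a[:2] = 1    # predicted mask: top two rows
b = np.zeros((4, 4), int); b[1:3] = 1   # reference mask: middle two rows
print(iou(a, b))  # 4 / 12 = 0.333...
```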
Procedia PDF Downloads 72
2700 Lobbyists’ Competencies as a Basis for Shaping the Positive Image of Modern Lobbying
Authors: Joanna Dzieńdziora
Abstract:
Lobbying is an instrument of influence in various decision-making processes. It is also an underestimated research problem, and the lack of research on modern lobbyists' competencies is the most crucial gap. The paper attempts to answer the following questions: Who should carry out lobbying activity? What competencies should a lobbyist possess in order to implement lobbying activities effectively? Searching for answers to these questions requires locating the opportunity to change the image of lobbying in the competencies of the entities that provide lobbying activities. The aim of the paper is to present a lobbyist competency profile within the framework of the lobbyist's professional role. The paper presents the essence of lobbying activity and its significance in the modern economy, the areas and scope of lobbying activities, a diagnosis of a modern lobbyist's competencies, and a lobbyist competency profile focused on the professionalization of lobbying activity. The research tasks indicated allow the lobbyist's competencies to emerge in a way that makes it possible to identify and elaborate the lobbyist competency profile, which in turn allows lobbying activities to be improved; its elaboration is based on an analysis of the author's research results. Taking into consideration the shortages within the theory of and research on lobbying activity, this research fills a cognitive gap existing in the theory of management sciences. Keywords: competencies, competencies profile, lobbying, lobbyist
Procedia PDF Downloads 152