Search results for: Drosophila driver image

1869 Investigating Kinetics and Mathematical Modeling of Batch Clarification Process for Non-Centrifugal Sugar Production

Authors: Divya Vats, Sanjay Mahajani

Abstract:

The clarification of sugarcane juice plays a pivotal role in the production of non-centrifugal sugar (NCS), profoundly influencing the quality of the final NCS product. In this study, we have investigated the kinetics and mathematical modeling of the batch clarification process. The turbidity of the clarified cane juice (measured in NTU) is the principal determinant of the end product’s color; this parameter, together with other process variables, serves as a performance indicator for assessing the efficacy of the clarification process. Temperature-controlled experiments were meticulously conducted in a laboratory-scale batch mode. The primary objective was to discern the essential and optimized parameters crucial for augmenting the clarity of cane juice. Additionally, we explored the impact of pH and flocculant loading on the kinetics. Particle Image Velocimetry (PIV) was employed to characterize the particle-particle and fluid-particle interactions. This technique facilitated a comprehensive understanding, paving the way for subsequent multiphase computational fluid dynamics (CFD) simulations using the Eulerian-Lagrangian approach in Ansys Fluent. These simulations accurately replicated comparable velocity profiles. The mechanism identified in this study underpins a mathematical model and presents a valuable framework for transitioning from the traditional batch process to a continuous process, with the ultimate aim of attaining heightened productivity and consistent product quality.
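
As a rough illustration of the batch-clarification kinetics discussed above, the sketch below fits a hypothetical first-order turbidity-decay model to time-series NTU readings; the model form, variable names and data values are illustrative assumptions rather than the authors' actual model.

    # Illustrative sketch: fitting a first-order turbidity-decay model
    # NTU(t) = NTU_inf + (NTU_0 - NTU_inf) * exp(-k t)
    # All data values below are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def turbidity_model(t, ntu_inf, ntu_0, k):
        """First-order decay of turbidity toward a residual value."""
        return ntu_inf + (ntu_0 - ntu_inf) * np.exp(-k * t)

    t_min = np.array([0, 5, 10, 20, 30, 45, 60], dtype=float)      # time, minutes
    ntu = np.array([850, 520, 340, 170, 110, 75, 60], dtype=float) # measured turbidity, NTU

    popt, _ = curve_fit(turbidity_model, t_min, ntu, p0=(50, 900, 0.1))
    ntu_inf, ntu_0, k = popt
    print(f"residual turbidity = {ntu_inf:.1f} NTU, rate constant k = {k:.3f} 1/min")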

Keywords: non-centrifugal sugar, particle image velocimetry, computational fluid dynamics, mathematical modeling, turbidity

Procedia PDF Downloads 71
1868 Substitutional Inference in Poetry: Word Choice Substitutions Craft Multiple Meanings by Inference

Authors: J. Marie Hicks

Abstract:

The art of the poetic conjoins meaning and symbolism with imagery and rhythm. Perhaps the reader might read this opening sentence as 'The art of the poetic combines meaning and symbolism with imagery and rhythm,' which holds a similar message, but is not quite the same. The reader understands that these factors are combined in this literary form, but to gain a sense of the conjoining of these factors, the reader is forced to consider that these aspects of poetry are not simply combined, but actually adjoin, abut, skirt, or touch in the poetic form. This alternative word choice is an example of substitutional inference. Poetry is, ostensibly, a literary form where language is used precisely or creatively to evoke specific images or emotions for the reader. Often, the reader can predict a coming rhyme or descriptive word choice in a poem, based on previous rhyming pattern or earlier imagery in the poem. However, there are instances when the poet uses an unexpected word choice to create multiple meanings and connections. In these cases, the reader is presented with an unusual phrase or image, requiring that they think about what that image is meant to suggest, and their mind also suggests the word they expected, creating a second, overlying image or meaning. This is what is meant by the term 'substitutional inference.' This is different than simply using a double entendre, a word or phrase that has two meanings, often one complementary and the other disparaging, or one that is innocuous and the other suggestive. In substitutional inference, the poet utilizes an unanticipated word that is either visually or phonetically similar to the expected word, provoking the reader to work to understand the poetic phrase as written, while unconsciously incorporating the meaning of the line as anticipated. In other words, by virtue of a word substitution, an inference of the logical word choice is imparted to the reader, while they are seeking to rationalize the word that was actually used. There is a substitutional inference of meaning created by the alternate word choice. For example, Louise Bogan, 4th Poet Laureate of the United States, used substitutional inference in the form of homonyms, malapropisms, and other unusual word choices in a number of her poems, lending depth and greater complexity, while actively engaging her readers intellectually with her poetry. Substitutional inference not only adds complexity to the potential interpretations of Bogan’s poetry, as well as the poetry of others, but provided a method for writers to infuse additional meanings into their work, thus expressing more information in a compact format. Additionally, this nuancing enriches the poetic experience for the reader, who can enjoy the poem superficially as written, or on a deeper level exploring gradations of meaning.

Keywords: poetic inference, poetic word play, substitutional inference, word substitution

Procedia PDF Downloads 238
1867 Numerical Investigation of the Effect of Blast Pressure on Discrete Model in Shock Tube

Authors: Aldin Justin Sundararaj, Austin Lord Tennyson, Divya Jose, A. N. Subash

Abstract:

Blast waves are generated by the explosion of high-energy materials. An explosion yielding a blast wave has the potential to cause severe damage to buildings and their occupants. In order to understand the physics of the effects of blast pressure on buildings, shock tube studies on generic configurations are carried out at various pressures on discrete models. The strength of the shock wave is systematically varied by using different driver gases and diaphragm thicknesses; the diaphragm material is aluminum. Generic models selected for this study are suitably scaled cylinder, cone and cubical blocks. The experiments were carried out with a 2 mm diaphragm at burst pressures ranging from 28 to 31 bar. Numerical analysis was carried out on these discrete models: a 3D model of the shock tube with the different discrete models inside was used for CFD computation. It was found that the cone dissipated most of the shock pressure compared to the cylinder and cubical block. The robustness and accuracy of the numerical model were validated against analytical and experimental data.

Keywords: shock wave, blast wave, discrete models, shock tube

Procedia PDF Downloads 331
1866 Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques

Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri

Abstract:

Digitization of urban features is a tedious and time-consuming process when done manually. In addition, Indian cities have complex habitat and clustering patterns, which make it even more difficult to map features. This paper attempts to classify urban objects in satellite imagery using object-oriented classification techniques, in which classes such as vegetation, water bodies, buildings, and the shadows adjacent to buildings were mapped semi-automatically. The building layer obtained from object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology along with logical reasoning and mathematical considerations. The results clearly depict the ability of Remote Sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features in an urban area like high-rise buildings and multi-dwelling units. The object-oriented technique proved effective, yielding an overall efficiency of 80 percent in the classification of high-rise buildings.
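
The high-rise extraction step relies on shadow information; a minimal sketch of the underlying geometry is given below, assuming flat terrain and a known solar elevation angle. The numbers and the simple tangent relation are illustrative assumptions, not values or formulas taken from the study.

    # Illustrative sketch: estimating building height from its shadow length,
    # assuming flat terrain and a known solar elevation angle at image acquisition.
    # The numbers below are hypothetical placeholders, not values from the study.
    import math

    def building_height(shadow_length_m: float, sun_elevation_deg: float) -> float:
        """Height = shadow length * tan(solar elevation)."""
        return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

    shadow_m = 22.0       # shadow length measured from the classified shadow polygon
    sun_elev_deg = 54.3   # solar elevation from the image metadata
    print(f"estimated height: {building_height(shadow_m, sun_elev_deg):.1f} m")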

Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology

Procedia PDF Downloads 155
1865 From Parents to Pioneers: Examining Parental Impact on Entrepreneurial Traits in Latin America

Authors: Bert Seither

Abstract:

Entrepreneurship is a critical driver of economic growth, especially in emerging regions such as Latin America. This study investigates how parental influences, particularly parental individual entrepreneurial orientation (IEO), shape the entrepreneurial traits of Latin American entrepreneurs. By examining key factors like parental IEO, work ethic, parenting style, and family support, this research assesses how much of an entrepreneur's own IEO can be attributed to parental influence. The study also explores how socio-economic status and cultural context moderate the relationship between parental traits and entrepreneurial orientation. Data will be collected from 600 Latin American entrepreneurs via an online survey. This research aims to provide a comprehensive understanding of the intergenerational transmission of entrepreneurial traits and the broader socio-cultural factors that contribute to entrepreneurial success in diverse contexts. Findings from this study will offer valuable insights for policymakers, educators, and business leaders on fostering entrepreneurship across Latin America, providing practical applications for shaping entrepreneurial behavior through family influences.

Keywords: entrepreneurial orientation, parental influence, Latin American entrepreneurs, socio-economic status, cultural context

Procedia PDF Downloads 18
1864 Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit

Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana

Abstract:

Internet of Things (IoT) devices and edge computing have become some of the most discussed innovations for their potential to improve and disrupt traditional business and industry alike. New challenges such as the COVID-19 pandemic have endangered workforces and business processes. Together with the drastically changed business landscape left in the aftermath of the pandemic, the looming threats of a global energy crisis, global warming, and increasingly heated global politics that risk becoming a new Cold War, emerging technologies such as edge computing and specially designed visual processing units present great opportunities for business. The literature reviewed covers how the Internet of Things and this disruptive wave affect current business and how businesses need to adapt to changes in the market and the world. Benchmark tests of consumer-market devices, namely Internet of Things devices equipped with edge computing hardware, illustrate how they can increase efficiency and reduce the risks posed by current and looming crises. Throughout the paper, we explain the technologies leading this shift and why they will change traditional practice, through brief introductions to cloud computing, edge computing, and the Internet of Things, and how they will lead into the future.

Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification

Procedia PDF Downloads 155
1863 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period, assuming there is no considerable change during that period, and then compares it with multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect changed pixels from the probabilistic model estimated for the corresponding pixel. Changed pixels are detected assuming that the images have been co-registered prior to estimation; to minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost challenge is to obtain a sufficiently large number of datasets for multivariate distribution modelling, as a large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will be a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
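
A minimal sketch of the pixel-wise testing idea is shown below. It fits a per-pixel multivariate Gaussian to the reference time series and computes a Mahalanobis-type statistic for the later image; this simplified statistic stands in for the full GLRT of the paper, and the 8-neighborhood handling and threshold are omitted or hypothetical.

    # Simplified sketch of pixel-wise change detection against a model estimated
    # from a multitemporal stack. A Mahalanobis-type statistic under a Gaussian
    # assumption stands in for the full GLRT described in the abstract; the
    # 8-neighborhood handling and the threshold are omitted / hypothetical.
    import numpy as np

    def change_statistic(stack, test_image):
        """stack: (T, H, W, B) reference time series; test_image: (H, W, B)."""
        mean = stack.mean(axis=0)                           # per-pixel mean spectrum
        diff0 = stack - mean                                # (T, H, W, B)
        T, H, W, B = stack.shape
        # per-pixel covariance (B x B), estimated over time
        cov = np.einsum('thwi,thwj->hwij', diff0, diff0) / (T - 1)
        cov += 1e-6 * np.eye(B)                             # regularization
        d = test_image - mean                               # (H, W, B)
        inv = np.linalg.inv(cov)                            # (H, W, B, B)
        stat = np.einsum('hwi,hwij,hwj->hw', d, inv, d)     # squared Mahalanobis distance
        return stat

    stack = np.random.rand(12, 64, 64, 4)   # e.g., 12 acquisition dates, 4 bands (synthetic)
    test = np.random.rand(64, 64, 4)
    changed = change_statistic(stack, test) > 20.0          # hypothetical threshold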

Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection

Procedia PDF Downloads 135
1862 Determination of ILSS of Composite Materials Using Micromechanical FEA Analysis

Authors: K. Rana, H.A.Saeed, S. Zahir

Abstract:

Interlaminar shear stress (ILSS) is a key parameter that quantifies the properties of composite materials. These properties determine whether a material is suitable for a specific application, such as aerospace or automotive use. A modelling approach for the determination of ILSS is presented in this paper. Geometric modelling of the composite material is performed in TEXGEN software, where the reinforcement, cured matrix and their interfaces are modelled separately as per the actual geometry. The mechanical properties of the matrix and reinforcements are modelled separately, which incorporates the anisotropy of the real-world composite material. The ASTM D2344 short-beam test is modelled in ANSYS for ILSS. In macroscopic analysis, the model approximates the anisotropy of the material and uses orthotropic properties obtained by applying homogenization techniques; shear stress analysis in that case does not reflect the actual real-world scenario but rather approximates it. In this paper, the actual geometry and properties of the reinforcement and matrix are modelled to capture the actual stress state during the testing of samples as per ASTM standards. Testing of samples is also performed in order to validate the results. The fibre volume fraction of the yarn is determined by image analysis of the manufactured samples, and this data is incorporated into the numerical model to correct the transversely isotropic properties of the yarn. A comparison between experimental and simulated results is presented.
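
For reference, the short-beam strength reported by an ASTM D2344 test (a common ILSS measure) is computed as F_sbs = 0.75 * P_max / (b * h), where P_max is the maximum load and b and h are the specimen width and thickness. The sketch below applies this formula with placeholder specimen values.

    # Minimal sketch: short-beam strength (ILSS) per the ASTM D2344 formula
    # F_sbs = 0.75 * P_max / (b * h). Specimen values below are placeholders.
    def short_beam_strength(p_max_n: float, width_mm: float, thickness_mm: float) -> float:
        """Returns the short-beam strength in MPa (N/mm^2)."""
        return 0.75 * p_max_n / (width_mm * thickness_mm)

    p_max = 1850.0     # maximum load observed in the test, N
    b, h = 10.0, 5.0   # specimen width and thickness, mm
    print(f"ILSS (short-beam strength) = {short_beam_strength(p_max, b, h):.1f} MPa")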

Keywords: ILSS, FEA, micromechanical, fibre volume fraction, image analysis

Procedia PDF Downloads 373
1861 Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network

Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, Łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa Łucja Stępień, Faranak Tayefi, Paweł Moskal

Abstract:

This paper develops the transformation of non-image data into 2-dimensional matrices as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, a CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body, as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format, and this fact opens the question of whether it can be used for training CNNs. The main focus of this contribution is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.

Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography

Procedia PDF Downloads 144
1860 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding facial features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the CelebA training database, further novel test cases are supplied to the network for evaluation: witness reports detailing criminals, from Interpol or other law enforcement agencies, are run through the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of the criminals in order to calculate their similarities. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information.
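
A minimal sketch of the two evaluation metrics named above (SSIM and PSNR), computed here with scikit-image on synthetic grayscale arrays standing in for a generated composite and its ground truth:

    # Sketch of the evaluation metrics: SSIM and PSNR via scikit-image.
    # The arrays below are synthetic placeholders, not data from the study.
    import numpy as np
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    rng = np.random.default_rng(0)
    ground_truth = rng.random((128, 128))   # stand-in for the true face image
    generated = np.clip(ground_truth + 0.05 * rng.standard_normal((128, 128)), 0, 1)

    ssim_value = structural_similarity(ground_truth, generated, data_range=1.0)
    psnr_value = peak_signal_noise_ratio(ground_truth, generated, data_range=1.0)
    print(f"SSIM = {ssim_value:.3f}, PSNR = {psnr_value:.2f} dB")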

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 162
1859 The Influence of Culture on Manifestations of Animus

Authors: Anahit Khananyan

Abstract:

The results of long-term Jungian analysis with female clients from Eastern and Asian countries, which belong to collectivist cultures, are summarised in this article. The goal of the paper is to describe the cultural complex that the author found in the analysis of women of collectivist culture, which she has named 'the repression of the Animus'. Generally, C. G. Jung himself and the post-Jungians studied conditions caused by possession by the Animus; conditions and cases of the repressed Animus, depending on the type of culture and cultural complexes, have, as far as we know, not been widely described. C. G. Jung discovered and recognized the Animus as the second component of a pair of opposites in the psyche of women: femininity and Animus. On the way of individuation, an awareness of the manifestations of the Animus plays an important role: understanding the differences between the negative and positive Animus, as well as between the Animus and the Shadow; withstanding the tension of the presence of the pair of opposites, femininity and Animus; accepting that tension; finding the balance between them; and reconciling these opposites. All of the above are steps towards the realization of the Animus, its release, and the healing of the psyche. In the paper, the author shares her experience of analyzing women of different collectivist cultures and of recognizing the repressed Animus during analysis. She also describes some peculiarities of upbringing and cultural traditions that reflect the cultural complex of the repression of the Animus. This complex is manifested in the traditions of girls' upbringing, according to which an image of a woman with overly developed femininity and an absent or underdeveloped Animus is idealized and encouraged, as is an evaluative attitude towards females, who have to correspond to this image and fulfil the role prescribed in this way in the family and society.

Keywords: analysis, cultural complex, animus, manifestation, culture

Procedia PDF Downloads 84
1858 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study

Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb

Abstract:

The purpose of this study is to reduce the radiation dose for chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor; the results were compared to those obtained with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom’s effective diameter. Objective assessment of image quality was performed with Signal-to-Noise Ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. Results showed that with CARE Dose 4D, ED was lowered by 48.35% and 51.51% as estimated from DLP and CT-Expo, respectively. In addition, ED ranged between 7.01 mSv and 6.6 mSv for the standard protocol, and between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose was 16.25 mGy without TCM and 48.8% lower with TCM. The SNR values calculated were significantly different (p=0.03<0.05); the highest was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study proves the potential of the TCM technique for SSDE and ED reduction while conserving image quality at a high diagnostic reference level for thoracic CT examinations.
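
The two dose estimates described above reduce to simple multiplications; the sketch below applies ED = DLP x k and SSDE = CTDIvol x f with placeholder values (the chest conversion factor k = 0.014 mSv/(mGy*cm) is a commonly cited adult value, and the size factor is assumed, not taken from the study).

    # Sketch of the two dose estimates used in the abstract:
    #   ED   = DLP * k        (k: region-specific conversion factor, mSv / (mGy*cm))
    #   SSDE = CTDIvol * f    (f: size conversion factor from the effective diameter)
    # The numeric values below are placeholders, not the study's measurements.
    def effective_dose(dlp_mgy_cm: float, k_factor: float) -> float:
        return dlp_mgy_cm * k_factor

    def ssde(ctdi_vol_mgy: float, size_factor: float) -> float:
        return ctdi_vol_mgy * size_factor

    dlp = 480.0        # dose-length product, mGy*cm
    k_chest = 0.014    # commonly cited adult chest conversion factor, mSv/(mGy*cm)
    ctdi_vol = 11.2    # volume CT dose index, mGy
    f_size = 1.45      # assumed size factor for the phantom's effective diameter

    print(f"ED   = {effective_dose(dlp, k_chest):.2f} mSv")
    print(f"SSDE = {ssde(ctdi_vol, f_size):.2f} mGy")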

Keywords: anthropomorphic phantom, computed tomography, CT-expo, radiation dose

Procedia PDF Downloads 221
1857 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured across a wide range of the electromagnetic spectrum for each pixel in the image; hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Semantic interpretation is therefore a challenging task in HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. However, HSI classification still faces several issues, among which are the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. In order to resolve this problem, we introduce a dimensionality reduction process to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships with different weights for the spatial neighbors corresponding to the centroid pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The presented approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.

Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph

Procedia PDF Downloads 306
1856 Soil Salinity from Wastewater Irrigation in Urban Greenery

Authors: H. Nouri, S. Chavoshi Borujeni, S. Anderson, S. Beecham, P. Sutton

Abstract:

The potential risk of salt leaching through wastewater irrigation is of concern to most local governments and city councils. Despite the necessity of salinity monitoring and management in urban greenery, most attention has been paid to agricultural fields. This study was designed to investigate the capability and feasibility of monitoring and predicting soil salinity using near-sensing and remote-sensing approaches, based on EM38 surveys and a high-resolution multispectral WorldView-3 image. Veale Gardens within the Adelaide Parklands was selected as the experimental site. The results of the near-sensing investigation were validated by testing soil salinity samples in the laboratory. Over 30 band combinations forming salinity indices were tested using image processing techniques. The outcomes of the remote-sensing and near-sensing approaches were compared to examine whether remotely sensed salinity indicators could map and predict the spatial variation of soil salinity through a potential statistical model. Statistical analysis was undertaken using the Stata 13 statistical package on over 52,000 points. Several regression models were fitted to the data, and mixed-effects modelling was selected as the most appropriate because it takes into account systematic observation-specific unobserved heterogeneity. Results showed that SAVI (Soil Adjusted Vegetation Index) was the only salinity index that could be considered a predictor of soil salinity, although further investigation is needed. Near sensing, however, was found to be a rapid, practical and realistically accurate approach for salinity mapping of heterogeneous urban vegetation.
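
Since SAVI emerged as the candidate predictor, a minimal sketch of its computation is given below, using the common formulation SAVI = (NIR - Red) / (NIR + Red + L) * (1 + L) with L = 0.5; the band arrays are synthetic stand-ins for calibrated WorldView-3 red and near-infrared bands.

    # Sketch of the Soil Adjusted Vegetation Index (SAVI):
    #   SAVI = (NIR - Red) / (NIR + Red + L) * (1 + L), with L commonly set to 0.5.
    # Band arrays here are synthetic placeholders for calibrated reflectance.
    import numpy as np

    def savi(nir: np.ndarray, red: np.ndarray, L: float = 0.5) -> np.ndarray:
        return (nir - red) / (nir + red + L) * (1.0 + L)

    nir = np.random.rand(100, 100)   # synthetic near-infrared reflectance, 0-1
    red = np.random.rand(100, 100)   # synthetic red reflectance, 0-1
    savi_map = savi(nir, red)
    print(savi_map.shape, savi_map.mean())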

Keywords: WorldView3, remote sensing, EM38, near sensing, urban green spaces, green smart cities

Procedia PDF Downloads 162
1855 Automotive Emotions: An Investigation of Their Natures, Frequencies of Occurrence and Causes

Authors: Marlene Weber, Joseph Giacomin, Alessio Malizia, Lee Skrypchuk, Voula Gkatzidou

Abstract:

Technological and sociological developments in the automotive sector are shifting the focus of design towards developing a better understanding of driver needs, desires and emotions. Human-centred design methods are being applied more frequently to automotive research, including the use of systems to detect human emotions in real time. One method for non-contact measurement of emotion with low intrusiveness is Facial-Expression Analysis (FEA). This paper describes a research study investigating the emotional responses of 22 participants in a naturalistic driving environment by applying a multi-method approach. The research explored the possibility of investigating emotional responses and their frequencies during naturalistic driving through real-time FEA. Observational analysis was conducted to assign causes to the collected emotional responses. In total, 730 emotional responses were measured in the collective study time of 440 minutes, and causes were assigned to 92% of them. This research establishes and validates a methodology for the study of emotions and their causes in the driving environment, through which systems and factors causing positive and negative emotional effects can be identified.

Keywords: affective computing, case study, emotion recognition, human computer interaction

Procedia PDF Downloads 203
1854 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is a widely used imaging modality for diagnosing breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been undertaken to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment; however, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist’s workload by providing a preliminary opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. A pre-processing step is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of images; it uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by an active contour method whose seed was placed at the boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods in terms of the area (mm²) of the segmented pectoral muscle and showed data within the 95% confidence interval, reinforcing the agreement with the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
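
A minimal sketch of the two comparison measures used above (the Jaccard index between masks and Bland-Altman limits of agreement for segmented areas) follows; the masks and area values are synthetic placeholders.

    # Sketch of the comparison measures: Jaccard index between automatic and manual
    # masks, and Bland-Altman bias and 95% limits of agreement for segmented areas.
    # Masks and areas below are synthetic placeholders.
    import numpy as np

    def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0

    def bland_altman_limits(x: np.ndarray, y: np.ndarray):
        diff = x - y
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd   # bias and 95% limits

    auto = np.random.rand(256, 256) > 0.5     # placeholder automatic mask
    manual = np.random.rand(256, 256) > 0.5   # placeholder manual mask
    print(f"Jaccard index: {jaccard(auto, manual):.2f}")

    areas_auto = np.array([4120.0, 3980.5, 4500.2])    # mm^2, placeholders
    areas_manual = np.array([4100.3, 4010.8, 4488.0])
    print("Bland-Altman (bias, lower, upper):", bland_altman_limits(areas_auto, areas_manual))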

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 350
1853 Design and Implement a Remote Control Robot Controlled by Zigbee Wireless Network

Authors: Sinan Alsaadi, Mustafa Merdan

Abstract:

Communication and access systems can be built with many methods in today’s world, using standards such as Wi-Fi, WiMAX, Bluetooth, GPS and GPRS. Devices that use these standards consume system resources in direct proportion to their transmission speed. However, large-scale data communication is not always needed. In such cases, a technology that uses as few system resources as possible and supports smart network topologies is needed to transmit small data packets and control such devices. IEEE issued the 802.15.4 standard to meet this need, enabling the ZigBee protocol, which builds on this standard, and the devices that support it. In our project, this communication protocol was preferred. The aim of this study is to provide immediate data transmission from our robot in the field and to communicate with the robot through the ZigBee protocol, so that the desired data can be obtained from the robot's location while sitting at a computer. An Arduino Uno R3 microcontroller provides the control mechanism, with an L298 shield as the motor driver.

Keywords: ZigBee, wireless network, remote monitoring, smart home, agricultural industry

Procedia PDF Downloads 278
1852 American Slang: Perception and Connotations – Issues of Translation

Authors: Lison Carlier

Abstract:

The English language that is taught in school or used in media nowadays is defined as 'standard English,' although unstandardized Englishes, or 'parallel' Englishes, are practiced throughout the world. The existence of these 'parallel' Englishes has challenged standardization by imposing its own specific vocabulary or grammar. These non-standard languages tend to be regarded as inferior and, therefore, pose a problem regarding their translation. In the USA, 'slanguage', or slang, is a good example of a 'parallel' language. It consists of a particular set of vocabulary, used mostly in speech, and rarely in writing. Qualified as vulgar, often reduced to an urban language spoken by young people from lower classes, slanguage – or the language that is often first spoken between youths – is still the most common language used in the English-speaking world. Moreover, it appears that the prime meaning of 'informal' (as in an informal language) – a language that is spoken with persons the speaker knows – has been put aside and replaced in the general mind by the idea of vulgarity and non-appropriateness, when in fact informality is a sign of intimacy, not of vulgarity. When it comes to translating American slang, the main problem a translator encounters is the image and the cultural background usually associated with this 'parallel' language. Indeed, one will have, unwillingly, a predisposition to categorize a speaker of a 'parallel' language as being part of a particular group of people. The way one sees a speaker using it is paramount, and needs to be transposed into the target language. This paper will conduct an analysis of American slang – its use, perception and the image it gives of its speakers – and its translation into French, using the novel Is Everyone Hanging Out Without Me? (and other concerns) by way of example. In her autobiography/personal essay book, comedy writer, actress and author Mindy Kaling speaks with a very familiar English, including slang, which participates in the construction of her own voice and style, and enables a deeper connection with her readers.

Keywords: translation, English, slang, French

Procedia PDF Downloads 318
1851 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been made in this area, and algorithms have been developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria; the most important are the accuracy of skew angle detection, the range of detectable skew angles, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough transform algorithm depends largely on how fine the angular step size is; finer steps consume more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough transform algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts with a first angle estimation accurate to zero decimal places using the standard Hough transform, which achieves minimal running time and space but limited accuracy. Then, to increase accuracy, if the angle estimated by the basic Hough algorithm is x degrees, we run the basic algorithm again over the range between ±x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction algorithm for text images is implemented in MATLAB. The memory space and processing time are also tabulated, with the skew angle assumed to lie between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
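
A minimal sketch of the coarse-to-fine idea is given below: a first Hough pass with a 1-degree step yields an integer estimate, and a second pass searches a narrow band around that estimate with a 0.1-degree step. Peak selection and the mapping from line angle to document skew are simplified assumptions, not the authors' exact implementation.

    # Minimal sketch of coarse-to-fine Hough-based skew estimation: a coarse pass
    # with a 1-degree step, then a fine pass around the coarse estimate with a
    # 0.1-degree step. Peak selection is simplified here.
    import numpy as np
    from skimage.transform import hough_line

    def dominant_angle(binary_edges: np.ndarray, angles_deg: np.ndarray) -> float:
        h, theta, _ = hough_line(binary_edges, theta=np.deg2rad(angles_deg))
        # pick the angle whose accumulator column contains the strongest peak
        return angles_deg[np.argmax(h.max(axis=0))]

    def estimate_skew(binary_edges: np.ndarray) -> float:
        coarse = dominant_angle(binary_edges, np.arange(-45.0, 45.0 + 1.0, 1.0))
        fine = dominant_angle(binary_edges,
                              np.arange(coarse - 1.0, coarse + 1.0 + 0.1, 0.1))
        return fine

    edges = np.zeros((200, 200), dtype=bool)   # placeholder binarized text edges
    edges[60:140, 50] = True                   # a synthetic near-vertical stroke
    print("estimated skew angle:", estimate_skew(edges), "degrees")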

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 159
1850 Decoupling PM₂.₅ Emissions and Economic Growth in China over 1998-2016: A Regional Investment Perspective

Authors: Xi Zhang, Yong Geng

Abstract:

It is crucial to decouple economic growth from environmental pollution in China. This study aims to evaluate the degree of decoupling between PM₂.₅ emissions and economic growth in China from a regional investment perspective. Using panel data for 30 Chinese provinces for the period 1998-2016, this study combines decomposition analysis with decoupling analysis to identify the roles of conventional factors and three novel investment factors in the mitigation and decoupling of PM₂.₅ emissions in China and its four sub-regions. The results show that China’s PM₂.₅ emissions were weakly decoupled from economic growth during the period 1998-2016, as they were in China’s four sub-regions. At the national level, investment scale played the dominant role, while investment structure had a marginal effect. In contrast, emission intensity was the largest driver of the decoupling effect, followed by investment efficiency and energy intensity. The investment scale effect in the western region far exceeded those in the other three sub-regions. At the provincial level, the investment structure of Inner Mongolia and the investment scales of Xinjiang and Inner Mongolia had the greatest impacts on PM₂.₅ emission growth. Finally, several policy recommendations are offered for China to mitigate its PM₂.₅ emissions.
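
The abstract does not give the exact decoupling formulation used; the sketch below shows one widely used option, the Tapio elasticity of emissions with respect to GDP, with 0 < e < 0.8 commonly read as weak decoupling. All numbers are placeholders.

    # Illustrative sketch of a decoupling index: the widely used Tapio elasticity,
    #   e = (% change in PM2.5 emissions) / (% change in GDP),
    # with 0 < e < 0.8 commonly read as weak decoupling. Numbers are placeholders.
    def tapio_elasticity(emis_start, emis_end, gdp_start, gdp_end):
        d_emis = (emis_end - emis_start) / emis_start
        d_gdp = (gdp_end - gdp_start) / gdp_start
        return d_emis / d_gdp

    e = tapio_elasticity(emis_start=100.0, emis_end=115.0,   # PM2.5, arbitrary units
                         gdp_start=1000.0, gdp_end=1600.0)   # GDP, arbitrary units
    status = "weak decoupling" if 0 < e < 0.8 else "other decoupling state"
    print(f"elasticity = {e:.2f} ({status})")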

Keywords: decoupling, economic growth, investment, PM₂.₅ emissions

Procedia PDF Downloads 119
1849 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling

Authors: Erfan Niazi, Marianne Fenech

Abstract:

Red blood cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental to analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break up in two-phase flow systems in order to find the flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to determine RBC aggregation in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux; in this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process, and the sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.
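
As a minimal stand-in for the population-balance idea, the sketch below integrates the monodisperse Smoluchowski equation dN/dt = -0.5 * k * N^2 with a constant aggregation kernel; this is not the authors' full PBM, and the kernel and initial concentration are hypothetical.

    # Minimal stand-in for the population balance idea: the monodisperse
    # Smoluchowski equation dN/dt = -0.5 * k * N**2 with a constant aggregation
    # kernel k, solved numerically. Not the authors' full PBM; k and N0 are
    # hypothetical values for illustration only.
    import numpy as np
    from scipy.integrate import solve_ivp

    k = 1e-16   # constant aggregation kernel, m^3/s (placeholder)
    N0 = 5e15   # initial RBC number concentration, 1/m^3 (placeholder)

    def dNdt(t, N):
        return -0.5 * k * N**2

    sol = solve_ivp(dNdt, t_span=(0.0, 10.0), y0=[N0], t_eval=np.linspace(0.0, 10.0, 6))
    for t, N in zip(sol.t, sol.y[0]):
        print(f"t = {t:4.1f} s, N = {N:.3e} 1/m^3")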

Keywords: red blood cell, rouleaux, microfluidics, image processing, population balance modeling

Procedia PDF Downloads 355
1848 Brand Identity Creation for Thai Halal Brands

Authors: Pibool Waijittragum

Abstract:

The purpose of this paper is to synthesize research results on the brand identities of Thai Halal brands, which relate to the way of life of Thai Muslims. The results will be translated into packaging and label designs for Thai Halal brands. The expected benefit is an alternative marketing strategy for the brand-building process for Halal products in Thailand. The research framework comprises four elements of marketing strategy necessary for brand identity creation: attributes, benefits, values and personality. The research methodology combined qualitative and quantitative approaches: 19 marketing experts with dynamic roles in Thai consumer products were interviewed, and a field survey of 122 Thai Muslims, selected from 175 Muslim communities in Bangkok, was conducted. Data were analysed according to 5 categories of Thai Halal products: 1) meat; 2) vegetables and fruits; 3) instant foods and garnishing ingredients; 4) beverages, desserts and snacks; 5) hygienic daily products. The results suggest suitable approaches for the brand identities of Thai Halal brands: 1) the benefit approach, based on the characteristics of the product and its benefits, where the brand identity transformed into the packaging design should be clear and display a fresh product; 2) the value approach, based on the value of the products as perceived by consumers, where the packaging design should look simple and use a trustworthy image; 3) the personality approach, reflecting consumers' own thinking, where the packaging design should look sincere, enjoyable, merry and flamboyant and use a humorous image.

Keywords: marketing strategies, brand identity, packaging and label design, Thai Halal products

Procedia PDF Downloads 437
1847 Film Censorship and Female Chastity: Exploring State's Discourses and Patriarchal Values in Reconstructing Chinese Film Stardom of Tang Wei

Authors: Xinchen Zhu

Abstract:

The rapid fame of the renowned female film star Tang Wei has made her a typical subject (or object) entangled with sensitive issues involving the official ideology, sexuality, and patriarchal values of contemporary China. In 2008, Tang Wei’s official ban has triggered the wave of debates concerning state power and censorship, actor’s rights, sexual ethics, and feminism in the public sphere. Her ban implies that Chinese film censorship acts as a key factor in reconstructing Chinese film stardom. Following the ban, as sensational media texts are re-interpreting the official discourses, the texts also functioned as a crucial vehicle in reconstructing Tang's female image. Therefore, the case study of Tang's film stardom allows us to further explore how female stardom has been entangled with the issues involving official ideology, female sexual ethics, and patriarchal values in contemporary China. This paper argues that Chinese female film stars shoulder the responsibility of film acting which would conform to the official male-dominated values. However, with the development of the Internet, the state no longer remains an absolute control over the new venues. The netizens’ discussion about her ban reshaped Tang’s image as a victim and scapegoat under the unfair oppression of the official authority. Additionally, this paper argues that similar to State’s discourse, netizens’ discourse did not reject patriarchal values, and in turn emphasized Tang Wei’s female chastity.

Keywords: film censorship, Chinese female film stardom, party-state’s power, national discourses, Tang Wei

Procedia PDF Downloads 170
1846 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), with reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras’ positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and differences among orientation angles were insignificant, within less than three seconds of arc. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 295
1845 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DensNet169, and DensNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIFTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DensNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
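
A minimal sketch of the volume pre-processing described above (HU clipping to (-1000, 400), resampling to 128 x 128 x 60, normalization and zero-centering) is shown below; the interpolation order and scaling choices are reasonable assumptions rather than the authors' exact settings.

    # Sketch of the CT volume pre-processing steps: clip intensities to the
    # (-1000, 400) HU window, resample to a fixed 128 x 128 x 60 grid, then
    # normalize and zero-center. Interpolation order and normalization constants
    # are assumptions, not necessarily the authors' settings.
    import numpy as np
    from scipy.ndimage import zoom

    HU_MIN, HU_MAX = -1000.0, 400.0
    TARGET_SHAPE = (128, 128, 60)

    def preprocess_ct(volume_hu: np.ndarray) -> np.ndarray:
        clipped = np.clip(volume_hu, HU_MIN, HU_MAX)
        factors = [t / s for t, s in zip(TARGET_SHAPE, clipped.shape)]
        resampled = zoom(clipped, factors, order=1)            # trilinear-style resampling
        normalized = (resampled - HU_MIN) / (HU_MAX - HU_MIN)  # scale to [0, 1]
        return normalized - normalized.mean()                  # zero-center

    volume = np.random.uniform(-1200, 600, size=(512, 512, 90))  # synthetic scan
    x = preprocess_ct(volume)
    print(x.shape, x.mean())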

Keywords: deep learning, COVID-19 detection, NIFTI format, DICOM format

Procedia PDF Downloads 88
1844 A Compact Standing-Wave Thermoacoustic Refrigerator Driven by a Rotary Drive Mechanism

Authors: Kareem Abdelwahed, Ahmed Salama, Ahmed Rabie, Ahmed Hamdy, Waleed Abdelfattah, Ahmed Abd El-Rahman

Abstract:

Conventional vapor-compression refrigeration systems rely on typical refrigerants, such as CFCs, HCFCs and ammonia. Despite their suitable thermodynamic properties and their stability in the atmosphere, their corresponding global warming potential and ozone depletion potential raise concerns about their usage. Thus, the need for new refrigeration systems that are environment-friendly, inexpensive and simple in construction has strongly motivated the development of thermoacoustic energy conversion systems. A thermoacoustic refrigerator (TAR) is a device consisting mainly of a resonator, a stack and two heat exchangers. Typically, the resonator is a long circular tube, made of copper or steel and filled with helium as the working gas, while the stack consists of short, relatively low-thermal-conductivity ceramic parallel plates aligned with the direction of the prevailing resonant wave. The resonator of a standing-wave refrigerator typically has one end closed and is bounded by the acoustic driver at the other end, enabling the propagation of a half-wavelength acoustic excitation. The hot and cold heat exchangers are made of copper to allow efficient heat transfer between the working gas and the external heat source and sink, respectively. TARs are interesting because they have no moving parts, unlike conventional refrigerators, and have almost no environmental impact, as they rely on the conversion of acoustic and heat energies. Their fabrication process is rather simple, and their sizes span a wide variety of length scales. The viscous and thermal interactions between the stack plates, the heat exchangers' plates and the working gas significantly affect the flow field within the plates' channels and the energy flux density at the plates' surfaces, respectively. Here, the design, manufacture and testing of a compact refrigeration system based on thermoacoustic energy-conversion technology are reported. A 1-D linear acoustic model is carefully developed, followed by the hardware build and testing procedures. The system consists of two harmonically oscillating pistons driven by a simple 1-HP rotary drive mechanism operating at a frequency of 42 Hz (thereby replacing the expensive linear motors and loudspeakers typically used), and a thermoacoustic stack within which the conversion of sound into heat takes place. Air at ambient conditions is used as the working gas, while the amplitude of the driver's displacement reaches 19 mm. The 30-cm-long stack is a simple porous ceramic material with 100 square channels per square inch. During operation, both the oscillating gas pressure and the solid stack temperature are recorded for further analysis. Measurements show a maximum temperature difference of about 27 degrees between the stack's hot and cold ends, with a Carnot coefficient of performance of 11 and an estimated cooling capacity of five watts when operating at ambient conditions. A dynamic pressure of 7-kPa amplitude is recorded, yielding a drive ratio of approximately 7%, in good agreement with theoretical prediction. The system behavior is clearly non-linear, and significant non-linear loss mechanisms are evident. This work helps in understanding the operating principles of thermoacoustic refrigerators and presents a keystone towards developing commercial thermoacoustic refrigerator units.
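
Two of the quoted figures can be checked with simple arithmetic, under assumed ambient conditions: the drive ratio as the dynamic pressure amplitude over the mean pressure, and the Carnot COP as Tc / (Th - Tc).

    # Quick check of two figures quoted above, under assumed ambient conditions:
    # drive ratio = dynamic pressure amplitude / mean pressure,
    # Carnot COP  = Tc / (Th - Tc).
    p_amplitude = 7_000.0      # measured dynamic pressure amplitude, Pa
    p_mean = 101_325.0         # assumed ambient mean pressure, Pa
    drive_ratio = p_amplitude / p_mean
    print(f"drive ratio ~ {100 * drive_ratio:.1f} %")          # ~7 %

    t_hot = 300.0              # assumed hot-end temperature, K (ambient)
    delta_t = 27.0             # measured temperature difference, K
    t_cold = t_hot - delta_t
    print(f"Carnot COP ~ {t_cold / (t_hot - t_cold):.1f}")     # ~10 for these assumed temperatures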

Keywords: refrigeration system, rotary drive mechanism, standing-wave, thermoacoustic refrigerator

Procedia PDF Downloads 368
1843 Implementation of a Monostatic Microwave Imaging System using a UWB Vivaldi Antenna

Authors: Babatunde Olatujoye, Binbin Yang

Abstract:

Microwave imaging is a portable, noninvasive, and non-ionizing imaging technique that employs low-power microwave signals to reveal objects in the microwave frequency range. The technique has considerable potential for adoption in commercial and scientific applications such as security scanning, material characterization, and nondestructive testing. This work presents a monostatic microwave imaging setup using an ultra-wideband (UWB), low-cost, miniaturized Vivaldi antenna with a bandwidth of 1–6 GHz. The backscattered signals (S-parameters) of the Vivaldi antenna used to scan the targets were measured in the lab with a vector network analyzer (VNA). An automated two-dimensional (2-D) scanner was employed to move the transceiver and collect the measured scattering data from different positions. The targets consist of four metallic objects, each with a distinct shape. A similar setup was also simulated in Ansys HFSS. A high-resolution Back Propagation Algorithm (BPA) was applied to both the simulated and experimental backscattered signals. The BPA utilizes the phase and amplitude information recorded over a two-dimensional aperture of 50 cm × 50 cm with a discrete step size of 2 cm to reconstruct a focused image of the targets. The adoption of the BPA was demonstrated by coherently resolving and reconstructing the reflection signals from conventional time-of-flight profiles. For both the simulated and experimental data, the BPA accurately reconstructed a high-resolution 2-D image of the targets in terms of shape and location. A further improvement in target resolution was achieved by applying a filtering method in the frequency domain.
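
For readers unfamiliar with back-projection imaging, the following minimal sketch shows the coherent delay-and-sum idea described above for monostatic frequency-domain data. The array names, shapes, image grid, and stand-off distance are illustrative assumptions, not the authors' implementation.

```python
# Minimal back-projection (delay-and-sum) sketch for monostatic frequency-domain data.
# Assumed inputs: s_params[n_pos, n_freq] complex S11 samples, antenna_xy[n_pos, 2]
# scan positions (m), freqs[n_freq] in Hz.
import numpy as np

C0 = 3e8  # speed of light in free space (m/s)

def back_projection(s_params, antenna_xy, freqs, grid_x, grid_y, z_target=0.3):
    """Coherently sum phase-compensated echoes onto an image grid."""
    k = 2 * np.pi * freqs / C0                        # wavenumbers, shape (n_freq,)
    image = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            # one-way distance from every antenna position to this pixel
            d = np.sqrt((antenna_xy[:, 0] - x) ** 2 +
                        (antenna_xy[:, 1] - y) ** 2 + z_target ** 2)   # (n_pos,)
            # monostatic round trip -> phase 2*k*d; compensate and sum coherently
            phase = np.exp(1j * 2 * np.outer(d, k))                    # (n_pos, n_freq)
            image[iy, ix] = np.sum(s_params * phase)
    return np.abs(image)

# Example geometry: a 50 cm x 50 cm aperture sampled every 2 cm, as in the abstract
if __name__ == "__main__":
    xs = np.arange(0, 0.52, 0.02)
    positions = np.array([(x, y) for y in xs for x in xs])
    freqs = np.linspace(1e9, 6e9, 64)
    fake_data = np.zeros((len(positions), len(freqs)), dtype=complex)  # placeholder data
    img = back_projection(fake_data, positions, freqs, xs, xs)
```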

Keywords: back propagation, microwave imaging, monostatic, vivaldi antenna, ultra wideband

Procedia PDF Downloads 19
1842 Experimental Study of Flow Characteristics for a Cylinder with Respect to Attached Flexible Strip Body of Various Reynolds Number

Authors: S. Teksin, S. Yayla

Abstract:

The aim of the present study was to investigate the details of the flow structure downstream of a circular cylinder base-mounted on a flat surface in a rectangular duct with dimensions of 8000 x 1000 x 750 mm in deep water flow, for Reynolds numbers of 2500, 5000 and 7500. A flexible strip was attached behind the cylinder, and the results were compared with those of the bare body. The effect of the boundary layer on the flow structure around the cylinder was also analyzed. The diameter of the cylinder was 60 mm, and the length of the flexible splitter plate, which had a certain modulus of elasticity, was 150 mm (L/D=2.5). Time-averaged velocity vectors, vortex contours, and streamwise and transverse velocity components were investigated via Particle Image Velocimetry (PIV). Velocity vectors and vortex contours were displayed for the sections in which the boundary layer effect was not present. On the other hand, streamwise and transverse velocity components were monitored for both cases, i.e. with and without the boundary layer effect. The experimental results showed that the vortex formation occurred over a larger area for L/D=2.5, and the location of the maximum vortex strength, measured from the base of the cylinder, was shifted. Streamwise and transverse velocity component contours were symmetrical with respect to the center of the cylinder for all cases. All Froude numbers corresponding to these Reynolds numbers were well below 1. The velocity components for the cylinder with the attached strip decreased by approximately twenty-five percent compared with the bare-cylinder case.
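
The claim that the Froude numbers are well below 1 follows directly from the stated Reynolds numbers and the cylinder diameter. The short sketch below assumes the kinematic viscosity of water at about 20 °C and takes the cylinder diameter as the length scale; both are assumptions, since the abstract does not state them.

```python
# Quick check that the Froude numbers are indeed well below 1 (a sketch; the viscosity
# of water and the use of the cylinder diameter as the length scale are assumptions).
import math

D = 0.06          # cylinder diameter (m)
NU = 1.0e-6       # kinematic viscosity of water at ~20 °C (m^2/s), assumed
G = 9.81          # gravitational acceleration (m/s^2)

for re in (2500, 5000, 7500):
    u = re * NU / D                 # free-stream velocity implied by the Reynolds number
    fr = u / math.sqrt(G * D)       # Froude number based on the cylinder diameter
    print(f"Re = {re}: U = {u:.3f} m/s, Fr = {fr:.3f}")
# Output: Fr ~ 0.054, 0.109, 0.163 -- all well below 1, as stated in the abstract.
```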

Keywords: particle image velocimetry, elastic plate, cylinder, flow structure

Procedia PDF Downloads 314
1841 Impact of Marketing towards Behavior Intention

Authors: Sathyamangalam Rangasamy Guru Prasath

Abstract:

Due to the increasing homogeneity in product offerings, the attendant services provided are emerging as a key differentiator in the mind of consumers. Services marketing is a subfield of marketing that covers the marketing of both goods and services. Service marketing differs from product marketing because services are intangible and typically require personal interaction with the customer. Relationships are a key factor in the marketing of services. The role of interpersonal relationships distinguishes service and product marketing in strategic vision and organizational considerations. This paper explores some of the trends in service marketing as they relate to strategic vision, operational and organizational changes, and marketing tactics. The presence of the customer in the service facility means that capacity management becomes an important driver of the firm's profitability. Service marketing is a process from the organization's point of view, but an experience from the customer's perspective. The quality of the experience is a function of the careful design of customer service processes, the adoption of standardized procedures, rigorous management of service quality, high standards of training, and automation. Services marketing helps to ensure that these processes are designed from the customer's perspective. Services marketing includes customer loyalty, managing relationships, complaint handling, improving service quality and the productivity of service operations, and how to become a service leader in one's industry.

Keywords: customer perspective, product marketing, service marketing, rigorous management

Procedia PDF Downloads 370
1840 Study of the Morpho-Sedimentary Evolution of Tidal Mouths on the Southern Fringe of the Gulf of Gabes, Southeast of Tunisia: Hydrodynamic Circulation and Associated Sedimentary Movements

Authors: Chadlia Ounissi, Maher Gzam, Tahani Hallek, Salah Mahmoudi, Mabrouk Montacer

Abstract:

This work presents a morphological study of the coastal domain along the central fringe of the Gulf of Gabes, Southeast Tunisia, belonging to the structural domain of the maritime Jeffara. A diachronic study of the tidal mouths in the study area and the observation of morphological markers revealed hydro-sedimentary processes leading to sedimentary accumulation and the filling of the estuarine system. This filling process is materialized by the genesis of a sandy spit and the lateral migration of the tidal mouth. Moreover, satellite images allowed us to establish that the dominant current responsible for this particular coastal morphology is directed to the north, which contradicts what was previously reported in the literature. The rate of lateral displacement of the channel varies as a function of the hydrodynamic forcing: wave-dominated sites recorded the fastest rate (18 m/year), as exemplified by the mouth of Wadi el Melah, whereas tide-dominated sites, such as Wadi Zerkine, recorded very slow lateral migration (2 m/year). This variation indicates that the intensity of the coastal current is uneven along the coast. The general northward pattern of hydrodynamic circulation along the central fringe of the Gulf of Gabes is disturbed by hydro-sedimentary cells.
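
As an illustration of the diachronic measurement behind these rates, the sketch below computes a lateral migration rate from mouth positions digitized on two dated satellite images. The coordinates and dates are hypothetical placeholders chosen only to reproduce the order of magnitude reported for Wadi el Melah; they are not the study's data.

```python
# Illustrative only: lateral migration rate from two dated mouth positions digitized
# on satellite images. Coordinates (UTM metres) and dates are hypothetical placeholders.
from datetime import date
from math import hypot

pos_old = (612_340.0, 3_741_120.0)   # mouth position on the older image (assumed)
pos_new = (612_034.0, 3_741_210.0)   # mouth position on the newer image (assumed)
date_old, date_new = date(2004, 7, 1), date(2022, 7, 1)

displacement_m = hypot(pos_new[0] - pos_old[0], pos_new[1] - pos_old[1])
years = (date_new - date_old).days / 365.25
rate_m_per_year = displacement_m / years
print(f"Lateral migration: {displacement_m:.0f} m over {years:.1f} yr "
      f"-> {rate_m_per_year:.1f} m/yr")   # ~18 m/yr, the order reported for Wadi el Melah
```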

Keywords: tidal mouth, direction of current, filling, sediment transport, Gulf of Gabes

Procedia PDF Downloads 284