Search results for: RLS identification algorithm
961 Conversational Assistive Technology of Visually Impaired Person for Social Interaction
Authors: Komal Ghafoor, Tauqir Ahmad, Murtaza Hanif, Hira Zaheer
Abstract:
Assistive technology has been developed to support visually impaired people in their social interactions. Conversation assistive technology is designed to enhance communication skills, facilitate social interaction, and improve the quality of life of visually impaired individuals. This technology includes speech recognition, text-to-speech features, and other communication devices that enable users to communicate with others in real time. The technology uses natural language processing and machine learning algorithms to analyze spoken language and provide appropriate responses. It also includes features such as voice commands and audio feedback to provide users with a more immersive experience. These technologies have been shown to increase the confidence and independence of visually impaired individuals in social situations and have the potential to improve their social skills and relationships with others. Overall, conversation-assistive technology is a promising tool for empowering visually impaired people and improving their social interactions. One of the key benefits of conversation-assistive technology is that it allows visually impaired individuals to overcome communication barriers that they may face in social situations. It can help them to communicate more effectively with friends, family, and colleagues, as well as strangers in public spaces. By providing a more seamless and natural way to communicate, this technology can help to reduce feelings of isolation and improve overall quality of life. The main objective of this research is to give blind users the capability to move around in unfamiliar environments through a user-friendly device equipped with a face, object, and activity recognition system. This model evaluates the accuracy of activity recognition. The device captures the view in front of the blind user, detects objects, recognizes activities, and answers the user's queries. It is implemented using the front-facing camera view.
A local dataset was collected that includes different first-person human activities. The results obtained are the identification of the activities the VGG-16 model was trained on, such as hugging, shaking hands, talking, walking, and waving.
Keywords: dataset, visually impaired person, natural language processing, human activity recognition
Procedia PDF Downloads 58
960 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to the end users is continuous monitoring of live video streaming. For this purpose, there are several objective algorithms available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on the end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing is considered a transmission error as well as a hardware error that can result in the loss of video frames on the receiving side of a transmission system. In our subjective tests, we have performed tests on videos that contain a single freezing event and also on videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to compare the available No Reference (NR) objective algorithms. Finally, we have shown the performance of the no-reference algorithms used for objective evaluation of videos and suggested the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.
Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)
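The freeze events described above can be located automatically by thresholding the difference between consecutive frames. The sketch below is illustrative only (it is not the paper's method; the function names, threshold, and minimum run length are assumptions). It flags runs of near-identical frames as freezing events, covering both the single-event and multiple-event cases studied:

```python
def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def detect_freezes(frames, threshold=1.0, min_len=2):
    """Return (start_index, length) of runs of near-identical frames.

    A pair of consecutive frames whose mean absolute difference is below
    `threshold` is treated as frozen; runs shorter than `min_len` frames
    are ignored.
    """
    events, run_start = [], None
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) < threshold:
            if run_start is None:
                run_start = i - 1
        else:
            if run_start is not None and i - run_start >= min_len:
                events.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(frames) - run_start >= min_len:
        events.append((run_start, len(frames) - run_start))
    return events
```

On a sequence with two separate frozen stretches, the function returns two events, which is the multiple-freezing-event scenario of the subjective tests.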
Procedia PDF Downloads 602
959 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima
Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez
Abstract:
Persistency, long-term memory and randomness are intrinsic properties of time series of earthquakes. The Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This method represents a simple and elegant analysis which determines the range of variation of one natural property (the seismic energy released, in this case) in a time interval. Despite the simplicity, there is complexity inherent in the property measured. The cumulative curve of the energy released in time is the well-known fractal geometry of a devil’s staircase. This geometry is used for determining the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law in time, and the exponent is the Hurst value. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for computing the RS-Analysis for daily time series of earthquakes. Completeness of the time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to a complex seismic crisis where different earthquakes take place in clusters in a short period. Therefore, the Hurst exponent has been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, where at least five medium-sized earthquakes were triggered. According to the values obtained from the Hurst exponent for each cluster, a different mechanical origin can be detected, corroborated by the focal mechanisms calculated by the official institutions. Therefore, this type of analysis not only allows an approach to a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis
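The R/S computation outlined above (range of cumulative mean-adjusted sums, normalized by the standard deviation, with the Hurst exponent as the slope of log(R/S) against log(n)) can be sketched as follows. This is a minimal illustration, not the authors' algorithm; the window sizes and the averaging scheme are assumptions:

```python
import math

def rescaled_range(series):
    """R/S statistic: range of cumulative mean-adjusted sums divided by
    the (population) standard deviation of the series."""
    n = len(series)
    mean = sum(series) / n
    devs, cum = [], 0.0
    for x in series:
        cum += x - mean
        devs.append(cum)
    r = max(devs) - min(devs)
    s = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return r / s if s > 0 else 0.0

def hurst_exponent(series, window_sizes):
    """Least-squares slope of log(R/S) versus log(n), averaging R/S over
    non-overlapping windows of each size n."""
    xs, ys = [], []
    for n in window_sizes:
        rs_vals = [rescaled_range(series[i:i + n])
                   for i in range(0, len(series) - n + 1, n)]
        rs = sum(rs_vals) / len(rs_vals)
        xs.append(math.log(n))
        ys.append(math.log(rs))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A Hurst value above 0.5 indicates persistence (long-term memory), below 0.5 anti-persistence, and near 0.5 randomness.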
Procedia PDF Downloads 322
958 Groundwater Flow Dynamics in Shallow Coastal Plain Sands Aquifer, Abesan Area, Eastern Dahomey Basin, Southwestern Nigeria
Authors: Anne Joseph, Yinusa Asiwaju-Bello, Oluwaseun Olabode
Abstract:
Sustainable administration of groundwater resources tapped in the Coastal Plain Sands aquifer in the Abesan area, Eastern Dahomey Basin, Southwestern Nigeria necessitates knowledge of the pattern of groundwater flow in meeting a suitable environmental need for habitation. Thirty hand-dug wells were identified and evaluated to study the groundwater flow dynamics and anionic species distribution in the study area. The topography and water-table-levels method, with the aid of Surfer, was adopted to identify recharge and discharge zones; six recharge and six discharge zones were delineated correspondingly. Dissolved anionic species of HCO3-, Cl-, SO42- and NO3- were determined using titrimetric and spectrophotometric methods. The trend of significant anionic concentrations of the groundwater samples is in the order Cl- > HCO3- > SO42- > NO3-. The prominent anions in the discharge and recharge areas are Cl- and HCO3-, ranging from 0.22 ppm to 3.67 ppm and 2.59 ppm to 0.72 ppm, respectively. Analysis of groundwater head distribution and the groundwater flow vectors in the Abesan area confirmed that the Cl- concentration is higher than the HCO3- concentration in recharge zones. Conversely, there is a higher concentration of HCO3- than Cl- inland towards the continent; therefore, the HCO3- concentration in the discharge zones is higher than the Cl- concentration. The anions were found to be closely related to the recharge and discharge areas, which was confirmed by comparison with factors such as the rainfall regime and anthropogenic activities in the Abesan area. A large percentage of the samples showed that HCO3-, Cl-, SO42- and NO3- fall within the permissible limits of the W.H.O standard. Most of the samples revealed Cl- / (CO3- + HCO3-) ratios higher than 0.5, indicating saltwater intrusion imprints in the groundwater of the study area. The Gibbs plot shows that most of the samples fall in the rock-dominance field, some in the evaporation-dominance field, and a few in the precipitation-dominance field.
Potential salinity and SO42-/Cl- ratios signify that most of the groundwater in Abesan is saline and falls in a water class unsuitable for irrigation. Continued dissolution of these anionic species may pose a significant threat to the inhabitants of the Abesan area in the near future.
Keywords: Abesan, anionic species, discharge, groundwater flow, recharge
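The saltwater-intrusion screening applied above, the Cl- / (CO3- + HCO3-) ratio with 0.5 as the cut-off, reduces to a one-line calculation. The sketch below is illustrative; the example concentrations are hypothetical, not the study's measurements:

```python
def chloride_carbonate_ratio(cl, co3, hco3):
    """Cl- / (CO3-- + HCO3-) ratio; per the abstract, values above 0.5
    are read as saltwater intrusion imprints."""
    return cl / (co3 + hco3)

# Hypothetical well (concentrations in meq/L; not the study's data):
ratio = chloride_carbonate_ratio(cl=3.67, co3=0.10, hco3=2.59)
intruded = ratio > 0.5
```

For real samples, the concentrations would first be converted from ppm to milliequivalents per litre so the ratio is chemically meaningful.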
Procedia PDF Downloads 124
957 Evaluation of the Impact of Functional Communication Training on Behaviors of Concern for Students at a Non-Maintained Special School
Authors: Kate Duggan
Abstract:
Introduction: Functional Communication Training (FCT) is an approach which aims to reduce behaviours of concern by teaching more effective ways to communicate. It requires identification of the function of the behaviour of concern, through gathering information from key stakeholders and completing observations of the individual’s behaviour, including antecedents to, and consequences of, the behaviour. Appropriate communicative alternatives are then identified and taught to the individual using systematic instruction techniques. Behaviours of concern demonstrated by individuals with autism spectrum conditions (ASC) frequently have a communication function. When contributing to positive behaviour support plans, speech and language therapists and other professionals working with individuals with ASC need to identify alternative communicative behaviours which are equally reinforcing as the existing behaviours of concern. Successful implementation of FCT is dependent on an effective ‘response match’. The new way of communicating must be equally as effective as the behaviour previously used and require the same amount of effort or less from the individual. It must also be understood by the communication partners the individual encounters and be appropriate to their communicative contexts. Method: Four case studies within a non-maintained special school environment were described and analysed. A response match framework was used to identify the effectiveness of functional communication training delivered by the student’s speech and language therapist, teacher and learning support assistants. The success of systematic instruction techniques used to develop new communicative behaviours was evaluated using the CODES framework. Findings: Functional communication training can be used as part of a positive behaviour support approach for students within this setting.
All case studies reviewed demonstrated ‘response success’, in that the desired response was gained from the new communicative behaviour. Barriers to the successful embedding of new communicative behaviours were encountered. In some instances, the new communicative behaviour could not be consistently understood across all communication partners, which reduced ‘response recognisability’. There was also evidence of increased physical or cognitive difficulty in employing the new communicative behaviour, which reduced the ‘response effectivity’. Successful use of ‘thinning schedules of reinforcement’ taught students to tolerate a delay to reinforcement once the new communicative behaviour was learned.
Keywords: augmentative and alternative communication, autism spectrum conditions, behaviours of concern, functional communication training
Procedia PDF Downloads 117
956 Context-Aware Point-Of-Interests Recommender Systems Using Integrated Sentiment and Network Analysis
Authors: Ho Yeon Park, Kyoung-Jae Kim
Abstract:
Recently, users’ interest in location-based social network services has increased with advances in the social web and location-based technologies. It may be easy to recommend preferred items if we can use users’ preferences, context, and social network information simultaneously. In this study, we propose context-aware POI (point-of-interests) recommender systems using location-based network analysis and sentiment analysis, which consider context, social network information and implicit user preference scores. We propose a context-aware POI recommendation system consisting of three sub-modules and an integrated recommendation system built from them. First, we develop a recommendation module based on network analysis. This module combines social network analysis and cluster-indexing collaborative filtering. Next, this study develops a recommendation module using social singular value decomposition (SVD) and implicit SVD. In this research, we develop a recommendation module that estimates preference scores from the frequency of a user’s POI visits, using social and implicit SVD, which can reflect implicit feedback in collaborative filtering. We also develop a recommendation module that combines these estimates in the recommendation process. Third, this study proposes a recommendation module using opinion mining and sentiment analysis on data such as reviews of POIs extracted from location-based social networks. Finally, we develop an integration algorithm that combines the results of the three recommendation modules proposed in this research. Experimental results show the usefulness of the proposed model in terms of recommendation performance.
Keywords: sentiment analysis, network analysis, recommender systems, point-of-interests, business analytics
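The idea of turning POI visit frequencies into implicit preference scores via a factor model can be illustrated with a minimal Funk-SVD-style factorization. This is a stand-in sketch for the social/implicit SVD modules, not the authors' system; the matrix, hyperparameters, and training scheme below are all assumptions:

```python
import random

def factorize(counts, k=2, epochs=1000, lr=0.05, reg=0.02, seed=7):
    """Funk-SVD-style factorization of a user x POI visit-count matrix.

    Only observed (nonzero) counts are used, treating visits as implicit
    positive feedback; returns user (P) and item (Q) latent factors.
    """
    rng = random.Random(seed)
    n_u, n_i = len(counts), len(counts[0])
    P = [[rng.uniform(0, 0.1) for _ in range(k)] for _ in range(n_u)]
    Q = [[rng.uniform(0, 0.1) for _ in range(k)] for _ in range(n_i)]
    obs = [(u, i, counts[u][i]) for u in range(n_u)
           for i in range(n_i) if counts[u][i] > 0]
    for _ in range(epochs):
        for u, i, r in obs:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # regularized SGD update on both factor vectors
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def score(P, Q, u, i):
    """Predicted implicit preference of user u for POI i."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))
```

Once trained, `score` can rank a user's unvisited POIs; the real system would additionally blend in the network-analysis and sentiment modules.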
Procedia PDF Downloads 250
955 Finding the Right Regulatory Path for Islamic Banking
Authors: Meysam Saidi
Abstract:
While the specific externalities and required regulatory measures in relation to Islamic banking are fairly uncertain, the business is growing across the world. Unofficial data indicate that the Islamic finance market is growing at an annual rate of 15% and has reached a size of $1.3 trillion. This trend is associated with the inherent systematic connection of Islamic financial institutions to other entities and different sectors of economies. Islamic banking has been the subject of market development policies in major economies, most notably the UK. This trend highlights the need for identification of the distinct risk features of Islamic banking and the crafting of customized regulatory measures. So far, there has not been a significant systemic crisis in this market, which can be attributed to its distinct nature. However, the significant growth and spread of its products worldwide necessitate an in-depth study of its nature for customized, congruent regulatory measures. In the post-financial-crisis era, some market analyses and reports suggested that the Islamic banks fairly weathered the crisis. As far as heavily blamed conventional financial products such as subprime mortgage-backed securities and speculative credit default swaps were concerned, the immunity claim can be considered true, as Islamic financial institutions were not directly exposed to such products. Nevertheless, similar to the experience of the conventional banking industry, it can be only a matter of time for Islamic banks to face failures that can be specific to the nature of their business. Using the experience of conventional banking regulations and identifying those peculiarities of Islamic banking that need a customized regulatory approach can help prevent major failures. Frank Knight has stated that “We perceive the world before we react to it, and we react not to what we perceive, but always to what we infer”.
The debate over congruent Islamic banking regulations might not be an exception to Frank Knight’s statement, but I will try to base my discussion on concrete evidence. This paper first analyzes both theoretical and actual features of Islamic banking in order to ascertain its peculiarities in terms of market stability and other externalities. Next, the paper discusses distinct features of Islamic financial transactions and banking which might require customized regulatory measures. Finally, the paper explores how a more transparent path for Islamic banking regulations can be drawn.
Keywords: Islamic banking, regulation, risks, capital requirements, customer protection, financial stability
Procedia PDF Downloads 409
954 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models
Authors: Azadeh Jafari, Robert G. Owens
Abstract:
In this study, a geometrical multiscale approach, which means coupling together the 2-D Navier-Stokes equations, constitutive equations, and 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study, we introduce a new velocity correction scheme to decouple the velocity computation from the pressure one. To evaluate the capability of our new scheme, a comparison between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure and those obtained using coupling with the lumped parameter model has been performed. Comprehensive studies have been done on the sensitivity of the numerical scheme to the initial conditions, elasticity and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg number. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems, whose application goes significantly beyond the one addressed in this work.
Keywords: geometrical multiscale models, haemorheology model, coupled 2-D navier-stokes 0-D lumped parameter modeling, computational fluid dynamics
Procedia PDF Downloads 361
953 Assessment of Land Use Land Cover Change-Induced Climatic Effects
Authors: Mahesh K. Jat, Ankan Jana, Mahender Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to modelling Earth surface-atmosphere interaction through techniques like surface energy balance (SEB), have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contributions of LULC change in changing these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, which is aligned with respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year. The maximum temperature decreases at 0.007 °C/year, while the minimum temperature increases at 0.02 °C/year. The average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux positively correlate with precipitation and temperature, respectively. Variation in surface heat fluxes influences the climate parameters and helps explain the observed climate change. Thus, SEB modelling is helpful for understanding LULC change and its impact on climate.
Keywords: LULC, sensible heat flux, latent heat flux, SEBAL, landsat, precipitation, temperature
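In SEBAL, the latent heat flux is obtained as the residual of the surface energy balance, LE = Rn - G - H. The sketch below illustrates just that closure step; the flux values are made up for illustration, not taken from the study:

```python
def latent_heat_flux(net_radiation, soil_heat_flux, sensible_heat_flux):
    """SEBAL closes the surface energy balance by computing latent heat
    flux (LE) as the residual: LE = Rn - G - H (all fluxes in W/m^2)."""
    return net_radiation - soil_heat_flux - sensible_heat_flux

# Illustrative per-pixel values only (not from the study):
le = latent_heat_flux(net_radiation=600.0, soil_heat_flux=90.0,
                      sensible_heat_flux=210.0)
# Evaporative fraction: share of available energy (Rn - G) that goes
# into evapotranspiration, a common SEBAL-derived quantity.
ef = le / (600.0 - 90.0)
```

Applied per pixel to Landsat-derived Rn, G and H maps, this residual is what gets correlated with precipitation in the analysis above.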
Procedia PDF Downloads 116
952 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. First, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently would not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
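The outlier-removal step uses DBSCAN; as a rough illustration of the density criterion behind it, the sketch below keeps only points that pass DBSCAN's core-point test (at least `min_samples` neighbours within `eps`). It is a simplification, since full DBSCAN also retains border points reachable from a core point, and the `eps`/`min_samples` values here are arbitrary:

```python
def density_noise_filter(points, eps=1.0, min_samples=3):
    """Keep points whose eps-neighbourhood (including the point itself)
    holds at least min_samples points; sparse points are dropped as
    noise. This is DBSCAN's core-point test only, without the
    border-point reachability rule of the full algorithm."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    keep = []
    for p in points:
        n_neighbors = sum(1 for q in points if dist2(p, q) <= eps ** 2)
        if n_neighbors >= min_samples:
            keep.append(p)
    return keep
```

In practice, one would run scikit-learn's `DBSCAN` on the experimental feature vectors and discard the points labelled as noise before training the RF and DNN models.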
Procedia PDF Downloads 86
951 The Evolution of Traditional Rhythms in Redefining the West African Country of Guinea
Authors: Janice Haworth, Karamoko Camara, Marie-Therèse Dramou, Kokoly Haba, Daniel Léno, Augustin Mara, Adama Noël Oulari, Silafa Tolno, Noël Zoumanigui
Abstract:
The traditional rhythms of the West African country of Guinea have played a centuries-long role in defining the different people groups that make up the country. Throughout their history, before and since colonization by the French, the different ethnicities have used their traditional music as a distinct part of their historical identities. That is starting to change. Guinea is an impoverished nation created in the early twentieth century with little regard for the history and cultures of the people who were included. The traditional rhythms of the different people groups and their heritages have remained. Fifteen individual traditional Guinean rhythms were chosen to represent popular rhythms from the four geographical regions of Guinea. Each rhythm was traced back to its native village and video-recorded on site by as many different local performing groups as could be located. The cyclical rhythm patterns were transcribed via a circular, spatial design and then copied into a box notation system where sounds happening at the same time could be studied. These rhythms were analyzed for their consistency over performance in a Fundamental Rhythm Pattern analysis so that rhythms could be compared for how they change across performances. The analysis showed that the traditional rhythm performances of the Middle and Forest Guinea regions were the most cohesive and showed the least evidence of change between performances. The role of music in each of these regions is both limited and focused. The Coastal and High Guinea regions have much in common historically through their ethnic history and modern-day trade connections, but the rhythm performances seem to be less consistent and demonstrate more changes in how they are performed today. In each of these regions the role and usage of music is much freer and more widespread.
In spite of advances being made as a country, different ethnic groups still frequently respond to and participate in (dance and sing to) only the music of their native ethnicity. There is some evidence that this self-imposed musical barrier is beginning to change and evolve, partially through the development of better roads, more access to electricity and technology, the nationwide Ebola health crisis, and a growing self-identification as a unified nation.
Keywords: cultural identity, Guinea, traditional rhythms, West Africa
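The box-notation comparison described above, checking which sounds fall in the same time cells across performances, can be quantified with a simple grid-agreement score. This is a hypothetical illustration of the idea, not the study's Fundamental Rhythm Pattern analysis:

```python
def box_similarity(a, b):
    """Fraction of matching cells between two equal-length box-notation
    grids, where 1 marks an onset and 0 a rest. 1.0 means two
    performances place every sound in the same time cell."""
    if len(a) != len(b):
        raise ValueError("grids must cover the same cycle length")
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

# Two hypothetical performances of a 16-cell cycle; the second
# drops two onsets (cells 6 and 12):
village = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
city    = [1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0]
```

Averaging such scores over all recorded performances of a rhythm gives a rough consistency-over-performance measure per region.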
Procedia PDF Downloads 391
950 Rhythm-Reading Success Using Conversational Solfege
Authors: Kelly Jo Hollingsworth
Abstract:
Conversational Solfege, a research-based, 12-step music literacy instructional method using the sound-before-sight approach, was used to teach rhythm-reading to 128 second-grade students at a public school in the southeastern United States. For each step, multiple scripted techniques are supplied to teach each skill. Unit one, which covers quarter note and barred eighth note rhythms, was the focus of this study. During regular weekly music instruction, students completed method steps one through five, which include aural discrimination, decoding familiar and unfamiliar rhythm patterns, and improvising rhythmic phrases using quarter notes and barred eighth notes. Intact classes were randomly assigned to two treatment groups for teaching steps six through eight, which were the visual presentation and identification of quarter notes and barred eighth notes, visually presenting and decoding familiar patterns, and visually presenting and decoding unfamiliar patterns using said notation. For three weeks, students practiced steps six through eight during regular weekly music class. One group spent five minutes of class time on technique work for steps six through eight, while the other group spent ten minutes of class time practicing the same techniques. A pretest and posttest were administered, and ANOVA results reveal that both the five-minute group (p < .001) and the ten-minute group (p < .001) reached statistical significance, suggesting Conversational Solfege is an efficient, effective approach for teaching rhythm-reading to second-grade students. After two weeks of no instruction, students were retested to measure retention. Using a repeated-measures ANOVA, both groups reached statistical significance (p < .001) on the second posttest, suggesting both the five-minute and ten-minute groups retained rhythm-reading skill after two weeks of no instruction.
Statistical significance was not reached between groups (p = .252), suggesting five minutes is as effective as ten minutes of rhythm-reading practice using Conversational Solfege techniques. Future research includes replicating the study with other grades and units in the text.
Keywords: conversational solfege, length of instructional time, rhythm-reading, rhythm instruction
Procedia PDF Downloads 157
949 Linguistic World Order in the 21st Century: Need of Alternative Linguistics
Authors: Shailendra Kumar Singh
Abstract:
In the 21st century, we are living through extraordinary times, as we are linguistically blessed to live through an era in which each sociolinguistic example of living appears refreshingly new, without precedent in the past. The term ‘New Linguistic World Order’ is no longer just an intangible fascination but an indication of an emerging reality: we are living through a time in which the term ‘linguistic purism’ no longer invokes a sense of self-categorization and self-identification. The contemporary world of today is linguistically rewarding. This is a time in which the very existence of the global, the powerful and the local needs to be revisited in the context of power shift, demographic shift, social psychological shift and technological shift. Hence, the old linguistic world view has to be challenged in the midst of the 21st century. The first years of the 21st century have thus far been marked by the rise of the global economy, technological revolution and demographic shift; now we are witnessing a linguistic shift which is leading towards a new linguistic world order. On the other hand, with the rising powers of China and India in Asia in tandem, the notion of the alternative West is set to become a lot more interesting linguistically. It comes at a point when the world is moving towards inclusive globalization due to the vanishing power corridor of the West and the ascending geopolitical impact of an emerging superpower and a superpower-in-waiting. Now it is a reality that the Western world no longer continues to rise; in fact, it will face more pressure to act in situations where the alternative West is looking for balanced globalization. It is more than likely that the demographically strong languages of the alternative West will be in an advantageous position. The paper challenges our preconceptions about the sociolinguistic nature of the world in the 21st century.
It investigates what the linguistic world is likely to be in the future, in contrast to what it was before the 21st century. In particular, the paper tries to answer the following questions: (a) What will be the common linguistic thread across the world? (b) How can unprecedented transformations be mapped linguistically? (c) Do we need alternative linguistics to define inclusive globalization, as the linguistic reality of the contemporary world has already been reshaped by an increasingly integrated world economy, linguistic revolution and the alternative West? (d) In which ways can these issues be addressed holistically? (e) Why is the linguistic world order changing dramatically? (f) Is it true that the linguistic world around us is changing faster than we can cope? (g) Is it true that what is coming next is linguistically greater than ever? (h) Do we need to prepare ourselves with new theoretical strategies to address the emerging sociolinguistic reality?
Keywords: alternative linguistics, new linguistic world order, power shift, demographic shift, social psychological shift, technological shift
Procedia PDF Downloads 337
948 Identification of Accumulated Hydrocarbon Based on Heat Propagation Analysis in Order to Develop Mature Field: Case Study in South Sumatra Basin, Indonesia
Authors: Kukuh Suprayogi, Muhamad Natsir, Olif Kurniawan, Hot Parulian, Bayu Fitriana, Fery Mustofa
Abstract:
This new approach utilizes heat propagation analysis, carried out by studying and evaluating the effect of the presence of hydrocarbons on the flow of heat from the subsurface to the surface. Heat propagation is determined by the thermal conductivity of rocks. The thermal conductivity of a rock is a quantity that describes its ability to conduct heat. This quantity depends on the rock's lithology, porosity, and pore-filling fluid. The higher the thermal conductivity of a rock, the more easily heat flows through it. By the same token, heat flows more easily through a rock filled with water than one filled with hydrocarbons, given that hydrocarbons act more as thermal insulators. The main objective of this research is to model the heat propagation calculations, in degrees Celsius, from the subsurface to the surface, which are then compared with the surface temperature measured directly at the location. In calculating the propagation of heat, we first need to determine the thermal conductivity of the rocks, where the rocks at the calculation point are not homogeneous but consist of strata. Therefore, we need to determine the mineral constituents and porosity values of each stratum. As for the pore fluid parameter, we assume that all the pores are filled with water. Once we obtain a thermal conductivity value for each rock unit, we model the heat propagation profile from the bottom to the surface. The initial temperature value comes from bottom hole temperature (BHT) data obtained from drilling. The calculated temperature at each depth is displayed as temperature-versus-depth profiles that describe the propagation of heat from the bottom of the well to the surface, assuming the pore fluid is water.
In the technical implementation, we can identify the magnitude of the effect of hydrocarbons in reducing the heat that reaches the surface by comparing the calculated heat propagation at a given point with the surface temperature measured at that point, assuming that the measured surface temperature originates from the asthenosphere. This work shows that hydrocarbon accumulations can be identified by analyzing the heat propagation profile, which could serve as a method for detecting the presence of hydrocarbons.
Keywords: thermal conductivity, rock, pore fluid, heat propagation
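The layered calculation described above can be sketched as one-dimensional, steady-state conduction: the temperature drop across each stratum is the heat flux times the layer thickness divided by its thermal conductivity (Fourier's law). This is a minimal sketch, not the authors' code; the heat flux and layer values below are illustrative assumptions.

```python
# Hedged sketch: steady-state 1-D conduction through layered strata.
# Temperature decreases upward from the bottom hole temperature (BHT)
# by q * dz / k per layer (Fourier's law). All values are illustrative.

def temperature_profile(bht_c, layers, heat_flux=0.065):
    """layers: list of (thickness_m, conductivity_W_per_mK), bottom to top.
    Returns list of (distance_from_bottom_m, temperature_C)."""
    profile = [(0.0, bht_c)]
    z, t = 0.0, bht_c
    for thickness, k in layers:
        t -= heat_flux * thickness / k   # higher k -> smaller temperature drop
        z += thickness
        profile.append((z, t))
    return profile

# Illustrative strata (thickness m, thermal conductivity W/m.K), bottom to top:
strata = [(500.0, 2.5), (300.0, 1.8), (200.0, 2.0)]
profile = temperature_profile(100.0, strata)   # BHT of 100 degrees C assumed
surface_temp = profile[-1][1]
```

A hydrocarbon-filled layer would be represented by a lower conductivity value, producing a larger temperature drop and hence a cooler calculated surface temperature, which is the anomaly the paper exploits.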
Procedia PDF Downloads 108
947 Analytical Study and Conservation Processes of Scribe Box from Old Kingdom
Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr
Abstract:
The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written in a black pigment. The box was made from several panels assembled with wooden dowels and secured with plant ropes. The entire box is covered with a red pigment. This study aims to use analytical techniques to identify and gain a deep understanding of the box components. Moreover, the authors were particularly interested in using infrared reflectance transformation imaging (RTI-IR) to reveal the hidden inscriptions on the lid. Identification of the wood species is included in this study. Visual observation and assessment were carried out to understand the condition of the box, and 3D and 2D programs were used to illustrate the wood joinery techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF) and Fourier transform infrared spectroscopy (FTIR) were used to identify the wood species, insect body remains, the red pigment, the plant fibers and previous conservation adhesives; the RTI-IR technique was also very effective in revealing the hidden inscriptions. The analysis results proved that the wooden panels and dowels were Acacia nilotica and the wooden rail was Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was identified as cellulose nitrate. The historical study of the inscriptions proved that they are hieratic writings of a funerary text.
After its transportation from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy in order to restore the object, including cleaning, consolidation of friable pigments and writings, removal of the previous adhesive, and reassembly. The conservation processes applied were extremely effective, and the box is now ready for display or storage in the Grand Egyptian Museum.
Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation
Procedia PDF Downloads 271
946 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis
Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei
Abstract:
Single cell RNA sequencing (scRNA-seq) is one of the effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured with Euclidean distance or its derivatives. However, the process of scRNA-seq follows a multivariate Bernoulli event model, so we hypothesized that it would be more efficient to value the divergence between cells with relative entropy than with Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance and relative entropy using scRNA-seq data from the early, medial and late stages of limb development generated in our lab. Relative entropy outperformed the other methods according to a cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by replacing its Euclidean-distance definition of the divergence between cells with the Kullback–Leibler divergence. Results showed that KL-SNE was more effective than t-SNE at dissecting cell heterogeneity, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together by KL-SNE but not by t-SNE. Surprisingly, cells of the early stage were surrounded by cells of the medial stage in the KL-SNE embedding, while medial cells neighbored late-stage cells in the t-SNE embedding. These results parallel the heatmap, which showed that cells in the medial stage were more heterogeneous than cells in the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which was also verified with scRNA-seq data from another study on human embryo development. Therefore, it is also an effective way to convert a non-Gaussian distribution to a Gaussian distribution and facilitate subsequent statistical processes.
Thus, relative entropy is potentially a better way to determine the divergence of cells in scRNA-seq data analysis.
Keywords: single cell RNA sequence, similarity measurement, relative entropy, KL-SNE, t-SNE
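The contrast drawn above between the two divergence measures can be sketched directly: expression vectors are normalised to probability distributions before the Kullback–Leibler divergence is computed (with a small pseudocount to avoid log(0)). This is a minimal sketch with hypothetical counts, not the study's data or pipeline.

```python
# Hedged sketch: Euclidean distance vs relative entropy (KL divergence)
# as a cell-to-cell divergence. Counts below are illustrative only.
import math

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) after normalising both count vectors to distributions."""
    p = [x + eps for x in p]; q = [x + eps for x in q]   # pseudocount
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]; q = [x / sq for x in q]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

cell_a = [10, 0, 5, 1]   # hypothetical transcript counts for two cells
cell_b = [8, 1, 6, 0]
d_kl = kl_divergence(cell_a, cell_b)
d_eu = euclidean(cell_a, cell_b)
```

Note that, unlike Euclidean distance, KL divergence is asymmetric (D(p||q) generally differs from D(q||p)), which an embedding method such as KL-SNE has to take into account.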
Procedia PDF Downloads 340
945 Psychosocial Strategies Used by Individuals with Schizophrenia: An Analysis of Internet Forum Posts
Authors: Charisse H. Tay
Abstract:
Background: Schizophrenia is a severe chronic mental disorder that can result in hallucinations, delusions, reduced social engagement, and lack of motivation. While antipsychotic medications often provide the basis for treatment, psychosocial strategies complement the benefit of medications and can result in meaningful improvements in symptoms and functioning. The aim of the study was to investigate psychosocial strategies used by internet self-help forum participants to effectively manage symptoms caused by schizophrenia. Internet self-help forums are a resource for medical and psychological problems and are commonly used to share information about experiences with symptom management. Method: Three international self-help internet forums on schizophrenia were identified using a search engine. 1,181 threads regarding non-pharmacological, psychosocial self-management of schizophrenia symptoms underwent screening, resulting in the final identification and coding of 91 threads and 191 posts from 134 unique forum users that contained details on psychosocial strategies personally endorsed by users that allowed them to effectively manage symptoms of schizophrenia, including positive symptoms (e.g., auditory/visual/tactile hallucinations, delusions, paranoia), negative symptoms (e.g., avolition, apathy, anhedonia), symptoms of distress, and cognitive symptoms (e.g., memory loss). Results: Effective symptom management strategies personally endorsed by online forum users were psychological skills (e.g., re-focusing, mindfulness/meditation, reality checking; n = 94), engaging in activities (e.g., exercise, working/volunteering, hobbies; n = 84), social/familial support (n = 48), psychotherapy (n = 33), diet (n = 18), and religion/spirituality (n = 14). 44.4% of users reported using more than one strategy to manage their symptoms.
The most common symptoms targeted and effectively managed, as specified by users, were positive symptoms (n = 113), negative symptoms (n = 17), distress (n = 8), and memory loss (n = 6). 10.5% of users reported more than one symptom effectively targeted. 70.2% of users with positive symptoms reported that psychological skills were effective for symptom relief. 88% of users with negative symptoms and 75% with distress symptoms reported that engaging in activities was effective. Discussion: Individuals with schizophrenia rely on a variety of different psychosocial methods to manage their symptoms. Different symptomatology appears to be more effectively targeted by different types of psychosocial strategies. This may help inform treatment strategies tailored to individuals with schizophrenia.
Keywords: psychosocial treatment, qualitative methods, schizophrenia, symptom management
Procedia PDF Downloads 124
944 Enhanced CNN for Rice Leaf Disease Classification in Mobile Applications
Authors: Kayne Uriel K. Rodrigo, Jerriane Hillary Heart S. Marcial, Samuel C. Brillo
Abstract:
Rice leaf diseases significantly impact yield production in rice-dependent countries, affecting their agricultural sectors. As part of precision agriculture, early and accurate detection of these diseases is crucial for effective mitigation practices and minimizing crop losses. Hence, this study proposes an enhancement to the Convolutional Neural Network (CNN), a widely-used method for Rice Leaf Disease Image Classification, by incorporating MobileViTV2—a recently advanced architecture that combines CNN and Vision Transformer models while maintaining fewer parameters, making it suitable for broader deployment on edge devices. Our methodology utilizes a publicly available rice disease image dataset from Kaggle, which was validated by a university structural biologist following the guidelines provided by the Philippine Rice Institute (PhilRice). Modifications to the dataset include renaming certain disease categories and augmenting the rice leaf image data through rotation, scaling, and flipping. The enhanced dataset was then used to train the MobileViTV2 model using the Timm library. The results of our approach are as follows: the model achieved notable performance, with 98% accuracy in both training and validation, 6% training and validation loss, and a Receiver Operating Characteristic (ROC) curve ranging from 95% to 100% for each label. Additionally, the F1 score was 97%. These metrics demonstrate a significant improvement compared to a conventional CNN-based approach, which, in a previous 2022 study, achieved only 78% accuracy after using 5 convolutional layers and 2 dense layers. Thus, it can be concluded that MobileViTV2, with its fewer parameters, outperforms traditional CNN models, particularly when applied to Rice Leaf Disease Image Identification. 
For future work, we recommend extending this model to include datasets validated by international rice experts and broadening the scope to accommodate biotic factors such as rice pest classification, as well as abiotic stressors such as climate, soil quality, and geographic information, which could improve the accuracy of disease prediction.
Keywords: convolutional neural network, MobileViTV2, rice leaf disease, precision agriculture, image classification, vision transformer
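The metrics reported above (accuracy, per-label ROC, 97% F1) are derived from the model's confusion matrix. As a minimal sketch of that evaluation step, per-class and macro F1 can be computed from a confusion matrix in a few lines; the 3x3 matrix below is illustrative (rows are true classes, columns predicted), not the study's actual results.

```python
# Hedged sketch: per-class and macro F1 from a confusion matrix.
# The matrix is made-up example data, not the study's evaluation output.

def f1_scores(cm):
    """cm[r][c]: count of samples with true class r predicted as class c."""
    n = len(cm)
    scores = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # column minus diagonal
        fn = sum(cm[c]) - tp                        # row minus diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return scores, sum(scores) / n                  # per-class and macro F1

cm = [[48, 1, 1],    # hypothetical disease class 1
      [2, 47, 1],    # hypothetical disease class 2
      [0, 2, 48]]    # hypothetical disease class 3
per_class, macro_f1 = f1_scores(cm)
```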
Procedia PDF Downloads 25
943 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia
Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah
Abstract:
Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of natural origin, while secondary porosity results from chemical reactivity through diagenetic processes. To understand hydrocarbon exploration as an integrated whole, it is necessary to understand the carbonate pore system. However, current porosity classification schemes are limited in their ability to adequately predict the petrophysical properties of different reservoirs of various origins and depositional environments. Rock classification provides a descriptive method for explaining lithofacies but makes no significant contribution to the application of porosity-permeability (poro-perm) correlations. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin), mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved through petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant pore type present, whereas microporosity accounts for 40 to 50% of the total porosity. These Miocene carbonates were shown to contain a significant amount of microporosity, which significantly complicates the estimation and production of hydrocarbons; neglecting its impact can increase the uncertainty of hydrocarbon reserve estimates.
Due to the diversity of geological parameters, applying existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, the classification can be improved by including pore types and pore structures, which can be divided into macro- and microporosity. Such studies of microporosity identification and classification now represent a major concern in limestone reservoirs around the world.
Keywords: overview of porosity classification, reservoir characterization, microporosity, carbonate reservoir
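The point-counting arithmetic described above reduces to a simple subtraction: macroporosity is the fraction of counted points falling in visible pores, and microporosity is the total (e.g. core-plug) porosity minus that macroporosity. A minimal sketch, with illustrative numbers rather than the Central Luconia measurements:

```python
# Hedged sketch of the macro/micro porosity partition via point counting.
# All numeric inputs are illustrative, not the study's data.

def porosity_partition(total_porosity, pore_points, total_points):
    """Returns (macroporosity, microporosity, micro share of total)."""
    macro = pore_points / total_points        # point-count macroporosity
    micro = total_porosity - macro            # remainder is microporosity
    return macro, micro, micro / total_porosity

# e.g. 20% total porosity; 60 of 500 counted points fall in visible pores:
macro, micro, micro_share = porosity_partition(0.20, 60, 500)
```

With these example values the microporosity share comes out at 40% of total porosity, at the lower end of the 40-50% range reported above.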
Procedia PDF Downloads 154
942 ESDN Expression in the Tumor Microenvironment Coordinates Melanoma Progression
Authors: Roberto Coppo, Francesca Orso, Daniela Dettori, Elena Quaglino, Lei Nie, Mehran M. Sadeghi, Daniela Taverna
Abstract:
Malignant melanoma is currently the fifth most common cancer in the white population, and it is fatal in its metastatic stage. Several research studies in recent years have provided evidence that cancer initiation and progression are driven by genetic alterations of the tumor and by paracrine interactions between the tumor and its microenvironment. Scattered data show that the Endothelial and Smooth muscle cell-Derived Neuropilin-like molecule (ESDN) controls the proliferation and movement of stroma and tumor cells. To investigate the role of ESDN in the tumor microenvironment during melanoma progression, murine melanoma cells (B16 or B16-F10) were injected into ESDN knockout mice in order to evaluate how the absence of ESDN in stromal cells influences melanoma progression. While no effect was found on primary tumor growth, increased cell extravasation and lung metastasis formation were observed in ESDN knockout mice compared to wild type controls. In order to understand how cancer cells cross the endothelial barrier during metastatic dissemination in an ESDN-null microenvironment, the structure and permeability of lung blood vessels were analyzed. Interestingly, ESDN knockout mice showed structurally altered and more permeable vessels compared to wild type animals. Since cell surface molecules mediate the process of tumor cell extravasation, the expression of a panel of extravasation-related ligands and receptors was analyzed. Importantly, modulation of N-cadherin, E-selectin, ICAM-1 and VAP-1 was observed in ESDN knockout endothelial cells, suggesting the presence of a favorable tumor microenvironment which facilitates melanoma cell extravasation and metastasis formation in the absence of ESDN. Furthermore, a potential contribution of immune cells to tumor dissemination was investigated, and increased recruitment of macrophages was found in the lungs of ESDN knockout mice carrying subcutaneous B16-F10 tumors.
In conclusion, our data suggest a functional role for ESDN in the tumor microenvironment during melanoma progression; identifying the mechanisms that regulate tumor cell extravasation could lead to the development of new therapies to reduce metastasis formation.
Keywords: melanoma, tumor microenvironment, extravasation, cell surface molecules
Procedia PDF Downloads 334
941 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming
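The core pattern named above, dividing a task into subtasks, processing them in parallel, and reporting completion per task, can be sketched briefly. The study itself used Java; for consistency with the other sketches in this listing, the same pattern is shown here with Python's concurrent.futures, purely as an illustration of the technique rather than the authors' programmes.

```python
# Hedged sketch (the study used Java): divide work into subtasks, run them on
# a thread pool, and collect results as each task completes.
from concurrent.futures import ThreadPoolExecutor, as_completed

def square_chunk(chunk):
    """The per-subtask work: square every number in the chunk."""
    return [x * x for x in chunk]

data = list(range(100))
# Division of the task into four subtasks of 25 elements each:
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(square_chunk, c): i for i, c in enumerate(chunks)}
    results = [None] * len(chunks)
    for fut in as_completed(futures):   # tasks report completion independently
        results[futures[fut]] = fut.result()

squares = [x for part in results for x in part]   # reassembled in order
```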
Procedia PDF Downloads 12
940 Design and Optimization of a Small Hydraulic Propeller Turbine
Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink
Abstract:
A design and optimization procedure is proposed and developed to provide the geometry of a high efficiency compact hydraulic propeller turbine for low head. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and to the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as a starting point for the hydrodynamic optimization procedure, carried out using CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial solver for the Reynolds-averaged Navier-Stokes (RANS) equations that exploits the axial-symmetric geometry of the machine. The geometries generated within the database are calculated in order to determine the corresponding overall performance. In order to speed up the optimization calculation, an artificial neural network (ANN) based on an objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, intended for applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically achieve the targeted net head and the maximum total-to-total internal efficiency.
Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design
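The fundamental design parameters named above have standard non-dimensional definitions, and the preliminary-design step amounts to evaluating them from the duty point. A minimal sketch follows, using the textbook definitions (dimensionless specific speed, work coefficient psi = gH/U^2, flow coefficient phi = c_m/U); the duty-point numbers and hub/tip ratio are illustrative assumptions, not the paper's machine, and the paper's own statistical correlations are not reproduced.

```python
# Hedged sketch: textbook non-dimensional design parameters for an
# illustrative low-head propeller turbine. All numbers are assumptions.
import math

g = 9.81
H = 3.0                            # net head, m (illustrative low head)
Q = 1.2                            # flow rate, m^3/s
omega = 2 * math.pi * 750 / 60     # rotational speed, rad/s (750 rpm assumed)
D = 0.45                           # runner tip diameter, m (assumed)

specific_speed = omega * math.sqrt(Q) / (g * H) ** 0.75   # dimensionless

U = omega * D / 2                  # blade tip speed, m/s
psi = g * H / U**2                 # work coefficient

hub_tip = 0.4                                   # hub/tip ratio (assumed)
A = math.pi * D**2 / 4 * (1 - hub_tip**2)       # annulus flow area, m^2
phi = (Q / A) / U                               # flow coefficient c_m / U
```

A high specific speed, as computed here, is what steers the preliminary design toward an axial (propeller) runner rather than a radial one.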
Procedia PDF Downloads 150
939 Examination of How Do Smart Watches Influence the Market of Luxury Watches with Particular Regard of the Buying-Reasons
Authors: Christopher Benedikt Jakob
Abstract:
In our current society, there is no need to look at a wristwatch to know the exact time: smartphones, the clock in the car, and the computer inform us about the time too. For hundreds of years, luxury watches have held a fascination for human beings. Consumers buy watches that cost thousands of euros, although they could buy much cheaper watches that also fulfill the function of indicating the correct time. This shows that functional value plays a minor role among the buying reasons for luxury watches. In recent years, people have shown an increased demand for tracking data such as their walking distance per day or their sleep. Smart watches enable consumers to obtain this information. There is a trend of people intending to optimise parts of their social life, giving them the impression that they can optimise themselves as human beings. With the help of smart watches, they can optimise parts of their productivity and realise their targets at the same time. These smart watches are also offered as luxury models, and the question is: how will customers of traditional luxury watches react? This study therefore intends to answer the question of why people are willing to spend an enormous amount of money on luxury watches. The self-expression model, the relationship basis model, the functional benefit representation model and the means-end theory are chosen as an appropriate methodology to find reasons why human beings purchase specific luxury watches and luxury smart watches. This evaluative approach further discusses these strategies with respect to, for example, whether consumers buy luxury watches or smart watches to express the current self or the ideal self, and whether human beings make decisions based on expected results. The research critically evaluates how relationships are compared on the basis of their advantages.
Luxury brands offer socio-emotional advantages, such as the social function of identification, and the strong brand personality of luxury watches and luxury smart watches helps customers structure and retrieve brand knowledge, which simplifies the process of decision-making. One of the goals is to identify whether customers know why they like specific luxury watches and dislike others, although they are produced in the same country and cost comparable prices. It is very obvious that the market for luxury watches, especially for luxury smart watches, is changing much faster than it has in the past. Therefore, the research examines the market-changing parameters in detail.
Keywords: buying-behaviour, brand management, consumer, luxury watch, smart watch
Procedia PDF Downloads 210
938 Multi-Objective Multi-Period Allocation of Temporary Earthquake Disaster Response Facilities with Multi-Commodities
Authors: Abolghasem Yousefi-Babadi, Ali Bozorgi-Amiri, Aida Kazempour, Reza Tavakkoli-Moghaddam, Maryam Irani
Abstract:
All over the world, natural disasters (e.g., earthquakes, floods, volcanoes and hurricanes) cause many deaths. Earthquakes are catastrophic events, triggered by unusual phenomena, that lead to great losses around the world. Such disasters strongly demand substantial long-term help and relief, which can be hard to manage. Supplies and facilities are very important challenges after any earthquake and should be prepared for the disaster regions to satisfy the demands of the people suffering from the earthquake. This paper formulates a disaster response facility allocation problem for disaster relief operations as a mathematical programming model. Earthquake victims need not only consumable commodities (e.g., food and water) but also non-consumable commodities (e.g., clothes) to protect themselves. Paying attention to disaster points and people's demands is therefore necessary, and both consumable and non-consumable commodities are considered in the presented model. The paper presents a multi-objective, multi-period mathematical programming model that simultaneously minimizes the average weighted response time and the total operational cost, including penalty costs for unmet demand and unused commodities. Furthermore, a Chebyshev multi-objective solution procedure is applied as a powerful algorithm to solve the proposed model. Finally, to illustrate the model's applicability, a case study of the Tehran earthquake is presented, and a sensitivity analysis is carried out to validate the model.
Keywords: facility location, multi-objective model, disaster response, commodity
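The weighted Chebyshev scalarization mentioned above scores each candidate solution by its worst weighted deviation from the ideal point and keeps the candidate minimizing that score. A minimal sketch over a tiny discrete set of candidate allocations follows; the plan names, objective values and weights are illustrative, not the Tehran case study, and a real instance would use a mathematical programming solver rather than enumeration.

```python
# Hedged sketch of weighted Chebyshev scalarization over discrete candidates.
# Objectives (both minimized): average weighted response time, total cost.
# All data below are made-up illustrations.

def chebyshev_best(candidates, weights, ideal):
    """candidates: list of (name, (f1, f2, ...)); returns the best one."""
    def score(objs):
        # Worst weighted deviation from the ideal (utopia) point:
        return max(w * (f - z) for w, f, z in zip(weights, objs, ideal))
    return min(candidates, key=lambda c: score(c[1]))

candidates = [("plan_A", (4.0, 120.0)),   # fast but expensive
              ("plan_B", (6.0, 90.0)),    # cheap but slow
              ("plan_C", (5.0, 100.0))]   # balanced
ideal = (4.0, 90.0)            # best value of each objective taken alone
best = chebyshev_best(candidates, weights=(1.0, 0.1), ideal=ideal)
```

Varying the weights traces out different compromise solutions on the Pareto front, which is how such a procedure produces the trade-off curves a decision-maker then chooses from.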
Procedia PDF Downloads 257
937 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without using downsampling or pooling procedures. We maintain this consistent, high grade fidelity efficiently chiefly through two means: (1) we use a statistically-principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models on interactive segmentation tasks using high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high resolution video frames, including green screen and various composited scenes with corresponding hand-crafted, pixel-level segmentations. Our work improves on the state of the art in segmentation fidelity with high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
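The channel-compression idea described above, modulating the number of hypercolumn features with a statistically-principled decomposition, can be illustrated with a truncated SVD, one standard such decomposition; the paper's exact procedure is not specified here, and the feature shapes below are toy values, not the model's.

```python
# Hedged sketch: reducing hypercolumn channel count by projecting onto the
# top-k right singular vectors (truncated SVD). Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for per-pixel hypercolumn features: (pixels, channels)
features = rng.standard_normal((64 * 64, 256))

def compress_channels(x, k):
    """Project the channel dimension onto its top-k right singular vectors."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T            # (pixels, k) compressed features

compressed = compress_channels(features, k=32)
```

The compressed features keep the directions of greatest variance in channel space, which is what lets the network operate on far fewer channels per pixel while staying at native resolution.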
Procedia PDF Downloads 120
936 Effect of Human Resources Accounting on Financial Performance of Banks in Nigeria
Authors: Oti Ibiam, Alexanda O. Kalu
Abstract:
Human resource accounting is the process of identifying and measuring data about human resources and communicating this information to interested parties in order to make meaningful investment decisions. In recent times, firms' focus has shifted to human resource accounting so as to ensure efficiency and effectiveness in their operations. This study focused on the effect of human resource accounting on the financial performance of banks in Nigeria. The problem that led to the study revolves around the current trend whereby Nigerian banks do not efficiently account for the input of human resources in their annual statements: instead of capitalizing human resources in their statement of financial position, they expense them in their income statement, thereby reducing their profit after tax. The broad objective of this study is to determine the extent to which human resource accounting affects the financial performance and value of Nigerian banks. This study is considered significant because there are still, universally, grey areas to be sorted out on the subject matter of human resource accounting. To achieve the study objectives, the researchers gathered data from sixteen commercial banks. Data were collected from both primary and secondary sources using an ex-post facto research design. The data collected were then tabulated and analyzed using multiple regression analysis. The result for hypothesis one revealed that there is a significant relationship between capitalized human resource cost and post-capitalization profit before tax of banks in Nigeria. The finding for hypothesis two revealed that the association between capitalized human resource cost and post-capitalization net worth of banks in Nigeria is significant. The finding for hypothesis three revealed that there is a significant difference between pre- and post-capitalization profit before tax of banks in Nigeria.
The study concludes that human resource accounting positively influenced the financial performance of banks in Nigeria within the period under study. It is recommended that standards be set for human resource identification and measurement in the banking sector, and that the management of commercial banks in Nigeria develop a proper appreciation of human resource accounting. This will enable managers to take the right decisions regarding investment in human resources. The study also recommends that policies for enhancing the post-capitalization profit before tax of banks in Nigeria pay great attention to capitalized human resource cost, net worth and total assets, as these variables significantly influenced the post-capitalization profit before tax of the studied banks. The limitation of the study centers on the limited number of years and companies adopted for the study.
Keywords: capitalization, human resources cost, profit before tax, net worth
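The estimation step behind the hypotheses above, profit before tax regressed on capitalized human resource cost, net worth and total assets across the sixteen banks, is an ordinary multiple regression. A minimal sketch follows with synthetic figures generated from known coefficients purely to show the fit; these are not the banks' data, and the real study would report significance tests alongside the coefficients.

```python
# Hedged sketch of the multiple regression: profit-before-tax on capitalized
# HR cost, net worth, and total assets, via ordinary least squares.
# All figures are synthetic (generated from known coefficients).
import numpy as np

rng = np.random.default_rng(1)
n = 16                                       # sixteen banks, as in the study
hr_cost = rng.uniform(5, 50, n)              # capitalized HR cost (synthetic)
net_worth = rng.uniform(100, 900, n)
total_assets = rng.uniform(500, 5000, n)

# Noiseless synthetic response with known coefficients, so OLS recovers them:
profit = 10.0 + 2.0 * hr_cost + 0.05 * net_worth + 0.01 * total_assets

X = np.column_stack([np.ones(n), hr_cost, net_worth, total_assets])
coefs, *_ = np.linalg.lstsq(X, profit, rcond=None)   # [intercept, b1, b2, b3]
```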
Procedia PDF Downloads 150
935 A Step Magnitude Haptic Feedback Device and Platform for Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training
Authors: Biki Sarmah, Priyanko Raj Mudiar
Abstract:
In the modern world of remotely interactive virtual-reality-based learning and teaching, including professional skill-building training and acquisition practices as well as data acquisition and robotic systems, the application of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques in 3D holographic audio-visual platforms has been a coveted dream of many scholars, professionals, scientists, and students. The integration of kinaesthetic vibrotactile haptic perception with an actuated step-magnitude contact profiloscopy in augmented-reality-based learning platforms and professional training can be implemented using a carefully calculated and well-coordinated image telemetry technique, including remote data mining and control. A real-time, computer-aided (PLC-SCADA) field-calibration-based algorithm must be designed for this purpose. Most importantly, in order to actually interact with 3D holographic models displayed on a remote screen using remote laser image telemetry and control, all spatio-physical parameters, such as cardinal alignment, gyroscopic compensation, surface profile, and thermal composition, must be implemented using zero-order type 1 actuators (or transducers), because they provide zero hysteresis, zero backlash, and low dead time, as well as linear, fully controllable, intrinsically observable, and smooth performance with the least error compensation, while ensuring the best possible ergonomic comfort for users.
Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator
Procedia PDF Downloads 166934 Experimental Investigation of Beams Having Spring Mass Resonators
Authors: Somya R. Patro, Arnab Banerjee, G. V. Ramana
Abstract:
A flexural beam carrying elastically mounted concentrated masses, such as engines, motors, oscillators, or vibration absorbers, is often encountered in mechanical, civil, and aeronautical engineering domains. To prevent resonance conditions, designers must predict the natural frequencies of such a constrained beam system. This paper presents experimental and analytical studies on vibration suppression in a cantilever beam with a tip mass, using a spring-mass resonator to achieve local resonance conditions. The system consists of a 3D-printed polylactic acid (PLA) beam screwed to the base plate of the shaker system. The free end is fitted with an accelerometer, which also acts as the tip mass. A spring and a mass are attached at the bottom to replicate the mechanism of the spring-mass resonator. The Fast Fourier Transform (FFT) algorithm converts time-acceleration records into frequency-amplitude plots, from which transmittance is calculated as a function of the excitation frequency. The mathematical formulation is based on the transfer matrix method, and the governing differential equations are based on Euler-Bernoulli beam theory. The experimental results are successfully validated against the analytical results, providing essential confidence in the proposed methodology. The beam spring-mass system is then converted to an equivalent two-degree-of-freedom system, from which the frequency response function is obtained. The H2 optimization technique is also used to obtain a closed-form expression for the optimum spring stiffness, which shows the influence of spring stiffness on the system's natural frequency and vibration response.Keywords: euler bernoulli beam theory, fast fourier transform, natural frequencies, polylactic acid, transmittance, vibration absorbers
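The FFT-based transmittance calculation described in this abstract can be sketched in NumPy. The signals below are synthetic stand-ins (a 12 Hz excitation and a tip response three times as strong are assumptions for illustration, not the study's measurements):

```python
import numpy as np

fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)      # 5 s acceleration record

# Synthetic base and tip accelerations for illustration only:
# the tip responds three times as strongly at an assumed 12 Hz excitation.
a_base = np.sin(2 * np.pi * 12 * t)
a_tip = 3.0 * a_base

# One-sided amplitude spectra via FFT
freqs = np.fft.rfftfreq(len(t), 1 / fs)
A_base = np.abs(np.fft.rfft(a_base))
A_tip = np.abs(np.fft.rfft(a_tip))

# Transmittance: ratio of response amplitude to excitation amplitude,
# evaluated only at frequency bins that are actually excited
mask = A_base > 1e-6 * A_base.max()
transmittance = A_tip[mask] / A_base[mask]
peak_freq = freqs[np.argmax(A_base)]   # dominant excitation frequency
```

Sweeping the excitation frequency and repeating this ratio at each step yields the transmittance-versus-frequency curve from which the local resonance dip is read off.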
Procedia PDF Downloads 105933 Searching SNPs Variants in Myod-1 and Myod-2 Genes Linked to Body Weight in Gilthead Seabream, Sparus aurata L.
Authors: G. Blanco-Lizana, C. García-Fernández, J. A. Sánchez
Abstract:
Growth is a productive trait regulated by a large and complex gene network whose members have very different effects. Some of them (candidate genes) have a larger effect and are excellent resources in which to search for polymorphisms correlated with differences in growth rates. This study focused on the identification of single nucleotide polymorphisms (SNPs) in MyoD-1 and MyoD-2, members of the family of myogenic regulatory factors (MRFs) with a key role in the differentiation and development of muscular tissue, and on their evaluation as potential markers in genetic selection programs for growth in gilthead sea bream (Sparus aurata). By sequencing 1,968 bp of the MyoD-1 gene [AF478568.1] and 1,963 bp of the MyoD-2 gene [AF478569.1] in 30 seabream (classified as unrelated by microsatellite markers), three SNPs were identified in each gene (SaMyoD-1_D2100A (D indicates a deletion), SaMyoD-1_A2143G, and SaMyoD-1_A2404G; SaMyoD-2_A785C, SaMyoD-2_C1982T, and SaMyoD-2_A2031T). The relationships between SNPs and body weight were evaluated by SNP genotyping of 53 breeders from two broodstocks (A: 18♀-9♂; B: 16♀-10♂) and 389 offspring divided into two groups (slow- and fast-growth) with significant differences in growth at 18 months of development (A18Slow: N=107, A18Fast: N=103, B18Slow: N=92, and B18Fast: N=87) (Borrell et al., 2011). Haplotypes and diplotypes were reconstructed from genotype data with the Phase 2.1 software. Differences among the means of the different diplotypes were tested by one-way ANOVA followed by a post-hoc Tukey test. The association analysis indicated that no single SNP showed a significant effect on body weight. However, when the analysis was carried out on haplotype data, the DGG haplotype of the MyoD-1 gene and the CCA haplotype of the MyoD-2 gene were associated with lower body weight. This haplotype combination always showed the lowest mean body weight (P<0.05) in three (A18Slow, A18Fast, and B18Slow) of the four groups tested.
Individuals with the DGG haplotype of the MyoD-1 gene showed 25.5% lower mean body weight, and those with the CCA haplotype of the MyoD-2 gene 14-18% lower. Although further studies are needed to validate the role of these three SNPs as markers for body weight, the polymorphism-trait association established in this work creates promising expectations for the use of these variants as a genetic tool in future gilthead seabream breeding programs.Keywords: growth, MyoD-1 and MyoD-2 genes, selective breeding, SNP-haplotype
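The diplotype comparison described above (one-way ANOVA followed by a post-hoc Tukey test) can be sketched with SciPy. The group weights below are simulated placeholders, not the study's data; the group names and effect sizes are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical body weights (g) for three diplotype groups; the
# low-weight haplotype carriers are simulated ~25% lighter,
# mirroring the trend reported in the abstract.
w_dgg = rng.normal(300.0, 20.0, 100)    # DGG-haplotype carriers (illustrative)
w_het = rng.normal(380.0, 20.0, 100)    # intermediate diplotype
w_other = rng.normal(400.0, 20.0, 100)  # remaining diplotypes

# One-way ANOVA across diplotype groups
f_stat, p_value = stats.f_oneway(w_dgg, w_het, w_other)

# Post-hoc Tukey HSD: pairwise comparisons between groups
tukey = stats.tukey_hsd(w_dgg, w_het, w_other)
pairwise_p = tukey.pvalue          # 3 x 3 matrix of pairwise p-values
```

The ANOVA flags whether any group mean differs; the Tukey matrix then identifies which specific diplotype pairs drive the difference, which is how the lowest-weight haplotype combination is singled out.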
Procedia PDF Downloads 332932 Using Rainfall Simulators to Design and Assess the Post-Mining Erosional Stability
Authors: Ashraf M. Khalifa, Hwat Bing So, Greg Maddocks
Abstract:
Changes to the mining environmental approvals process in Queensland have been rolled out under the MERFP Act (2018). This includes requirements for a Progressive Rehabilitation and Closure Plan (PRC Plan). Key considerations of the landform design report within the PRC Plan must include: (i) identification of materials available for landform rehabilitation, including their ability to achieve the required landform design outcomes, (ii) erosion assessments to determine landform heights, gradients, profiles, and material placement, (iii) slope profile design considering the interactions between soil erodibility, rainfall erosivity, landform height, gradient, and vegetation cover to identify acceptable erosion rates over a long-term average, and (iv) an analysis of future stability based on the factors described above, e.g., erosion and/or landform evolution modelling. ACARP funded an extensive and thorough erosion assessment program using rainfall simulators from 1998 to 2010. The ACARP program included laboratory assessment of 35 soil and spoil samples from 16 coal mines and samples from a gold mine in Queensland, using a 3 x 0.8 m laboratory rainfall simulator. The reliability of the laboratory rainfall simulator was verified through field measurements using larger 20 x 5 m flumes and catchment-scale measurements at three sites (3 different catchments with an average area of 2.5 ha each). Soil cover systems are a primary component of a constructed mine landform. The primary functions of a soil cover system are to sustain vegetation and limit the infiltration of water and oxygen into the underlying reactive mine waste. If the external surface of the landform erodes, the functions of the cover system cannot be maintained, and the cover system will most likely fail. Assessing a constructed landform's potential 'long-term' erosion stability requires defensible erosion rate thresholds below which rehabilitation landform designs are considered acceptably erosion-resistant or 'stable'.
The paper explains the process used to quantify erosion rates with rainfall simulators (flumes), which measure rill and inter-rill erosion on bulk samples under laboratory conditions or on in-situ material under field conditions.Keywords: open-cut, mining, erosion, rainfall simulator
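Long-term erosion rates of the kind this abstract compares against thresholds are often screened with an empirical model such as the Revised Universal Soil Loss Equation (RUSLE). RUSLE is not named in the abstract, and the factor values below are illustrative assumptions, not the ACARP measurements:

```python
# RUSLE: A = R * K * LS * C * P, an empirical estimate of long-term
# average annual soil loss (t/ha/yr).
def rusle_soil_loss(R, K, LS, C, P):
    """R: rainfall erosivity, K: soil erodibility (e.g. derived from
    rainfall-simulator flume tests), LS: slope length-steepness factor,
    C: vegetation cover factor, P: support practice factor."""
    return R * K * LS * C * P

# Illustrative factor values only; site-specific factors must be measured.
loss_bare = rusle_soil_loss(R=3000, K=0.04, LS=2.5, C=1.0, P=1.0)
loss_vegetated = rusle_soil_loss(R=3000, K=0.04, LS=2.5, C=0.05, P=1.0)
```

Comparing the bare and vegetated cases against a defensible erosion-rate threshold is one way a slope profile design (height, gradient, cover) can be screened for 'stable' versus 'unstable' outcomes; the K factor is where rainfall-simulator erodibility data feeds in.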
Procedia PDF Downloads 101