Search results for: fixed jacket offshore platform
315 Thermoplastic-Intensive Battery Trays for Optimum Electric Vehicle Battery Pack Performance
Authors: Dinesh Munjurulimana, Anil Tiwari, Tingwen Li, Carlos Pereira, Sreekanth Pannala, John Waters
Abstract:
With the rapid transition to electric vehicles (EVs) across the globe, car manufacturers need integrated and lightweight solutions for the battery packs of these vehicles. An integral part of a battery pack is the battery tray, which constitutes a significant portion of the pack's overall weight. Based on the functional requirements, cost targets, and available packaging space, a range of materials (metals, composites, and plastics) is often used to develop these battery trays. This paper considers the design and development of integrated thermoplastic-intensive battery trays, using the available packaging space from a representative EV battery pack. As a proposed alternative, multiple concepts are presented that integrate several connected systems, such as the cooling plates and underbody impact-protection parts of a multi-piece incumbent battery pack. The resulting digital prototype was evaluated for several mechanical performance measures, such as mechanical shock, drop, crush resistance, modal analysis, and torsional stiffness. The performance of this alternative design is then compared with the incumbent solution. In addition, insights are gleaned into how these novel approaches can be optimized to meet or exceed the performance of incumbent designs. Preliminary manufacturing feasibility of the optimal solution using injection molding and other commonly used manufacturing methods for thermoplastics is briefly explained. Then numerical and analytical evaluations are performed to show a representative Pareto front of cost vs. volume of the production parts. The proposed solution is observed to offer weight savings of up to 40% at the component level and the elimination of up to two systems in the battery pack of a typical battery EV, while offering the potential to meet the required performance measures highlighted above.
These conceptual solutions are also observed to potentially offer secondary benefits, such as improved thermal and electrical isolation, and to achieve complex geometrical features, thus demonstrating the ability to use the complete packaging space available in the vehicle platform considered. The detailed study presented in this paper serves as a valuable reference for researchers across the globe working on the development of EV battery packs, especially those with an interest in the potential of employing alternate solutions as part of a mixed-material system to help capture untapped opportunities to optimize performance and meet critical application requirements.
Keywords: thermoplastics, lightweighting, part integration, electric vehicle battery packs
Procedia PDF Downloads 205
314 The Commodification of Internet Culture: Online Memes and Differing Perceptions of Their Commercial Uses
Authors: V. Esteves
Abstract:
As products of participatory culture, internet memes represent a global form of interaction with online culture. These digital objects draw upon a rich historical engagement with remix practices that dates back decades, from the copy-and-paste practices of Dadaism and punk to the re-appropriation techniques of the Situationist International; memes echo a long-established form of cultural creativity that pivots on the art of the remix. Online culture has eagerly embraced the changes that Web 2.0 afforded in terms of making use of remixing as an accessible form of societal expression, bridging these remix practices of the past into a more widely available and accessible platform. Memes embody the idea of 'intercreativity', allowing global creative collaboration to take place through networked digital media; they reflect the core values of participation and interaction that are present throughout much internet discourse whilst also existing in a historical remix continuum. Memes hold the power of cultural symbolism manipulated by global audiences through which societies make meaning, as these remixed digital objects have an elasticity and low literacy threshold that allow for a democratic form of cultural engagement and meaning-making by and for users around the world. However, because memes are so elastic, their ability to be re-appropriated by other powers for reasons beyond their original intention has become evident. Recently, corporations have made use of internet memes for advertising purposes, engaging in the circulation and re-appropriation of internet memes in commercial spaces, which has, in turn, further complicated the relation between online users and memes' democratic possibilities.
By engaging in a widespread online ethnography supplemented by in-depth interviews with meme makers, this research was able not only to track different online meme uses through commercial contexts, but also to engage in qualitative discussions with meme makers and users regarding their perception and experience of these varying commercial uses of memes. These can be broadly put within two categories: internet memes that are turned into physical merchandise, and the use of memes in advertising to sell other (non-meme-related) products. Whilst there has been considerable acceptance of the former type of commercial meme use, the use of memes in adverts to sell unrelated products has been met with resistance. The changes in reception regarding commercial meme use are dependent on ideas of cultural ownership and perceptions of authorship, ultimately uncovering underlying socio-cultural ideologies that come to the fore within these overlapping contexts. Additionally, this adoption of memes by corporate powers echoes the recuperation process that the Situationist International endured, creating a further link with older remix cultures and their lifecycles.
Keywords: commodification, internet culture, memes, recuperation, remix
Procedia PDF Downloads 145
313 Clinical Application of Measurement of Eyeball Movement for Diagnosis of Autism
Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii
Abstract:
This paper presents the development of an objective index based on the measurement of subtle eyeball movement to diagnose autism. Assessments of developmental disability vary, and the diagnosis depends on the subjective judgment of professionals. Therefore, a supplementary inspection method that will enable anyone to obtain the same quantitative judgment is needed. In conventional autism studies, diagnoses are made based on a comparison of the time spent gazing at an object, but the results do not match. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the overlapping parts based on an afterimage. Then we developed an objective evaluation indicator that distinguishes non-autistic and autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and his/her eyeballs appear fixed at that point, the eyes perform subtle fixational movements (i.e., tremors, drifts, microsaccades) to keep the retinal image clear. In particular, microsaccades are linked with nerves and reflect the mechanism that processes sight in the brain. We converted the differences between these movements into numbers. The process of the conversion is as follows: 1) Select the pixels indicating the subject's pupil from images of captured frames. 2) Set up a reference image, known as an afterimage, from the pixels indicating the subject's pupil. 3) Divide the pupil of the subject into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels in the part overlapping the present pixels based on the afterimage. 5) Process the images with precision at 24-30 fps from a camera and convert the amount of change in the pixels of the subtle movements of the right and left eyeballs into numbers.
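The quadrant-overlap counting in the steps above can be sketched in a few lines; a minimal illustration, assuming the pupil and afterimage are given as boolean masks of equal shape (the function names and mask representation are assumptions, not the authors' implementation):

```python
import numpy as np

def quadrant_overlap_counts(pupil_mask, afterimage_mask):
    """Count, per quadrant, the pixels where the current pupil mask
    overlaps the afterimage (reference) mask. The pupil is divided
    into four parts about the array centre, as in steps 3-4 above."""
    h, w = pupil_mask.shape
    cy, cx = h // 2, w // 2
    overlap = pupil_mask & afterimage_mask
    return {
        "top_left": int(overlap[:cy, :cx].sum()),
        "top_right": int(overlap[:cy, cx:].sum()),
        "bottom_left": int(overlap[cy:, :cx].sum()),
        "bottom_right": int(overlap[cy:, cx:].sum()),
    }

def frame_change(prev_counts, curr_counts):
    """Per-frame 'amount of change': total absolute difference in the
    quadrant counts between consecutive frames (step 5), accumulated
    separately for the right and left eye."""
    return sum(abs(curr_counts[k] - prev_counts[k]) for k in curr_counts)
```

Running this per frame at 24-30 fps for each eye yields the right/left numerical series whose difference the study compares.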
The difference in the area of the amount of change is obtained by measuring the difference between the afterimage in consecutive frames and the present frame. We set this amount of change as the quantity of subtle eyeball movement. This method made it possible to detect changes in eyeball vibration as numerical values. By comparing the numerical values between the right and left eyes, we found that there is a difference in how much they move. We compared this difference in movement between non-autistic and autistic people and analyzed the result. Our research subjects consisted of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values through pursuit movements and fixations. We converted the difference in subtle movements between the right and left eyes into a graph and defined it as a multidimensional measure. Then we set the identification border using the density function of the distribution, the cumulative frequency function, and an ROC curve. With this, we established an objective index to determine autism, normal, false positive, and false negative cases.
Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve
Procedia PDF Downloads 278
312 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx. This limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and a predictive combustion model, which is developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. Substantial numbers of cases are tested for different engine configurations over a large span of speed and load points.
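As a hedged illustration of the kind of semi-empirical correlation such a model combines with the in-cylinder quantities named above (burned-zone temperature, O2 concentration, combustion duration), a Zeldovich-inspired rate sketch is shown below; the coefficients and the 0.5 exponent are placeholder assumptions to be calibrated against measured NOx, not values from the paper:

```python
import math

def nox_semi_empirical(t_burn_K, o2_frac, duration_s, a=1.0e16, ea=38000.0):
    """Illustrative semi-empirical NOx estimate of the thermal-NO form
    rate ~ A * exp(-Ea/T) * [O2]^0.5, integrated over the combustion
    duration (IVC to EVO).  a, ea and the exponent are placeholders
    that would be fitted to engine test data."""
    rate = a * math.exp(-ea / t_burn_K) * math.sqrt(max(o2_frac, 0.0))
    return rate * duration_s
```

Such a physics-shaped term is what lets the model extrapolate beyond the calibrated region, with the statistical/machine-learning layer correcting the residual error.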
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its various advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
Procedia PDF Downloads 114
311 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and full of noise, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory networks for the prediction of corporate bond prices is discussed. Long short-term memory (LSTM) networks have been widely used in the literature for various sequence learning tasks in domains such as machine translation, speech recognition, etc. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results when compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, due to a memory function that traditional neural networks fail to capture.
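Before an LSTM can be trained on a price series, the series must be cut into fixed-length input windows with next-step targets; a minimal sketch of that preparation step (the function name and array shapes are illustrative assumptions, not the study's code):

```python
import numpy as np

def make_windows(prices, seq_len):
    """Build (input sequence, next-day target) pairs for an LSTM from a
    1-D price series.  Returns X of shape (n, seq_len, 1) -- the usual
    (samples, timesteps, features) layout -- and y of shape (n,)."""
    prices = np.asarray(prices, dtype=float)
    n = len(prices) - seq_len
    X = np.stack([prices[i:i + seq_len] for i in range(n)])[..., None]
    y = prices[seq_len:]
    return X, y
```

The same routine serves all three input lengths compared in the study (3-, 7- and 14-day windows) by changing `seq_len`.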
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM-based model are discussed with respect to varying input sequences (three days, seven days, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which has resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time-series models (ARIMA), shallow neural networks, and the three different LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored more within the asset management industry.
Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
Procedia PDF Downloads 136
310 4D Monitoring of Subsurface Conditions in Concrete Infrastructure Prior to Failure Using Ground Penetrating Radar
Authors: Lee Tasker, Ali Karrech, Jeffrey Shragge, Matthew Josh
Abstract:
Monitoring the deterioration of concrete infrastructure is an important assessment tool for engineers, and detecting deterioration within a structure can be difficult. If a failure crack, or fluid seepage through such a crack, is observed from the surface, the source location of the deterioration is often not known. Geophysical methods are used to assist engineers with assessing the subsurface conditions of materials. Techniques such as Ground Penetrating Radar (GPR) provide information on the location of buried infrastructure such as pipes and conduits, the positions of reinforcements within concrete blocks, and regions of voids/cavities behind tunnel lining. This experiment underlines the application of GPR as an infrastructure-monitoring tool to highlight and monitor regions of possible deterioration within a concrete test wall due to an increase in the generation of fractures; in particular, during a period of applied load up to and including structural failure. A three-point load was applied to a concrete test wall of dimensions 1700 × 600 × 300 mm in increments of 10 kN, until the wall structurally failed at 107.6 kN. At each increment of applied load, the load was kept constant and the wall was scanned using GPR along profile lines across the wall surface. The measured radar amplitude responses of the GPR profiles, at each applied load interval, were reconstructed into depth-slice grids and presented at fixed depth-slice intervals. The corresponding depth-slices were subtracted from each dataset to compare the radar amplitude response between datasets and monitor for changes in the radar amplitude response. At lower values of applied load (i.e., 0-60 kN), few changes were observed in the difference of radar amplitude responses between datasets.
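The depth-slice differencing described above amounts to subtracting corresponding amplitude grids and expressing the change relative to the 0 kN baseline; a minimal sketch (the array representation and the epsilon guard against division by zero are assumptions):

```python
import numpy as np

def amplitude_change_percent(slice_baseline, slice_loaded):
    """Percentage change in GPR amplitude between two depth-slice grids
    on the same spatial sampling, e.g. the 0 kN baseline and a loaded
    scan.  Absolute amplitudes are compared cell by cell."""
    base = np.abs(np.asarray(slice_baseline, dtype=float))
    loaded = np.abs(np.asarray(slice_loaded, dtype=float))
    eps = 1e-12  # avoid division by zero in dead cells
    return 100.0 * (loaded - base) / (base + eps)
```

A cell whose amplitude quadruples between scans registers as a 300% increase, the magnitude of change reported near the failure crack.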
At higher values of applied load (i.e., 100 kN), closer to structural failure, larger differences in radar amplitude response between datasets were highlighted in the GPR data; up to a 300% increase in radar amplitude response at some locations between the 0 kN and 100 kN radar datasets. Distinct regions were observed in the 100 kN difference dataset (i.e., 100 kN minus 0 kN) close to the location of the final failure crack. The key regions observed were a conical feature located between approximately 3.0-12.0 cm depth from the surface and a vertical linear feature located at approximately 12.1-21.0 cm depth from the surface. These key regions have been interpreted as locations exhibiting an increased change in pore space due to increased mechanical loading, an increase in the volume of micro-cracks, or the development of a larger macro-crack. The experiment showed that GPR is a useful geophysical monitoring tool to assist engineers with highlighting and monitoring regions of large changes in radar amplitude response that may be associated with locations of significant internal structural change (e.g., crack development). GPR is a non-destructive technique that is fast to deploy in a production setting. GPR can assist with reducing risk and costs in future infrastructure maintenance programs by highlighting and monitoring locations within the structure exhibiting large changes in radar amplitude over calendar time.
Keywords: 4D GPR, engineering geophysics, ground penetrating radar, infrastructure monitoring
Procedia PDF Downloads 180
309 Semiotics of the New Commercial Music Paradigm
Authors: Mladen Milicevic
Abstract:
This presentation will address how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it will deal with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the 'older' music is still being digitized. Once the music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file to which any kind of NCapture-style analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from its audio features. Thus, it has been established that a successful pop song must include: 100 bpm or more; an 8-second intro; use of the pronoun 'you' within 20 seconds of the start of the song; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; an average of 7 repetitions of the title; and the creation of an expectation that is fulfilled in the title.
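The pop-song heuristics listed above can be expressed as a simple rule check; this sketch encodes only the published checklist (with hypothetical parameter names), whereas Polyphonic HMI's actual system was a proprietary machine-learning model, not a rule set:

```python
def pop_hit_checklist(bpm, intro_s, first_you_s, bridge_s, title_repeats):
    """Check a song's metrics against the pop heuristics cited above:
    >= 100 bpm; intro of 8 s or less; 'you' within the first 20 s;
    bridge (middle 8) between 2:00 and 2:30; ~7 title repetitions.
    Times are in seconds from the start of the song."""
    return {
        "tempo": bpm >= 100,
        "intro": intro_s <= 8,
        "you_early": first_you_s <= 20,
        "bridge_window": 120 <= bridge_s <= 150,
        "title_repeats": title_repeats >= 7,
    }
```

The per-criterion dictionary makes it easy to see which heuristic a candidate song misses rather than returning a single pass/fail verdict.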
For a country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and the creation of an expectation that is fulfilled within 60 seconds. This approach to commercial popular music minimizes the human influence on which 'artist' a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analysis programs are replacing them in an attempt to minimize the investment risk of the panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste, through the narrow bottleneck of the above-mentioned music selection by the record labels, has created some very peculiar effects not only on the taste of popular music consumers but also on the creative chops of music artists. The meaning of this semiological shift is the main focus of this research and paper presentation.
Keywords: music, semiology, commercial, taste
Procedia PDF Downloads 393
308 Climate Change and Rural-Urban Migration in Brazilian Semiarid Region
Authors: Linda Márcia Mendes Delazeri, Dênis Antônio Da Cunha
Abstract:
Over the past few years, the evidence that human activities have altered the concentration of greenhouse gases in the atmosphere has become stronger, indicating that this accumulation is the most likely cause of the climate change observed so far. The risks associated with climate change, although uncertain, have the potential to increase social vulnerability, exacerbating existing socioeconomic challenges. Developing countries are potentially the most affected by climate change, since they have less capacity to adapt and are the most dependent on agricultural activities, one of the sectors in which the largest negative impacts are expected. In Brazil, specifically, the localities which form the semiarid region are expected to be among the most affected, due to existing irregularity in rainfall and high temperatures, in addition to economic and social factors endemic to the region. Given the strategic limitations to handle the environmental shocks caused by climate change, an alternative adopted in response to these shocks is migration. Understanding the specific features of migration flows, such as duration, destination, and composition, is essential to understand the impacts of migration on origin and destination locations and to develop appropriate policies. Thus, this study aims to examine whether climatic factors have contributed to rural-urban migration in semiarid municipalities in the recent past and how these migration flows will be affected by future scenarios of climate change. The study was based on the microeconomic theory of utility maximization, in which, in deciding to leave the countryside and move to an urban area, the individual seeks to maximize his or her utility. Analytically, we estimated an econometric model using fixed-effects modeling, and the results confirmed the expectation that climate drivers are crucial for the occurrence of rural-urban migration.
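The fixed-effects estimation mentioned above can be sketched with the standard within transformation: demean the outcome and the regressors inside each panel unit (here, a municipality), then run pooled OLS on the demeaned data. A minimal illustration, not the authors' exact specification:

```python
import numpy as np

def fixed_effects_beta(y, X, groups):
    """Within (fixed-effects) estimator: subtract each group's mean from
    y and from every column of X, which absorbs time-invariant unit
    effects, then fit the slopes by least squares on the demeaned data."""
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    groups = np.asarray(groups)
    yd, Xd = y.copy(), X.copy()
    for g in np.unique(groups):
        idx = groups == g
        yd[idx] -= y[idx].mean()
        Xd[idx] -= X[idx].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta
```

Because the unit means are removed, any additive municipality-level constant (soil, geography, institutions) cannot bias the climate coefficients.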
Other drivers of the migration process, such as economic, social, and demographic factors, were also important. Additionally, predictions of rural-urban migration motivated by variations in temperature and precipitation under the climate change scenarios RCP 4.5 and RCP 8.5 were made for the periods 2016-2035 and 2046-2065, defined by the Intergovernmental Panel on Climate Change (IPCC). The results indicate that rural-urban migration in the semiarid region will increase in both scenarios and in both periods. In general, the results of this study reinforce the need for the formulation of public policies to avoid migration for climatic reasons, such as policies that support income-generating productive activities in rural areas. By providing greater incentives for family agriculture and expanding sources of credit for farmers, they will be better positioned to face climatic adversities and to settle in rural areas. Ultimately, if migration becomes necessary, policies must be adopted that seek an organized and planned development of urban areas, considering migration as an adaptation strategy to adverse climate effects. Thus, policies that act to absorb migrants in urban areas and ensure that they have access to the basic services offered to the urban population would contribute to reducing the social costs of climate variability.
Keywords: climate change, migration, rural productivity, semiarid region
Procedia PDF Downloads 351
307 Assessment of Energy Efficiency and Life Cycle Greenhouse Gas Emission of Wheat Production on Conservation Agriculture to Achieve Soil Carbon Footprint in Bangladesh
Authors: MD Mashiur Rahman, Muhammad Arshadul Haque
Abstract:
Emerging conservation agriculture (CA) is an option for improving soil health and maintaining environmental sustainability in intensive agriculture, especially in tropical climates. A three-year research experiment was performed from 2018 to 2020 at the research field of the Regional Agricultural Research Station (RARS), Jamalpur (soil texture belonging to Agro-Ecological Zone (AEZ)-8/9, 24˚56'11''N latitude, 89˚55'54''E longitude, and an altitude of 16.46 m) to evaluate the effect of CA approaches on energy use efficiency and a streamlined life cycle greenhouse gas (GHG) emission of wheat production. For this, the conservation tillage practices (strip tillage (ST) and minimum tillage (MT)) were adopted in comparison with conventional farmers' tillage (CT), with a fixed level (30 cm) of residue retention. This study examined the relationship between energy consumption and life cycle GHG emission of wheat cultivation in the Jamalpur region of Bangladesh. Standard energy equivalents in megajoules (MJ) were used to measure energy from different inputs and outputs; similarly, global warming potential values for the 100-year timescale and the standard unit kilogram of carbon dioxide equivalent (kg CO₂eq) were used to estimate direct and indirect GHG emissions from the use of on-farm and off-farm inputs. The Farm Energy Analysis Tool (FEAT) was used to analyze GHG emission and its intensity. A non-parametric data envelopment analysis (DEA) was used to estimate the optimum energy requirement of wheat production. The results showed that the treatment combination of MT with optimum energy inputs is best suited for cost-effective, sustainable CA practice in wheat cultivation without compromising yield during the dry season.
Total input energies of 22045.86 MJ ha⁻¹, 22158.82 MJ ha⁻¹, and 23656.63 MJ ha⁻¹ were used in wheat production under ST, MT, and CT, respectively, and the output energies were calculated as 158657.40 MJ ha⁻¹, 162070.55 MJ ha⁻¹, and 149501.58 MJ ha⁻¹; the energy use efficiency (net energy ratio) was found to be 7.20, 7.31, and 6.32. Among these, MT is the most effective practice option for the wheat production process. The optimum energy requirement was found to be 18236.71 MJ ha⁻¹ for the practice of MT, demonstrating that if recommendations are followed, 18.7% of input energy can be saved. The total GHG emission was calculated to be 2288 kg CO₂eq ha⁻¹, 2293 kg CO₂eq ha⁻¹, and 2331 kg CO₂eq ha⁻¹, and the GHG intensity, the ratio of kg CO₂eq emitted per MJ of output energy produced, was estimated to be 0.014 kg CO₂/MJ, 0.014 kg CO₂/MJ, and 0.015 kg CO₂/MJ in wheat production. Therefore, the CA approach of ST practice with 30 cm residue retention was the most effective GHG mitigation option when the net life cycle GHG emission was considered for wheat production in the silty clay loam soil of Bangladesh. In conclusion, the CA approaches being implemented for wheat production involving MT practice have the potential to mitigate global warming potential in Bangladesh and to achieve a lower soil carbon footprint, where the life cycle assessment approach needs to be applied to a more diverse range of wheat-based cropping systems.
Keywords: conservation agriculture and tillage, energy use efficiency, life cycle GHG, Bangladesh
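The two ratios used in this abstract, net energy ratio and GHG intensity, follow directly from the reported per-hectare figures; a quick arithmetic check (values copied from the abstract):

```python
# Input/output energies (MJ ha^-1) and total GHG (kg CO2-eq ha^-1)
# for strip tillage (ST), minimum tillage (MT) and conventional
# tillage (CT), as reported in the abstract.
data = {
    "ST": {"in": 22045.86, "out": 158657.40, "ghg": 2288.0},
    "MT": {"in": 22158.82, "out": 162070.55, "ghg": 2293.0},
    "CT": {"in": 23656.63, "out": 149501.58, "ghg": 2331.0},
}

def energy_use_efficiency(output_mj, input_mj):
    """Net energy ratio: output energy per unit of input energy."""
    return output_mj / input_mj

def ghg_intensity(emission_kg, output_mj):
    """kg CO2-eq emitted per MJ of output energy produced."""
    return emission_kg / output_mj

for name, d in data.items():
    print(name, "net energy ratio:",
          round(energy_use_efficiency(d["out"], d["in"]), 2))

print("ST GHG intensity:",
      round(ghg_intensity(data["ST"]["ghg"], data["ST"]["out"]), 3),
      "kg CO2-eq/MJ")
```

The computed ratios (7.20, 7.31, 6.32) confirm MT as the most energy-efficient practice of the three.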
Procedia PDF Downloads 103
306 The Effect of Social Media Influencer on Boycott Participation through Attitude toward the Offending Country in a Situational Animosity Context
Authors: Hsing-Hua Stella Chang, Mong-Ching Lin, Cher-Min Fong
Abstract:
The use of surrogate boycotts as a coercive tactic to force the offending party into changing its approaches has become increasingly significant over the last several decades and is expected to increase in the future. Research shows that surrogate boycotts are often triggered by controversial international events, with particular foreign countries serving as the offending party in the international marketplace. In other words, multinational corporations are likely to become surrogate boycott targets in overseas markets because of the animosity between their home and host countries. Focusing on a surrogate boycott triggered by severe situational animosity, this research aims to examine how social media influencers (SMIs), serving as electronic key opinion leaders (EKOLs) in an international crisis, facilitate and organize a boycott and persuade consumers to participate in it. This research suggests that SMIs can be a particularly important information source in a surrogate boycott sparked by situational animosity. Under such a context, SMIs become a critical information source for individuals to enhance and update their understanding of the event because, unlike traditional media, social media serve as a platform for instant, 24-hour, non-stop information access and dissemination. The Xinjiang cotton event, viewed as an ongoing inter-country conflict reflecting a crisis that provokes animosity against the West, was adopted as the research context. Through online panel services, both studies recruited Mainland Chinese nationals as respondents to the surveys. The findings show that: 1. Social media influencer messages are positively related to a negative attitude toward the offending country. 2. Attitude toward the offending country is positively related to boycott participation.
To address the unexplored question of the effect of social media influencers on consumer participation in boycotts, this research presents a finer-grained examination of boycott motivation, with a special focus on a situational animosity context. This research is split into two interrelated parts. In the first part, this research shows that attitudes toward the offending country can be socially constructed by the influence of social media influencers in a situational animosity context. The study results show that consumers perceive different strengths of social pressure related to various levels of influencer messages and thus exhibit different levels of attitude toward the offending country. In the second part, this research further investigates the effect of attitude toward the offending country on boycott participation. The study findings show that such attitudes exacerbated the effect of social media influencer messages on boycott participation in a situation of animosity.
Keywords: animosity, social media marketing, boycott, attitude toward the offending country
Procedia PDF Downloads 112
305 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes such as 3D printing. These robots have advantages such as speed and lightness that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of a linear delta robot for additive manufacturing. Firstly, a methodology based on structured product-development processes, with phases of informational design, conceptual design, and detailed design, is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to elicit a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, functional modeling of the system is performed through an IDEF0 diagram, and solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and possible interferences between these bodies. 
The objective function is based on verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that may lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model at each point within the cloud. The evolution of the population of candidate designs provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis made it possible to design the mechanism of the delta robot as a function of the prescribed workspace. Finally, the implementation of the robotic platform, based on a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
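The workspace-verification scheme described in this abstract (a genetic algorithm that scores candidate dimensions by checking kinematic reachability over a cylindrical point cloud) can be sketched in Python. This is a minimal illustration, not the authors' implementation: the simplified vertical-rail inverse-kinematics check, the two design variables (arm length and rail travel), the base/effector radii, and the size-penalty weight are all assumptions made for the example.

```python
import math
import random

def reachable(point, arm_len, rail_len, r_base=0.3, r_eff=0.05):
    """Inverse-kinematics check for a simplified linear delta robot with
    three vertical rails: each carriage must sit within its rail travel."""
    px, py, pz = point
    for k in range(3):
        a = 2 * math.pi * k / 3
        # planar offset between the rail axis and the effector joint
        dx = px + (r_eff - r_base) * math.cos(a)
        dy = py + (r_eff - r_base) * math.sin(a)
        d2 = dx * dx + dy * dy
        if d2 > arm_len * arm_len:          # arm too short to span the offset
            return False
        z = pz + math.sqrt(arm_len * arm_len - d2)  # required carriage height
        if not 0.0 <= z <= rail_len:
            return False
    return True

def cylinder_cloud(radius, height, n=100, seed=1):
    """Sample n points inside the prescribed cylindrical workspace."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-radius, radius), rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            pts.append((x, y, rng.uniform(0.0, height)))
    return pts

def fitness(design, cloud):
    """Reward full coverage of the cloud first, then smaller dimensions."""
    arm_len, rail_len = design
    cover = sum(reachable(p, arm_len, rail_len) for p in cloud) / len(cloud)
    return cover - 0.01 * (arm_len + rail_len)

def evolve(cloud, pop_size=40, gens=60, seed=2):
    """Tiny genetic algorithm: elitist selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.2, 1.0), rng.uniform(0.3, 1.5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda d: fitness(d, cloud), reverse=True)
        elite = pop[: pop_size // 4]
        pop = elite + [
            (max(0.05, p[0] + rng.gauss(0, 0.03)),
             max(0.05, p[1] + rng.gauss(0, 0.03)))
            for p in rng.choices(elite, k=pop_size - len(elite))
        ]
    return max(pop, key=lambda d: fitness(d, cloud))
```

Run on a small cylinder, the algorithm converges toward the smallest arm and rail dimensions that still reach every sampled point, which is the essence of the dimensional synthesis described above.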
Procedia PDF Downloads 190
304 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen
Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev
Abstract:
The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the abilities of Rapid-E to detect smaller bioaerosols such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) the fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. Firstly, pure species of microbes, e.g., Bacillus subtilis (a bacterium) and Penicillium chrysogenum (a fungal spore species), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested microbes at different concentrations. Several data-analysis steps were used to classify and identify the microbes. All single particles were analysed using the light-scattering and fluorescence parameters in the following steps. (1) Particles were passed through a smart filter block to discard non-microbes. (2) A classification algorithm verified that the filtered particles were microbes, based on the calibration data. (3) A probability-threshold step (with the threshold defined by the user) assigned each particle a probability of being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ simultaneously identified microbes using the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). Using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. Further classification yielded precisions of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively. 
The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real time. The bacteria and fungi were aerosolized again in the chamber at different concentrations, and Rapid-E+ was able to classify the different types of microbes and quantify them in real time. Rapid-E+ can also identify pollen down to the species level with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument which classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, the user can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).
Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms
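The three-step analysis described in the abstract above (smart filter, then classifier, then user-defined probability threshold) can be illustrated with a toy sketch. Everything concrete here is an assumption made for the example: the fluorescence gate, the two-dimensional features, and the nearest-centroid classifier with a softmax-style pseudo-probability stand in for Plair's proprietary machine-learning models.

```python
import math

def smart_filter(particle):
    """Step 1: discard obvious non-microbes (here: a crude fluorescence gate)."""
    return particle["fluorescence"] > 0.2

def classify(particle, centroids):
    """Step 2: nearest-centroid label, with a pseudo-probability computed as
    a softmax over negative distances to the calibration centroids."""
    weights = {label: math.exp(-math.dist(particle["features"], c))
               for label, c in centroids.items()}
    total = sum(weights.values())
    label = max(weights, key=weights.get)
    return label, weights[label] / total

def identify(particles, centroids, threshold=0.6):
    """Full pipeline: filter, classify, then apply the probability threshold."""
    results = []
    for p in particles:
        if not smart_filter(p):                 # step 1
            continue
        label, prob = classify(p, centroids)    # step 2
        if prob >= threshold:                   # step 3 (user-defined)
            results.append((label, prob))
    return results
```

A particle far from all centroids, or one below the fluorescence gate, simply never reaches the output list, mirroring how the instrument rejects background aerosols before counting microbes.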
Procedia PDF Downloads 91
303 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case
Authors: Henry Acurio, Alvaro Corral, Juan Fonseca
Abstract:
In Ecuador, subsidies on fossil fuels for the road transportation sector have long been part of the economy, sustained mainly by demagogy and populism among political leaders. The government can no longer maintain these subsidies, given their weight on the trade balance and the general state budget; subsidies are also a key barrier to implementing cleaner technologies. Over the last few months, however, the subsidies have been eliminated gradually with the purpose of reaching international prices. It is expected that with this measure the population will opt for other means of transportation, which will in turn promote the use of private electric vehicles as well as public ones, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how subsidy removal will contribute to the promotion of electro-mobility: 1) a Business as Usual (BAU) scenario; 2) the introduction of 10 000 electric vehicles by 2025; 3) the introduction of 100 000 electric vehicles by 2030; 4) the introduction of 750 000 electric vehicles by 2040 (in all scenarios, buses, taxis, light-duty vehicles, and private vehicles will be introduced, as established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used to determine the cost to the government of importing fossil-fuel derivatives and the cost of the electricity needed to power the replacement electric fleet. The elimination of subsidies generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society. 
It will change the energy matrix and provide energy security for the country; it will also be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will reduce greenhouse gas (GHG) emissions from the transportation sector, which, given its mitigation potential, will improve inhabitants' quality of life by improving air quality and thereby reducing respiratory diseases associated with exhaust emissions, contributing to sustainability, the Sustainable Development Goals (SDGs), and compliance with the agreements established at COP 21 under the Paris Agreement in 2015. Electro-mobility in Latin America and the Caribbean can only be achieved through the implementation of the right policies by central governments, accompanied by a National Urban Mobility Policy (NUMP) embodying a greater vision for developing holistic, sustainable transport systems at the local government level.
Keywords: electro mobility, energy, policy, sustainable transportation
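The scenario comparison the study proposes to run in LEAP, avoided fuel-import spending versus the cost of electricity for the EV fleet, can be mimicked with back-of-the-envelope arithmetic. Every number below (annual mileage, fuel consumption, EV efficiency, prices) is an illustrative placeholder, not an Ecuadorian figure or a LEAP output; only the scenario EV counts come from the abstract.

```python
def scenario_balance(n_ev, km_per_ev=15_000, l_per_100km=8.0,
                     kwh_per_km=0.18, fuel_import_cost=0.80,
                     elec_cost_kwh=0.10):
    """Rough annual trade-balance effect of an EV fleet: avoided fuel-import
    spending minus the cost of the electricity that powers the fleet.
    All per-vehicle figures and prices are illustrative placeholders."""
    litres_avoided = n_ev * km_per_ev * l_per_100km / 100
    fuel_saving = litres_avoided * fuel_import_cost
    elec_spend = n_ev * km_per_ev * kwh_per_km * elec_cost_kwh
    return fuel_saving - elec_spend

# The four scenarios from the abstract: BAU plus three EV-introduction targets.
scenarios = {"BAU": 0, "2025": 10_000, "2030": 100_000, "2040": 750_000}
net = {name: scenario_balance(n) for name, n in scenarios.items()}
```

Because the model is linear in the number of vehicles, the 2040 scenario's effect is simply 75 times the 2025 one; a real LEAP run would add fleet turnover, demand growth, and price trajectories that break this linearity.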
Procedia PDF Downloads 82
302 Measuring Emotion Dynamics on Facebook: Associations between Variability in Expressed Emotion and Psychological Functioning
Authors: Elizabeth M. Seabrook, Nikki S. Rickard
Abstract:
Examining time-dependent measures of emotion such as variability, instability, and inertia provides critical and complementary insights into mental health status. Observing changes in the pattern of emotional expression over time could act as a tool to identify meaningful shifts between psychological well- and ill-being. From a practical standpoint, however, examining emotion dynamics day-to-day is likely to be burdensome and invasive. Utilizing social media data as a facet of lived experience can provide real-world, temporally specific access to emotional expression. Emotional language on social media may provide accurate and sensitive insights into individual and community mental health and well-being, particularly when the focus is placed on the within-person dynamics of online emotion expression. The objective of the current study was to examine the dynamics of emotional expression on the social network platform Facebook for active users and their relationship with psychological well- and ill-being. It was expected that greater positive and negative emotion variability, instability, and inertia would be associated with poorer psychological well-being and greater depression symptoms. Data were collected using a smartphone app, MoodPrism, which delivered demographic questionnaires and psychological inventories assessing depression symptoms and psychological well-being, and collected the status updates of consenting participants. MoodPrism also delivered an experience sampling methodology in which participants completed items assessing positive affect, negative affect, and arousal daily for a 30-day period. The numbers of positive and negative words in posts were extracted and automatically collated by MoodPrism, and the relative proportions of positive and negative words out of the total words written in posts were then calculated. Preliminary analyses have been conducted with the data of 9 participants. 
While these analyses are underpowered due to the sample size, they reveal trends that greater variability in the emotion valence expressed in posts is positively associated with greater depression symptoms (r(9) = .56, p = .12), as is greater instability in emotion valence (r(9) = .58, p = .099). A full analysis of the Facebook data set utilizing time-series techniques will be presented at the conference. Identifying the features of emotion dynamics (variability, instability, inertia) that are relevant to mental health in social media emotional expression is a fundamental step in creating automated screening tools for mental health that are temporally sensitive, unobtrusive, and accurate. The current findings show how monitoring basic social network characteristics over time can provide greater depth in predicting risk and changes in depression and positive well-being.
Keywords: emotion, experience sampling methods, mental health, social media
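The three dynamics named above have standard operationalizations in the affect-dynamics literature: variability as the within-person standard deviation, instability as the mean squared successive difference (MSSD), and inertia as the lag-1 autocorrelation of the valence series. A minimal sketch, assuming a daily valence series invented for illustration rather than the study's actual Facebook data:

```python
from statistics import mean, pstdev

def variability(series):
    """Within-person variability: standard deviation of the valence series."""
    return pstdev(series)

def instability(series):
    """Mean squared successive difference (MSSD) between consecutive days."""
    return mean((b - a) ** 2 for a, b in zip(series, series[1:]))

def inertia(series):
    """Lag-1 autocorrelation: how strongly today's emotion carries over
    from yesterday's. Requires a non-constant series."""
    m = mean(series)
    num = sum((a - m) * (b - m) for a, b in zip(series, series[1:]))
    den = sum((v - m) ** 2 for v in series)
    return num / den
```

Note how the three measures dissociate: an alternating series (1, -1, 1, -1) has high variability and instability but strongly negative inertia, while a slowly drifting series (1, 2, 3, 4, 5) has low instability but positive inertia. This is why the study treats them as complementary indicators rather than redundant ones.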
Procedia PDF Downloads 250
301 Option Pricing Theory Applied to the Service Sector
Authors: Luke Miller
Abstract:
This paper develops an options pricing methodology to value strategic pricing strategies in the services sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques to assess service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and shared economies with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm’s value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest or divest in assets are simply an option wherein the firm has the right to make an investment without any obligation to act. The decision maker, therefore, has more flexibility and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. 
This is surprising given that the service sector comprises 80% of US employment and Gross Domestic Product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options perspective: (1) cost-based profit margin, (2) increased customer base, (3) platform pricing, and (4) buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker holds to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies was developed and valued using a real options binomial lattice structure. The options pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process, thereby synchronizing service operations with organizational economic goals.
Keywords: option pricing theory, real options, service sector, valuation
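A real options valuation on a binomial lattice, of the kind the paper applies to pricing strategies, can be sketched as follows: here the option to defer an investment, treated as an American-style call on the project value with the required outlay as the strike, on a standard Cox-Ross-Rubinstein lattice. The project figures are invented for illustration and the specific option framing is an assumption; the paper's own lattices value pricing tactics rather than a deferral option.

```python
import math

def real_option_value(v0, invest, r, sigma, t, steps):
    """Value of the option to invest: an American call on project value v0
    with strike `invest`, on a Cox-Ross-Rubinstein binomial lattice."""
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral probability
    # payoffs at the final layer of the lattice
    vals = [max(v0 * u**j * d**(steps - j) - invest, 0.0)
            for j in range(steps + 1)]
    # backward induction, keeping the right to exercise early at each node
    for n in range(steps - 1, -1, -1):
        vals = [max(v0 * u**j * d**(n - j) - invest,
                    disc * (p * vals[j + 1] + (1 - p) * vals[j]))
                for j in range(n + 1)]
    return vals[0]
```

The option value always exceeds the static NPV (the immediate-exercise payoff) whenever volatility is positive, which is precisely the "value of operating flexibility" the abstract argues conventional pricing analysis ignores.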
Procedia PDF Downloads 355
300 The Use of Stroke Journey Map in Improving Patients' Perceived Knowledge in Acute Stroke Unit
Authors: C. S. Chen, F. Y. Hui, B. S. Farhana, J. De Leon
Abstract:
Introduction: Stroke can lead to long-term disability, affecting one’s quality of life. Providing stroke education to patients and family members is essential to optimize stroke recovery and prevent recurrent stroke. Currently, nurses conduct stroke education by handing out pamphlets and explaining their contents to patients. However, this is not always effective, as nurses have varying levels of knowledge and the depth of content discussed with the patient may not be consistent. With the advancement of information technology, health education is increasingly being disseminated via electronic software, and studies have shown this to benefit patients. Hence, a multi-disciplinary team consisting of doctors, nurses, and allied health professionals was formed to create the stroke journey map software to deliver consistent and concise stroke education. Research Objectives: To evaluate the effectiveness of using stroke journey map software in improving patients’ perceived knowledge in the acute stroke unit during hospitalization. Methods: Patients admitted to the acute stroke unit were shown the stroke journey map software during patient education. The software consists of 31 interactive, brightly coloured slides and 4 videos, based on input provided by the multi-disciplinary team. Participants were assessed with pre- and post-intervention survey questionnaires before and after viewing the software. The questionnaire consists of 10 questions with a 5-point Likert scale, summing to a total score of 50. The inclusion criteria were patients diagnosed with ischemic stroke who were cognitively alert and oriented. This study was conducted between May 2017 and October 2017. Participation was voluntary. Results: A total of 33 participants took part in the study. The results demonstrated that the use of a stroke journey map as a stroke education medium was effective in improving patients’ perceived knowledge. 
A comparison of pre- and post-implementation data for the stroke journey map revealed an overall mean increase in patients’ perceived knowledge from 24.06 to 40.06. The data were further broken down to evaluate patients’ perceived knowledge in 3 domains: (1) understanding of the disease process; (2) management and treatment plans; (3) post-discharge care. Each domain saw an increase in mean score, from 10.7 to 16.2, 6.9 to 11.9, and 6.6 to 11.7, respectively. Project Impact: The implementation of the stroke journey map has had a positive impact by (1) increasing patients’ perceived knowledge, which could contribute to greater empowerment over their health; (2) reducing the need for printed stroke education material, making it environmentally friendly; (3) decreasing the time nurses spend on giving education, leaving more time to attend to patients’ needs. Conclusion: This study has demonstrated the benefit of using a stroke journey map as a platform for stroke education. Overall, it increased patients’ perceived knowledge of the disease process, management and treatment plans, and the discharge process.
Keywords: acute stroke, education, ischemic stroke, knowledge, stroke
Procedia PDF Downloads 161
299 The Role of Social Media in the Rise of Islamic State in India: An Analytical Overview
Authors: Yasmeen Cheema, Parvinder Singh
Abstract:
The evolution of the Islamic State (acronym IS) has the ultimate goal of restoring the caliphate. The IS threat to global security is a main concern of the international community, but it has also raised a factual concern for India about the steady radicalization of Indian youth by IS ideology. The case of Arif Ejaz Majeed, an Indian who joined IS as a ‘jihadist’, set off strident alarm in law enforcement agencies. On 07.03.2017, many people were injured in an Improvised Explosive Device (IED) blast on board the Bhopal-Ujjain Express. One perpetrator of this incident was killed in an encounter with police. The greater shock is that the conspiracy was pre-planned and the assailants who carried out the blast were influenced by the ideology propagated by the Islamic State. This is the first time the name of IS has cropped up in a terror attack in India. It is a red indicator of a violent IS presence in India, which is spreading through social media. IS has the capacity to influence the younger Muslim generation in India through its brutal and aggressive propaganda videos, social media apps, and hate speeches. It is a well-known fact that India is on the radar of IS, as well as on its ‘Caliphate Map’. IS uses Twitter, Facebook, and other social media platforms constantly, employing enticing videos, graphics, and articles to persuade people in India and globally that its jihad is worthy. According to IS perpetrators arrested in different cases in India, most Indian youths who join are victims of the daydreams fondly projected by IS: that the Muslim empire as it was before 1920 can return with all its power, and that the Caliph and his caliphate can be re-established. Indian Muslim youth are attracted to these euphemistic ideologies. The Islamic State has used social media to disseminate its poisonous ideology, for recruitment, for operational activities, and to direct future attacks. 
Through social media, IS inspires its recruits and lone wolves to rely on local networks to identify targets and access weaponry and explosives. Recently, a pro-IS media group on its Telegram platform showed the Taj Mahal as a target and suggested a Vehicle-Borne Improvised Explosive Device (VBIED) as the mode of attack. The Islamic State has the potential to damage Indian national security and peace if timely steps are not taken. IS has undoubtedly used social media as a critical mechanism for the recruitment, planning, and execution of terror attacks. This paper will therefore examine the specific characteristics of social media that have made it such a successful weapon for the Islamic State. The rise of IS in India should be viewed as a national crisis and handled at the central level with efficient use of modern technology.
Keywords: ideology, India, Islamic State, national security, recruitment, social media, terror attack
Procedia PDF Downloads 230
298 Techno-Economic Analysis of 1,3-Butadiene and ε-Caprolactam Production from C6 Sugars
Authors: Iris Vural Gursel, Jonathan Moncada, Ernst Worrell, Andrea Ramirez
Abstract:
In order to achieve the transition from a fossil-based to a bio-based economy, biomass needs to replace fossil resources in meeting the world’s energy and chemical needs. This calls for the development of biorefinery systems allowing cost-efficient conversion of biomass to chemicals. In biorefinery systems, feedstock is converted to key intermediates called platforms, which are in turn converted to a wide range of marketable products. The C6 sugars platform stands out due to its unique versatility as a precursor for multiple valuable products. Among the potential routes from C6 sugars to bio-based chemicals, 1,3-butadiene and ε-caprolactam appear to be of great interest: butadiene is an important chemical for the production of synthetic rubbers, while caprolactam is used in the production of nylon-6. In this study, the ex-ante techno-economic performance of the 1,3-butadiene and ε-caprolactam routes from C6 sugars was assessed. The aim is to provide insight, from an early stage of development, into the potential of these new technologies and their bottlenecks and key cost drivers. Two cases for each product line were analyzed to take into consideration the effect of possible changes on the overall performance of both butadiene and caprolactam production. Conceptual process designs were developed in Aspen Plus based on currently available data from laboratory experiments. Operating and capital costs were then estimated and an economic assessment was carried out using Net Present Value (NPV) as the indicator. Finally, sensitivity analyses on processing capacity and prices were performed to account for possible variations. Results indicate that both processes perform similarly in terms of energy intensity, ranging between 34-50 MJ per kg of main product. However, in terms of processing yield (kg of product per kg of C6 sugar), caprolactam shows a higher yield by a factor of 1.6-3.6 compared to butadiene. 
For butadiene production, with the economic parameters used in this study, both cases yielded a negative NPV (-642 and -647 M€), indicating economic infeasibility. For caprolactam production, one case also showed economic infeasibility (-229 M€), but the case with the higher caprolactam yield resulted in a positive NPV (67 M€). Sensitivity analysis indicated that the economic performance of caprolactam production can be improved by increasing capacity (higher C6 sugars intake), reflecting the benefits of economies of scale. Furthermore, humins valorization for heat and power production was considered and found to have a positive effect. Butadiene production was found to be sensitive to the prices of the C6 sugars feedstock and the butadiene product; however, even at 100% variation of these two parameters, butadiene production remained economically infeasible. Overall, the caprolactam production line shows higher economic potential than that of butadiene. The results are useful in guiding experimental research and providing direction for further development of bio-based chemicals.
Keywords: bio-based chemicals, biorefinery, C6 sugars, economic analysis, process modelling
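The NPV indicator and the price/capacity sensitivity sweep used in this assessment can be sketched generically. The discount rate, plant lifetime, and cash-flow figures below are placeholders chosen for the example, not the study's Aspen-based estimates; only the structure of the calculation (discounted cash flows against an upfront outlay, swept over a cash-flow variation) follows the abstract.

```python
def npv(capex, annual_cash, rate, years):
    """Net Present Value: upfront capital outlay followed by a constant
    annual net cash flow, discounted at `rate` over `years`."""
    return -capex + sum(annual_cash / (1 + rate) ** t
                        for t in range(1, years + 1))

def sensitivity(capex, base_cash, rate, years,
                swings=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """NPV at +/-100% variation of the annual cash flow, a simple proxy for
    the feedstock- and product-price sweeps performed in the study."""
    return {s: npv(capex, base_cash * (1 + s), rate, years) for s in swings}
```

A route like the butadiene cases above, whose NPV stays negative across the whole sweep, is infeasible under any plausible price, while a route that crosses zero within the sweep (like the high-yield caprolactam case) merits further development.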
Procedia PDF Downloads 152
297 Impact of Climatic Hazards on the Jamuna River Fisheries and Coping and Adaptation Strategies
Authors: Farah Islam, Md. Monirul Islam, Mosammat Salma Akter, Goutam Kumar Kundu
Abstract:
The continuous variability of climate and its associated risks have a significant impact on fisheries, a global concern for the roughly half a billion people whose livelihoods depend on them. Although mounting evidence exists on the impacts of climate change on fishery-based livelihoods and socioeconomic conditions in Bangladesh, the country’s inland fisheries sector remains comparatively neglected, while coastal areas receive the spotlight due to their higher vulnerability to climatic hazards. The available research on inland fisheries, particularly river fisheries, has focussed mainly on fish production, pollution, fishing gear, fish biodiversity, and the livelihoods of fishers. This study assesses the impacts of climate variability and change on fishing communities of the Jamuna River (a transboundary river called the Brahmaputra in India) and their coping and adaptation strategies. The study uses primary data collected from the Kalitola Ghat and Debdanga fishing communities of the Jamuna River during May, August, and December 2015, using semi-structured interviews, oral history interviews, key informant interviews, focus group discussions, and an impact matrix, as well as secondary data. Both communities are exposed to storms, floods, and riverbank erosion, which affect fishery-based livelihood assets, strategies, and outcomes. The impact matrix shows that human and physical capitals are more affected by climatic hazards, which in turn affects financial capital. Both communities have responded to these exposures through multiple coping and adaptation strategies. The coping strategies include building dams with soil, covering yards with jute sacks, taking shelter on boats or embankments, building raised platforms or ‘Kheua’, and taking temporary jobs, while the adaptation strategies include permanent migration, changing livelihood activities and strategies, changing fishing practices, and building more robust houses. 
The study shows that migration is the most common adaptation strategy among the fishers, with mostly positive outcomes for the migrants; however, this migration has negatively affected the livelihoods of the fishers remaining in the communities. In sum, the Jamuna River fishing communities have been affected by several climatic hazards, and the ways in which they have traditionally coped or adapted are not sufficient to maintain sustainable livelihoods and fisheries. In the coming decades, this situation may worsen, as predicted by the latest scientific research, and an enhanced level of response will be needed.
Keywords: climatic hazards, impacts and adaptation, fisherfolk, the Jamuna River
Procedia PDF Downloads 319
296 Activation of Apoptosis in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Exposed to Various Cadmium Concentration
Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska
Abstract:
The digestive system of insects is composed of three distinct regions: the fore-, mid-, and hindgut. The middle region (the midgut) acts as one of the barriers protecting the organism against stressors originating from the external environment, e.g., toxic metals. Such factors can activate cell death in epithelial cells to preserve the entire tissue/organ against degeneration. Different mechanisms involved in the maintenance of homeostasis have been described, but studies of animals under field conditions do not allow conclusions about the potential ability of subsequent generations to inherit tolerance mechanisms. This is possible only with a multigenerational strain of an animal reared under laboratory conditions and exposed to a selected toxic factor that is also present in polluted ecosystems. The main purpose of the project was to check whether changes that appear in the midgut epithelium after Cd treatment can be fixed during the following generations of insects, with special emphasis on apoptosis. As the study animal, we chose the 5th larval stage of the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), a pest of many vegetable crops. Animals were divided into several experimental groups: K, Cd, KCd, Cd1, Cd2, and Cd3. A control group (K) was fed a standard diet and conducted for XX generations; a cadmium group (Cd) was fed the standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food) for XXX generations. A reference Cd group (KCd) was initiated by feeding control insects the Cd-supplemented diet (44 mg Cd per kg of dry weight of food). Experimental groups Cd1, Cd2, and Cd3 were developed from the control one, with 5, 10, and 20 mg Cd per kg of dry weight of food, respectively. We were interested in the activation of apoptosis during the following generations in all experimental groups. 
Therefore, during the 1st year of the experiment, measurements were made for 6 generations in all experimental groups. The intensity and course of apoptosis were examined using transmission electron microscopy (TEM), confocal microscopy, and flow cytometry. During apoptosis, the cell started to shrink, extracellular spaces appeared between digestive and neighboring cells, and the nucleus assumed a lobular shape. Eventually, the apoptotic cells were discharged into the midgut lumen. A quantitative analysis revealed that the number of apoptotic cells depends significantly on the generation, the tissue, and the cadmium concentration in the insect rearing medium. Over the following 6 generations, we observed that the percentage of apoptotic cells in the midguts of the cadmium-exposed groups decreased gradually in the following order of strains: Cd1, Cd2, Cd3, and KCd. At the same time, it remained higher than the percentage of apoptotic cells in the same tissues of insects from the control and multigenerational cadmium strains. The results of our studies suggest that changes caused by cadmium treatment were preserved during the 6-generation development of the lepidopteran larvae. The study was financed by the National Science Centre, Poland, grant no. 2016/21/B/NZ8/00831.
Keywords: cadmium, cell death, digestive system, ultrastructure
Procedia PDF Downloads 214
295 Ascribing Identities and Othering: A Multimodal Discourse Analysis of a BBC Documentary on YouTube
Authors: Shomaila Sadaf, Margarethe Olbertz-Siitonen
Abstract:
This study looks at identity and othering in discourses around sensitive issues in social media. More specifically, the study explores the multimodal resources and narratives through which the other is formed and identities are ascribed in online spaces. As an integral part of social life, media spaces have become an important site for negotiating and ascribing identities. In line with recent research, identity is seen here as constructions of belonging which go hand in hand with processes of in- and out-group formation that in some cases may lead to othering. Previous findings underline that identities are neither fixed nor limited but rather contextual, intersectional, and interactively achieved. The goal of this study is to explore and develop an understanding of how people co-construct the ‘other’ and ascribe certain identities in social media using multiple modes. In the beginning of the year 2018, the British government decided to include relationships, sexual orientation, and sex education in the curriculum of state-funded primary schools. However, the addition of information related to LGBTQ+ in the curriculum has been met with resistance, particularly from religious parents. For example, the British Muslim community has voiced its concerns and protested against the actions taken by the British government. YouTube has been used by news companies to air video stories covering the protest and narratives of the protestors along with the position of school officials. The analysis centers on a YouTube video dealing with the protest of a local group of parents against the addition of information about LGBTQ+ in the curriculum in the UK. The video was posted in 2019. By the time of this study, the video had approximately 169,000 views and around 6,000 comments. In deference to the multimodal nature of YouTube videos, this study utilizes multimodal discourse analysis as the method of choice. The study is still ongoing and therefore has not yet yielded any final results. 
However, the initial analysis indicates a hierarchy of ascribed identities in the data. Drawing on multimodal resources, the media works with social categorizations throughout the documentary, presenting and classifying the involved conflicting parties in the light of their own visible and audible identifications. The protesters can be seen to construct a strong group identity as Muslim parents (e.g., clothing and reference to shared values). While the video appears to be designed as a documentary that puts forward facts, the media does not seem to succeed in taking a neutral position consistently throughout the video. At times, the use of images, sounds and language contributes to the formation of “us” vs. “them”, where the audience is implicitly encouraged to pick a side. Only towards the end of the documentary is this problematic opposition addressed and critically reflected upon, through an expert interview that is – interestingly – visually located outside the previously presented ‘battlefield’. This study contributes to the growing understanding of the discursive construction of the ‘other’ in social media. Videos available online are a rich source for examining how different social actors ascribe multiple identities and form the other. Keywords: identity, multimodal discourse analysis, othering, youtube
Procedia PDF Downloads 114
294 A Mathematical Model for Studying Landing Dynamics of a Typical Lunar Soft Lander
Authors: Johns Paul, Santhosh J. Nalluveettil, P. Purushothaman, M. Premdas
Abstract:
Lunar landing is one of the most critical phases of a lunar mission. The lander is provided with a soft landing system to prevent structural damage to the lunar module by absorbing the landing shock and also to assure stability during landing. Presently available software is not capable of simulating rigid body dynamics coupled with contact simulation and elastic/plastic deformation analysis. Hence a separate mathematical model has been generated for studying the dynamics of a typical lunar soft lander. Parameters used in the analysis include lunar surface slope, coefficient of friction, initial touchdown velocity (vertical and horizontal), mass and moment of inertia of the lander, crushing force due to the energy-absorbing material in the legs, number of legs and geometry of the lander. The mathematical model is capable of simulating plastic and elastic deformation of honeycomb, frictional force between landing leg and lunar soil, surface contact, lunar gravitational force, rigid body dynamics and linkage dynamics of the inverted tripod landing gear. The nonlinear differential equations generated for studying the dynamics of the lunar lander are solved by numerical methods, with MATLAB used as the computational tool. The position of each kinematic joint is defined by mathematical equations for the generation of the equations of motion. All hinged locations are defined by position vectors with respect to body-fixed coordinates. The vehicle's rigid body rotations and motions about the body coordinates are due only to the external forces and moments arising from the footpad reaction force due to impact, the footpad frictional force and the weight of the vehicle. All these forces are mathematically simulated for the generation of the equations of motion. The validation of the mathematical model is done in two different phases. The first phase is the validation of the plastic deformation of crushable elements by employing the conservation of energy principle. 
The second phase is the validation of the rigid body dynamics of the model by simulating a lander model in ADAMS software after replacing the crushable elements with elastic spring elements. Simulation of plastic deformation along with rigid body dynamics and contact force cannot be modeled in ADAMS. Hence the plastic element of the primary strut is replaced with a spring element and the analysis is carried out in ADAMS software. The same analysis is also carried out using the mathematical model, where the simulation of honeycomb crushing is replaced by elastic spring deformation, and the results are compared with the ADAMS analysis. The rotational motion of the linkages and the 6-degree-of-freedom motion of the lunar lander about its CG can thus be validated in ADAMS by replacing the crushing elements with spring elements. The model is also validated by the drop test results of a 4-legged lunar lander. This paper presents the details of the mathematical model generated and its validation. Keywords: honeycomb, landing leg tripod, lunar lander, primary link, secondary link
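The crush-phase physics underlying the first validation phase can be sketched numerically. The following is a minimal one-dimensional illustration, not the authors' full MATLAB tripod model: a point-mass lander is decelerated by a constant honeycomb crush force under lunar gravity, and the resulting stroke can be checked against the conservation-of-energy result. All parameter values are illustrative assumptions, not the paper's data.

```python
# Minimal 1-D touchdown sketch (illustrative only, not the paper's model):
# after leg contact, a constant honeycomb crush force decelerates the
# descending lander mass until its vertical velocity reaches zero.

G_MOON = 1.62  # lunar gravitational acceleration, m/s^2

def touchdown_stroke(mass, v_touchdown, crush_force, dt=1e-4):
    """Integrate the crush phase with explicit Euler; return the
    honeycomb stroke (m) needed to bring the lander to rest."""
    v = v_touchdown          # downward velocity at contact, m/s
    stroke = 0.0
    # net deceleration = (crush force - weight) / mass, constant here
    a = (crush_force - mass * G_MOON) / mass
    if a <= 0.0:
        raise ValueError("crush force too low to stop the lander")
    while v > 0.0:
        v -= a * dt
        stroke += max(v, 0.0) * dt
    return stroke

# assumed values: 300 kg lander, 2 m/s vertical touchdown, 3 kN crush force
stroke = touchdown_stroke(mass=300.0, v_touchdown=2.0, crush_force=3000.0)
```

The energy-balance check mirrors the paper's first validation phase: the kinetic energy at touchdown must equal the work done by the net retarding force over the stroke, i.e. stroke = m·v²/(2·(F − m·g)).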
Procedia PDF Downloads 352
293 Synthesis of Functionalized-2-Aryl-2, 3-Dihydroquinoline-4(1H)-Ones via Fries Rearrangement of Azetidin-2-Ones
Authors: Parvesh Singh, Vipan Kumar, Vishu Mehra
Abstract:
Quinoline-4-ones represent an important class of heterocyclic scaffolds that have attracted significant interest due to their various biological and pharmacological activities. This heterocyclic unit also constitutes an integral component of drugs used for the treatment of neurodegenerative diseases and sleep disorders, and of antibiotics viz. norfloxacin and ciprofloxacin. The synthetic accessibility and possibility of functionalization at varied positions in quinoline-4-ones provide an elegant platform for the design of combinatorial libraries of functionally enriched scaffolds with a range of pharmacological profiles. They are also considered attractive precursors for the synthesis of medicinally important molecules such as non-steroidal androgen receptor antagonists, the antimalarial drug chloroquine, and martinellines with antibacterial activity. 2-Aryl-2,3-dihydroquinolin-4(1H)-ones are present in many natural and non-natural compounds and are considered to be the aza-analogs of flavanones. The β-lactam class of antibiotics is generally recognized to be a cornerstone of human health care due to the unparalleled clinical efficacy and safety of this type of antibacterial compound. In addition to their biological relevance as potential antibiotics, β-lactams have also acquired a prominent place in organic chemistry as synthons, providing highly efficient routes to a variety of compounds of medicinal interest such as non-protein amino acids, oligopeptides, peptidomimetics, and nitrogen heterocycles, as well as biologically active natural and unnatural products such as indolizidine alkaloids, paclitaxel, docetaxel, taxoids, cryptophycins, lankacidins, etc. A straightforward route toward the synthesis of quinoline-4-ones via the triflic acid assisted Fries rearrangement of N-aryl-β-lactams has been reported by Tepe and co-workers. 
The ring expansion observed in this case was attributed solely to the inherent ring strain of the β-lactam ring, because the corresponding γ-lactam failed to undergo rearrangement under the reaction conditions. The above-mentioned protocol has recently been extended by our group to the synthesis of benzo[b]-azocinon-6-ones via a tandem Michael addition–Fries rearrangement of sorbyl anilides, as well as to the single-pot synthesis of 2-aryl-quinolin-4(3H)-ones through the Fries rearrangement of 3-dienyl-β-lactams. In continuation of our synthetic endeavours with the β-lactam ring, and in view of the lack of convenient approaches for the synthesis of C-3 functionalized quinolin-4(1H)-ones, the present work describes the single-pot synthesis of C-3 functionalized quinolin-4(1H)-ones via the triflic acid promoted Fries rearrangement of C-3 vinyl/isopropenyl substituted β-lactams. In addition, DFT calculations and MD simulations were performed to investigate the stability profiles of the synthetic compounds. Keywords: dihydroquinoline, fries rearrangement, azetidin-2-ones, quinoline-4-ones
Procedia PDF Downloads 250
292 Therapy Finding and Perspectives on Limbic Resonance in Gifted Adults
Authors: Andreas Aceranti, Riccardo Dossena, Marco Colorato, Simonetta Vernocchi
Abstract:
By the term “limbic resonance,” we usually refer to a state of deep connection, both emotional and physiological, between people whose limbic systems, when in resonance, are in tune with one another. Limbic resonance is not only about sharing emotions but also physiological states: people in such resonance can influence each other’s heart rate, blood pressure, and breathing. Limbic resonance is fundamental for human beings to connect and create deep bonds within a group, and it is fundamental for our social skills. A relationship between gifted and resonant subjects is perceived as safe, the relation being experienced as an isle of serenity where it is possible to recharge, to communicate without words, to understand each other without giving explanations, and to strengthen the balance of each member of the group. Within the circle, self-esteem is consolidated, making it easier to face what is outside: others and reality. The idea that gifted people who are together may be unfit for the world does not correspond to the truth. The circle made up of people with high cognitive potential and characterized by limbic resonance is, in general, experienced as a solid platform from which one can safely move away and to which one can return to recover strength. We studied 8 adults (between 21 and 47 years old), all with an IQ higher than 130. We monitored their brain wave frequencies (alpha, beta, theta, gamma, delta) by means of a biosensing tracker, along with their physiological states (heart rate, blood pressure, breathing frequency, pO2, pCO2) and selected blood markers (5-HT, dopamine, catecholamines, cortisol). The subjects of the study were asked to adhere to a protocol involving bonding activities (such as team building), role plays, meditation sessions, and group therapy. All these activities were carried out together. 
We observed that after about 4 months of activities, their brain wave frequencies tended to tune to one another more and more quickly. After 9 months, the bond among them was so strong that they could “sense” each other’s inner states and sometimes also guess each other’s thoughts. According to our findings, it may be hypothesized that large synchronized outbursts of cortical neurons produce not only brain waves but also electromagnetic fields that may be able to influence the activity of cortical neurons in other people’s brains by inducing action potentials in large groups of neurons; it is conceivable that this could transmit information such as different emotions and cognition cues to the other’s brain. We believe that upcoming research should focus on clarifying the role of brain magnetic particles in brain-to-brain communication, and that further investigations should be carried out on the presence and role of cryptochromes to evaluate their potential roles in direct brain-to-brain communication. Keywords: limbic resonance, psychotherapy, brain waves, emotion regulation, giftedness
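The five wave classes monitored in the study (alpha, beta, theta, gamma, delta) are conventionally distinguished by frequency cut-offs. A minimal classifier sketch follows; the exact boundaries vary across the literature, so the cut-off values used here are an illustrative assumption, not the study's own definitions.

```python
# Conventional EEG band boundaries in Hz (assumed, illustrative values;
# published boundaries differ slightly between sources).
EEG_BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 13.0),
    ("beta", 13.0, 30.0),
    ("gamma", 30.0, 100.0),
]

def classify_band(freq_hz):
    """Return the EEG band name for a dominant frequency in Hz."""
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"
```

For example, a dominant frequency of 10 Hz falls in the alpha band typical of relaxed wakefulness, while 40 Hz falls in the gamma band.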
Procedia PDF Downloads 92
291 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks
Authors: Andrew N. Saylor, James R. Peters
Abstract:
Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. 
The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects. Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging
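The study trained its networks in TensorFlow 1.13.1; as a library-free illustration of the best-performing topology (three hidden ReLU layers of 100 neurons each, with a linear output regressing the Cobb angle), a plain-Python forward pass might look as follows. The reduced input size and random weights are assumptions for brevity: a real network would take the 500 × 187 = 93,500 scaled pixel intensities as input and learn its weights by stochastic gradient descent.

```python
import random

def relu(x):
    return x if x > 0.0 else 0.0

def dense_forward(x, layers):
    """Propagate input vector x through (weights, biases) layer pairs;
    ReLU on hidden layers, linear activation on the output layer."""
    for i, (W, b) in enumerate(layers):
        x = [sum(w * v for w, v in zip(row, x)) + bi
             for row, bi in zip(W, b)]
        if i < len(layers) - 1:      # hidden layers only
            x = [relu(v) for v in x]
    return x

def random_layer(n_in, n_out, rng):
    """Small random weights, zero biases (untrained, for illustration)."""
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

rng = random.Random(0)
n_inputs = 20                           # stand-in for the 93,500 pixels
sizes = [n_inputs, 100, 100, 100, 1]    # best topology in the study
layers = [random_layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]
angle = dense_forward([0.5] * n_inputs, layers)   # one regressed value
```

The single linear output neuron distinguishes this regression setup from a classification network, which would instead end in a softmax over discrete classes.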
Procedia PDF Downloads 130
290 Outdoor Thermal Comfort Strategies: The Case of Cool Facades
Authors: Noelia L. Alchapar, Cláudia C. Pezzuto, Erica N. Correa
Abstract:
Mitigating urban overheating is key to achieving the environmental and energy sustainability of cities. The management of the optical properties of the materials that make up the urban envelope (roofing, pavement, and facades) constitutes a profitable and effective tool to improve the urban microclimate and rehabilitate urban areas. Each material that makes up the urban envelope has a different capacity to reflect received solar radiation, which alters the fraction of solar radiation absorbed by the city. However, the paradigm of increasing solar reflectance in all areas of the city, without distinguishing their relative position within the urban canyon, can cause serious overheating and discomfort problems for its inhabitants. The hypothesis supporting this research postulates that not all reflective technologies that contribute to urban radiative cooling favor pedestrian thermal comfort conditions in equal measure. The objective of this work is to determine to what degree the management of the optical properties of facades modifies outdoor thermal comfort, given that the mitigation potential of high-reflectance facade materials is strongly conditioned by geographical variables and by the geometric characteristics of the urban profile aspect ratio (H/W). The research was carried out in two climatic contexts, that of the city of Mendoza, Argentina, and that of the city of Campinas, Brazil: BWk and Cwa, respectively, in the Köppen climate classification. Two areas with comparable urban morphology patterns were selected, both located in regions with low horizontal building density and residential zoning. The microclimatic conditions were monitored during the summer period with fixed temperature and humidity sensors inside street canyons. The microclimate model was simulated in ENVI-met V5. 
A grid resolution of 3.5 × 3.5 × 3.5 m was used for both cities, giving a domain of 145 × 145 × 30 grid cells. Based on the validated theoretical model, ten scenarios were simulated by modifying the height of buildings and the solar reflectivity of the facades. Two facade solar reflectivity levels were used, low (0.3) and high (0.75), and the building density scenarios ranged from 1 to 5 levels. The performance of the study scenarios was assessed by comparing air temperature, physiological equivalent temperature (PET), and the universal thermal climate index (UTCI). As a result, it is observed that the behavior of the materials of the urban outdoor space depends on complex interactions, influenced by many urban environmental factors including constructive characteristics, urban morphology, geographic location, and local climate. The role of the vertical urban envelope is decisive for the reduction of urban overheating. One of the causes of thermal gain is the multiple reflections within the urban canyon, which affect not only the air temperature but also pedestrian thermal comfort. One of the main findings of this work points to the remarkable importance of considering both urban warming and the thermal comfort of pedestrians in urban mitigation strategies. Keywords: materials facades, solar reflectivity, thermal comfort, urban cooling
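The link between facade reflectivity and absorbed radiation that drives these results can be illustrated with a first-pass calculation. This sketch deliberately ignores the multiple inter-reflections inside the canyon that the simulations capture, and the 800 W/m² irradiance is an assumed value, not data from the study; only the two reflectivity levels (0.3 and 0.75) come from the scenarios described above.

```python
# First-pass shortwave balance: the fraction of incident solar radiation
# absorbed by a surface is (1 - solar reflectivity). Multiple reflections
# within the canyon, which the ENVI-met simulations resolve, are ignored.

def absorbed_flux(irradiance_w_m2, solar_reflectivity):
    """Absorbed shortwave flux (W/m^2) for a single reflection event."""
    return irradiance_w_m2 * (1.0 - solar_reflectivity)

# assumed incident irradiance of 800 W/m^2 on the facade
low_albedo = absorbed_flux(800.0, 0.30)    # conventional facade
high_albedo = absorbed_flux(800.0, 0.75)   # cool (reflective) facade
```

Even in this simplified view, the high-reflectivity facade absorbs far less energy, but the reflected fraction does not vanish: it is redirected onto pavements, opposite facades, and pedestrians, which is precisely why canyon geometry conditions the comfort outcome.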
Procedia PDF Downloads 92
289 Implementing a Prevention Network for the Ortenaukreis
Authors: Klaus Froehlich-Gildhoff, Ullrich Boettinger, Katharina Rauh, Angela Schickler
Abstract:
The Prevention Network Ortenaukreis (PNO), funded by the German Ministry of Education and Research, aims to promote physical and mental health as well as the social inclusion of 3- to 10-year-old children and their families in the Ortenau district. Within a period of four years starting 11/2014, a community network will be established. One regional and five local prevention representatives are building networks with stakeholders of the prevention and health promotion field, bridging the health care, educational and youth welfare systems in a multidisciplinary approach. The regional prevention representative implements regularly convening prevention and health conferences. On the local level, the five local prevention representatives implement round tables in each area as a platform for networking. In the setting approach, educational institutions play a vital role in gaining access to children and their families. Thus the project will offer 18-month organizational development processes, with specially trained coaches, to 25 kindergartens and 25 primary schools. The process is based on a curriculum of prevention and health promotion which is adapted to the specific needs of the institutions. Also, to ensure that the entire region is reached, demand-oriented advanced education courses are implemented at participating day care centers, kindergartens and schools. Evaluation method: The project is accompanied by an extensive research design to evaluate the outcomes of the different project components, drawing on interview data from community prevention agents, interviews and network analyses with families at risk concerning their support structures, data on community network development and monitoring, as well as data from kindergartens and primary schools. The latter features a waiting-list control group evaluation in kindergartens and primary schools with a mixed-methods design using questionnaires and interviews with pedagogues, teachers, parents, and children. 
Results: By the time of the conference, pre- and post-test data from the kindergarten samples (treatment and control group) will be presented, as well as data from the first project phase, such as qualitative interviews with the prevention coordinators and mixed-methods data from the community needs assessment. In supporting this project, the Federal Ministry aims to gain insight into the efficient components of community prevention and health promotion networks as the project is implemented and evaluated. The district will serve as a model region, so that successful components can be transferred to other regions throughout Germany. Accordingly, transferability to other regions is of high interest in this project. Keywords: childhood research, health promotion, physical health, prevention network, psychological well-being, social inclusion
Procedia PDF Downloads 222
288 Evaluating Urban City Indices: A Study for Investigating Functional Domains, Indicators and Integration Methods
Authors: Fatih Gundogan, Fatih Kafali, Abdullah Karadag, Alper Baloglu, Ersoy Pehlivan, Mustafa Eruyar, Osman Bayram, Orhan Karademiroglu, Wasim Shoman
Abstract:
Nowadays many cities around the world are investing their efforts and resources to facilitate their citizens' lives and make cities more livable and sustainable by implementing the newly emerged phenomenon of the smart city. For this purpose, related research institutions prepare and publish smart city indices or benchmarking reports aiming to measure a city's current ‘smartness’ status. Several functional domains and various indicators, along with different selection and calculation methods, are found within such indices and reports. The selection criteria vary between institutions, resulting in inconsistent rankings and evaluations. This research aims to evaluate the impact of selecting such functional domains, indicators and calculation methods, which may cause changes in the rank. For that, six functional domains, i.e. Environment, Mobility, Economy, People, Living and Governance, were selected, covering 19 focus areas and 41 sub-focus (variable) areas. 60 out of 191 indicators were also selected according to several criteria. These were identified as a result of an extensive literature review of 13 well-known global indices and research efforts, and of the ISO 37120 standard for sustainable development of communities. The values of the identified indicators were obtained from reliable sources for ten cities. The values of each indicator for the selected cities were normalized and standardized to objectively investigate the impact of the chosen indicators. Moreover, the effect of choosing an integration method to represent the values of indicators for each city is investigated by comparing the results of two of the most used methods, i.e. geometric aggregation and fuzzy logic. The essence of these methods is assigning to each indicator a weight reflecting its relative significance. However, the two methods resulted in different weights for the same indicator. 
As a result of this study, the alternation in city ranking resulting from each method was investigated and discussed separately. Generally, each method produced a different ranking for the selected cities. However, it was observed that within certain functional areas the rank remained unchanged under both integration methods. Based on the results of the study, it is recommended to utilize a common platform and method to objectively evaluate cities around the world. The common method should provide policymakers with proper tools to evaluate their decisions and investments relative to other cities. Moreover, for smart city indices, at least 481 different indicators were found, which is an immense number of indicators to be considered, especially for a smart city index. Further work should be devoted to finding mutual indicators representing the index purpose globally and objectively. Keywords: functional domain, urban city index, indicator, smart city
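The normalization and geometric-aggregation steps described above can be sketched as follows. The indicator names, city values and weights are invented for illustration, and the small floor applied before exponentiation is one common workaround for the geometric mean's sensitivity to zero scores, not a choice documented in the study.

```python
# Composite city score: min-max normalize each indicator across cities,
# then combine indicators per city with a weighted geometric mean.

def min_max_normalize(values):
    """Scale raw indicator values to [0, 1] across the compared cities."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def geometric_aggregate(indicators, weights):
    """Weighted geometric mean: product of x_i ** w_i, weights sum to 1."""
    score = 1.0
    for x, w in zip(indicators, weights):
        score *= max(x, 1e-6) ** w   # floor avoids zeroing the product
    return score

# invented raw values for two indicators across three hypothetical cities
air_quality = min_max_normalize([42.0, 55.0, 61.0])
transit = min_max_normalize([0.8, 0.3, 0.5])

weights = [0.5, 0.5]
scores = [geometric_aggregate([a, t], weights)
          for a, t in zip(air_quality, transit)]
best_city = max(range(3), key=lambda i: scores[i])
```

A defining property of geometric aggregation, visible here, is its limited compensability: a city scoring near zero on any single indicator cannot fully offset it with strength elsewhere, unlike a weighted arithmetic mean.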
Procedia PDF Downloads 147
287 Distribution Routes Redesign through the Vehicle Routing Problem in the Havana Distribution Center
Authors: Sonia P. Marrero Duran, Lilian Noya Dominguez, Lisandra Quintana Alvarez, Evert Martinez Perez, Ana Julia Acevedo Urquiaga
Abstract:
Cuban business and economic policy is constantly being updated while facing ever more knowledgeable and demanding clients. For that reason, optimizing processes and services becomes fundamental to companies' competitiveness. One of Cuba's pillars, sustained since the triumph of the Cuban Revolution back in 1959, is free health service to all those who need it. This service is offered without any charge under the concept of preserving human life, but it implies costly management processes and logistics services in order to supply the necessary medicines to all the units that provide health services. One of the key actors in the medicine supply chain is the Havana Distribution Center (HDC), which is responsible for the delivery of medicines in the province, as well as for the acquisition of medicines from national and international producers and their subsequent transport to health care units and pharmacies on time and with the required quality. This HDC also supplies all the distribution centers in the country. Given the eminent need for an actor in the supply chain that specializes in medicine supply, the possibility of centralizing this operation in a logistics service provider is analyzed. Based on this decision, pharmacies operate as clients of the logistics service center, whose main function is to centralize all logistics operations associated with the medicine supply chain. The HDC is precisely the logistics service provider in Havana, and it is the center of this research. In 2017 the pharmacies suffered from deficiencies in medicine availability due to shortcomings in the distribution routes. This is because the routes are not based on routing studies, in addition to the long distribution cycle: the distribution routes are fixed, serve only one type of customer, and respond to territorial location by municipality. 
Taking the above-mentioned problem into consideration, the objective of this research is to optimize the route system of the Havana Distribution Center. To accomplish this objective, the techniques applied were document analysis, random sampling and statistical inference, along with tools such as the Ishikawa diagram and the software packages ArcGIS, OsmAnd and MapInfo. As a result, four distribution alternatives were analyzed: the current route, routes by customer type, routes by municipality, and a combination of the last two. It was demonstrated that the territorial location alternative does not take full advantage of transportation capacities or trip distances, which leads to elevated costs, breaking with the current ways of distribution and the current characteristics of the clients. The principal finding of the investigation was that the optimal distribution route is the fourth one, formed by hospitals on one side and, on the other, the combination of pharmacies, stomatology clinics, polyclinics, and maternal and elderly homes. This solution breaks the territorial location by municipality and permits different distribution cycles depending on medicine consumption and transport availability. Keywords: computerized geographic software, distribution, distribution routes, vehicle routing problem (VRP)
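As a sketch of the kind of vehicle-routing reasoning involved, the following applies a nearest-neighbour construction heuristic, a common VRP starting point rather than the authors' actual method, to an invented distance matrix between the distribution center (node 0) and four hypothetical health-care units.

```python
# Nearest-neighbour route construction: a simple VRP heuristic that
# starts at the depot and always visits the closest unvisited customer.
# The distance matrix below is invented for illustration.

def nearest_neighbor_route(dist, depot=0):
    """Return (route, total distance) for a greedy single-vehicle tour."""
    n = len(dist)
    unvisited = set(range(n)) - {depot}
    route, current = [depot], depot
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[current][j])
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    route.append(depot)                 # return to the depot
    total = sum(dist[a][b] for a, b in zip(route, route[1:]))
    return route, total

# symmetric distances in km: node 0 is the HDC, nodes 1-4 are clients
dist = [
    [0, 4, 9, 7, 3],
    [4, 0, 5, 8, 6],
    [9, 5, 0, 2, 8],
    [7, 8, 2, 0, 5],
    [3, 6, 8, 5, 0],
]
route, total_km = nearest_neighbor_route(dist)
```

Such a greedy tour is only a starting point; practical VRP work layers on vehicle capacities, delivery time windows, and improvement moves (e.g. 2-opt) before comparing alternatives as the study does.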
Procedia PDF Downloads 161
286 Analysis of Differentially Expressed Genes in Spontaneously Occurring Canine Melanoma
Authors: Simona Perga, Chiara Beltramo, Floriana Fruscione, Isabella Martini, Federica Cavallo, Federica Riccardo, Paolo Buracco, Selina Iussich, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari, Paola Modesto
Abstract:
Introduction: Human and canine melanoma share common clinical and histologic characteristics, making dogs a good model for comparative oncology. The identification of specific genes and a better understanding of the genetic landscape, signaling pathways, and tumor–microenvironment interactions involved in cancer onset and progression are essential for the development of therapeutic strategies against this tumor in both species. In the present study, the differential expression of genes in spontaneously occurring canine melanoma and in paired normal tissue was investigated by targeted RNAseq. Material and Methods: Total RNA was extracted from 17 canine malignant melanoma (CMM) samples and from five paired normal tissues stored in RNAlater. In order to capture the greatest genetic variability, gene expression analysis was carried out using two panels (Qiagen), Human Immuno-Oncology (HIO) and Mouse Immuno-Oncology (MIO), and the MiSeq platform (Illumina). These kits allow the detection of the expression profiles of 990 genes involved in the immune response against tumors in humans and mice. The data were analyzed with the CLCbio Genomics Workbench (Qiagen) software using the Canis lupus familiaris genome as a reference. Data analysis was carried out both by comparing the biological groups (tumoral vs. healthy tissues) and by comparing each neoplastic tissue vs. its paired healthy tissue; a fold change greater than 2 and a p-value less than 0.05 were set as the thresholds to select interesting genes. Results and Discussion: Using HIO, 63 down-regulated genes were detected; 13 of those were also down-regulated when comparing neoplastic samples vs. paired healthy tissues. Eighteen genes were up-regulated, 14 of which were also up-regulated when comparing neoplastic samples vs. paired healthy tissues. Using MIO, 35 down-regulated genes were detected; only four of these were also down-regulated when comparing neoplastic samples vs. paired healthy tissues. 
Twelve genes were up-regulated in both types of analysis. Considering the two kits, the greatest variation in fold change was among the up-regulated genes. Dogs displayed greater genetic homology with humans than with mice; moreover, the results have shown that the two kits are able to detect different genes. Most of these genes have specific cellular functions or belong to specific enzymatic categories; some have already been described as correlated with human melanoma, confirming the validity of the dog as a model for the study of the molecular aspects of human melanoma. Keywords: animal model, canine melanoma, gene expression, spontaneous tumors, targeted RNAseq
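The selection rule stated in the methods (fold change greater than 2, p-value below 0.05) can be sketched in plain Python. The gene records below are invented for illustration: the symbols are real immune-oncology genes, but the fold changes and p-values are not from the study.

```python
# Differential-expression filter: keep genes whose fold change (tumor
# vs. healthy) exceeds 2x in either direction with p-value below 0.05.

FC_THRESHOLD = 2.0
P_THRESHOLD = 0.05

def significant_genes(results):
    """results: iterable of (gene, fold_change, p_value) tuples.
    A fold change below 1/2 counts as down-regulation of the same
    magnitude, so both directions are tested symmetrically."""
    selected = []
    for gene, fold_change, p_value in results:
        magnitude = fold_change if fold_change >= 1 else 1.0 / fold_change
        if magnitude > FC_THRESHOLD and p_value < P_THRESHOLD:
            direction = "up" if fold_change >= 1 else "down"
            selected.append((gene, direction))
    return selected

# invented example records (gene symbol, fold change, p-value)
results = [
    ("CD274", 3.1, 0.01),   # passes: strong up-regulation, significant
    ("IL6",   0.4, 0.03),   # passes: 2.5x down-regulation, significant
    ("TP53",  2.5, 0.20),   # fails the p-value threshold
    ("GAPDH", 1.1, 0.001),  # fails the fold-change threshold
]
hits = significant_genes(results)
```

Treating down-regulation as the reciprocal of the fold change is what makes the 2x cut-off symmetric; filtering on raw fold change alone would silently discard strongly down-regulated genes.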
Procedia PDF Downloads 199