Search results for: graph ordering

14 Reconstruction of Alveolar Bone Defects Using Bone Morphogenetic Protein 2 Mediated Rabbit Dental Pulp Stem Cells Seeded on Nano-Hydroxyapatite/Collagen/Poly(L-Lactide)

Authors: Ling-Ling E., Hong-Chen Liu, Dong-Sheng Wang, Fang Su, Xia Wu, Zhan-Ping Shi, Yan Lv, Jia-Zhu Wang

Abstract:

Objective: The objective of the present study is to evaluate the capacity of a tissue-engineered bone complex of recombinant human bone morphogenetic protein 2 (rhBMP-2) mediated dental pulp stem cells (DPSCs) and nano-hydroxyapatite/collagen/poly(L-lactide) (nHAC/PLA) to reconstruct critical-size alveolar bone defects in New Zealand rabbits. Methods: Autologous DPSCs were isolated from rabbit dental pulp tissue and expanded ex vivo to enrich DPSC numbers, and then their attachment and differentiation capability were evaluated when cultured on the culture plate or nHAC/PLA. The alveolar bone defects were treated with nHAC/PLA, nHAC/PLA+rhBMP-2, nHAC/PLA+DPSCs, nHAC/PLA+DPSCs+rhBMP-2, or autogenous bone (AB) obtained from iliac bone, or were left untreated as a control. X-ray and polychrome sequential fluorescent labeling were performed post-operatively, and the animals were sacrificed 12 weeks after operation for histological observation and histomorphometric analysis. Results: Our results showed that DPSCs expressed STRO-1 and vimentin, and favoured osteogenesis and adipogenesis in conditioned media. DPSCs attached and spread well, and retained their osteogenic phenotypes on nHAC/PLA. The rhBMP-2 significantly increased the protein content, alkaline phosphatase (ALP) activity/protein, osteocalcin (OCN) content, and mineral formation of DPSCs cultured on nHAC/PLA. The X-ray images, fluorescent labeling, histological observation and histomorphometric analysis showed that the nHAC/PLA+DPSCs+rhBMP-2 tissue-engineered bone complex had earlier mineralization and more bone formation inside the scaffold than nHAC/PLA, nHAC/PLA+rhBMP-2 and nHAC/PLA+DPSCs, or even autologous bone. The contribution of implanted DPSCs to new bone was detected through transfected eGFP genes. Conclusions: Our findings indicated that stem cells exist in adult rabbit dental pulp tissue. The rhBMP-2 promoted the osteogenic capability of DPSCs as a potential cell source for periodontal bone regeneration. The nHAC/PLA could serve as a good scaffold for autologous DPSC seeding, proliferation and differentiation. The tissue-engineered bone complex with nHAC/PLA, rhBMP-2, and autologous DPSCs might be a better alternative to autologous bone for the clinical reconstruction of periodontal bone defects.

Keywords: nano-hydroxyapatite/collagen/poly (L-lactide), dental pulp stem cell, recombinant human bone morphogenetic protein, bone tissue engineering, alveolar bone

Procedia PDF Downloads 400
13 Evaluation of Antimicrobial and Anti-Inflammatory Activity of Doani Sidr Honey and Madecassoside against Propionibacterium Acnes

Authors: Hana Al-Baghaoi, Kumar Shiva Gubbiyappa, Mayuren Candasamy, Kiruthiga Perumal Vijayaraman

Abstract:

Acne is a chronic inflammatory disease of the sebaceous glands characterized by areas of skin with seborrhea, comedones, papules, pustules, nodules, and possibly scarring. Propionibacterium acnes (P. acnes) plays a key role in the pathogenesis of acne. Its colonization and proliferation trigger the host’s inflammatory response, leading to the production of pro-inflammatory cytokines such as interleukin-8 (IL-8) and tumour necrosis factor-α (TNF-α). The usage of honey and natural compounds to treat skin ailments has strong support in the current trend of drug discovery. The present study was carried out to evaluate the antimicrobial and anti-inflammatory potential of Doani Sidr honey and its fractions against P. acnes and to screen madecassoside alone and in combination with fractions of honey. The broth dilution method was used to assess the antibacterial activity. Also, ultrastructural changes in cell morphology were studied before and after exposure to Sidr honey using transmission electron microscopy (TEM). Three non-toxic concentrations of the samples were investigated for suppression of the cytokines IL-8 and TNF-α by testing the cell supernatants from the co-culture of human peripheral blood mononuclear cells (hPBMCs) with heat-killed P. acnes using enzyme immunoassay (ELISA) kits. The results obtained were evaluated by statistical analysis using GraphPad Prism 5 software. The Doani Sidr honey and polysaccharide fractions were able to inhibit the growth of P. acnes, with noteworthy minimum inhibitory concentration (MIC) values of 18% (w/v) and 29% (w/v), respectively. The proximity of the MIC and MBC values indicates that Doani Sidr honey had a bactericidal effect against P. acnes, which was confirmed by TEM analysis. TEM images of P. acnes after treatment with Doani Sidr honey showed complete physical membrane damage and lysis of cells, whereas non-honey-treated cells (control) did not show any damage. In addition, Doani Sidr honey and its fractions significantly inhibited (> 90%) the secretion of pro-inflammatory cytokines such as TNF-α and IL-8 by hPBMCs pretreated with heat-killed P. acnes. However, no significant inhibition was detected for madecassoside at the highest concentration tested. Our results suggest that Doani Sidr honey possesses both antimicrobial and anti-inflammatory effects against P. acnes and can possibly be used as a therapeutic agent for acne. Furthermore, the polysaccharide fraction derived from Doani Sidr honey showed a potent inhibitory effect toward P. acnes. Hence, we hypothesize that the fraction prepared from Sidr honey might be contributing to the antimicrobial and anti-inflammatory activity. Therefore, this polysaccharide fraction of Doani Sidr honey needs to be further explored and characterized for the various phytochemicals which contribute to its antimicrobial and anti-inflammatory properties.

Keywords: Doani sidr honey, Propionibacterium acnes, IL-8, TNF alpha

Procedia PDF Downloads 400
12 Structural Balance and Creative Tensions in New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

New Product Development involves team members coming together and working in teams to come up with innovative solutions to problems, resulting in new products. Thus, a core attribute of a successful NPD team is its creativity and innovation. Teams need to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user’s needs. They also need to be very efficient in their teamwork as they work through the various stages of the development of these ideas, resulting in a POC (proof-of-concept) implementation or a prototype of the product. There are two distinctive traits that the teams need to have: one is ideational creativity, and the other is effective and efficient teamworking. There are multiple types of tensions that each of these traits causes in the teams, and these tensions reflect in the team dynamics. Ideational conflicts arising out of debates and deliberations increase the collective knowledge and affect team creativity positively. However, the same trait of challenging each other’s viewpoints might lead the team members to be disruptive, resulting in interpersonal tensions, which in turn lead to less than efficient teamwork. Teams that foster and effectively manage these creative tensions are successful, and teams that are not able to manage these tensions show poor team performance. In this paper, we explore these tensions as they manifest in the team communication social network and propose a Creative Tension Balance index along the lines of the Degree of Balance in social networks that has the potential to highlight the successful (and unsuccessful) NPD teams. Team communication reflects the team dynamics among team members and is the data set for analysis. The emails between the members of the NPD teams are processed through a semantic analysis algorithm (LSA) to analyze the content of communication and a semantic similarity analysis to arrive at a social network graph that depicts the communication amongst team members based on the content of communication. This social network is subjected to traditional social network analysis methods to arrive at established metrics and structural balance analysis metrics. Traditional structural balance is extended to include team interaction pattern metrics to arrive at a creative tension balance metric that effectively captures the creative tensions and tension balance in teams. This CTB (Creative Tension Balance) metric captures the signatures of successful and unsuccessful (dissonant) NPD teams. The dataset for this research study includes 23 NPD teams spread out over multiple semesters; we compute the CTB metric and use it to identify the most successful and unsuccessful teams by classifying these teams into low, medium and high performing teams. The results are correlated to the team reflections (for team dynamics and interaction patterns), the team self-evaluation feedback surveys (for teamwork metrics) and team performance through a comprehensive team grade (for high and low performing team signatures).
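As an illustration of the structural-balance component that the CTB index builds on, the sketch below computes the classical degree of balance on a small signed communication graph. It is a minimal example, assuming edge signs have already been derived from the semantic analysis of emails (supportive vs. challenging exchanges), and it omits the team interaction pattern metrics that the full CTB metric adds.

```python
# Minimal sketch: classical degree of balance on a signed team-communication graph.
# Edge signs are assumed to come from a semantic-similarity threshold on email
# content (hypothetical preprocessing); the paper's full CTB index adds further
# interaction-pattern metrics not shown here.
from itertools import combinations
import networkx as nx

def degree_of_balance(G: nx.Graph) -> float:
    """Fraction of closed triads whose product of edge signs is positive."""
    balanced, total = 0, 0
    for a, b, c in combinations(G.nodes, 3):
        if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c):
            sign = G[a][b]["sign"] * G[b][c]["sign"] * G[a][c]["sign"]
            total += 1
            balanced += sign > 0
    return balanced / total if total else 1.0

# Toy team: positive edges = supportive/aligned exchanges, negative = challenging ones.
G = nx.Graph()
G.add_edge("ana", "ben", sign=+1)
G.add_edge("ben", "cam", sign=-1)
G.add_edge("ana", "cam", sign=-1)   # this triad is balanced (+1 * -1 * -1 > 0)
print(degree_of_balance(G))         # -> 1.0
```

A fully balanced graph (value 1.0) corresponds to a team with no unresolved structural tension; lower values indicate dissonant communication structures.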

Keywords: team dynamics, social network analysis, new product development teamwork, structural balance, NPD teams

Procedia PDF Downloads 79
11 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Often pathologists have difficulty in pinpointing and anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels or to activate a sufficient number of technicians. This would expedite the clinical workload, clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and to generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software agents technology, these bots would be configurable in order to simulate different situations, which may arise in a laboratory such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
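As a rough illustration of the traffic-light coding described above, the sketch below classifies workflow stages by queue pressure. The thresholds, stage names and figures are assumed placeholders; the authors' planned implementation is a JavaScript web application fed from an Oracle database, not this Python sketch.

```python
# Minimal sketch of the traffic-light coding of laboratory workflow stages,
# with assumed (hypothetical) load thresholds.
from dataclasses import dataclass

@dataclass
class StageStatus:
    name: str
    waiting: int        # specimens queued at this stage
    capacity: int       # specimens the stage can process per hour

def flow_colour(stage: StageStatus) -> str:
    """Classify a workflow stage as green/orange/red based on queue pressure."""
    load = stage.waiting / max(stage.capacity, 1)
    if load < 0.8:
        return "green"      # normal flow
    if load < 1.2:
        return "orange"     # slow flow
    return "red"            # critical flow / bottleneck

for s in [StageStatus("reception", 30, 60),
          StageStatus("testing", 95, 80),
          StageStatus("validation", 70, 75)]:
    print(s.name, flow_colour(s))
```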

Keywords: laboratory process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 286
10 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design

Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez

Abstract:

Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste can be achieved by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. Based on this, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs) resulting from brewing coffee represent the major waste produced by the coffee industry. The facts that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture, and especially have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourage their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and to ensure a more environment-friendly production. Under this perspective, the aim of this work was to determine statistically an efficient strategy for oil extraction by n-hexane using indirect sonication. A coffee waste mixture of Arabica and Robusta was used in this work. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated with a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. The validation of the model by analysis of variance (ANOVA) showed a good fit of the results obtained for a 95% confidence interval, and the predicted vs. experimental values graph confirmed the satisfactory correlation of the model results. Besides, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for extracting oil as a more economical, green, and efficient way when compared to the Soxhlet method.
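To make the design concrete, the sketch below generates the coded runs of a three-factor central composite rotatable design and fits a second-order response surface by least squares. It is only an illustrative re-implementation with assumed coded levels and placeholder yield values, since the study itself performed the analysis in STATISTICA 7.

```python
# Minimal sketch of a 2^3 central composite rotatable design (CCRD) and a
# second-order (response-surface) fit in coded units; the yield values are
# random placeholders, not the study's measurements.
import itertools
import numpy as np

k = 3                              # temperature, sonication time, solvent-to-solid ratio
alpha = (2 ** k) ** 0.25           # rotatability: alpha = (2^k)^(1/4) ≈ 1.682

factorial = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)
axial = np.vstack([v * alpha * np.eye(k)[i] for i in range(k) for v in (-1, 1)])
center = np.zeros((6, k))          # replicated centre points
X = np.vstack([factorial, axial, center])

def quadratic_terms(X):
    """Columns: intercept, linear, squared and two-factor interaction terms."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(k)] \
         + [X[:, i] ** 2 for i in range(k)] \
         + [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# y would be the measured oil yield for each run; random values stand in here.
rng = np.random.default_rng(0)
y = rng.normal(9, 0.5, size=len(X))
beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
print(beta.round(3))               # fitted second-order coefficients
```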

Keywords: coffee waste, optimization, oil yield, statistical planning

Procedia PDF Downloads 119
9 Clinical Application of Measurement of Eyeball Movement for Diagnose of Autism

Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii

Abstract:

This paper presents the development of an objective index using the measurement of subtle eyeball movement to diagnose autism. Assessment of developmental disabilities varies, and the diagnosis depends on the subjective judgment of professionals. Therefore, a supplementary inspection method that will enable anyone to obtain the same quantitative judgment is needed. In conventional autism studies, the diagnosis is made based on a comparison of the time spent gazing at an object, but the results do not match. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and comparing the number of pixels in the overlapping parts based on an afterimage. Then we developed the objective evaluation indicator to judge non-autistic and autistic people more clearly than conventional methods by analyzing the differences of subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and his/her eyeballs always stay fixed at that point, their eyes perform subtle fixating movements (i.e., tremors, drifts, microsaccades) to keep the retinal image clear. In particular, microsaccades are linked to neural activity and reflect the mechanisms that process vision in the brain. We converted the differences between these movements into numbers. The process of the conversion is as follows: 1) Select the pixels indicating the subject's pupil from images of captured frames. 2) Set up a reference image, known as an afterimage, from the pixels indicating the subject's pupil. 3) Divide the pupil of the subject into four from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels in the overlapping part with the present pixels based on the afterimage. 5) Process the images with precision at 24 - 30 fps from a camera and convert the amount of change in the pixels of the subtle movements of the right and left eyeballs into numbers. The difference in the area of the amount of change is obtained by measuring the difference between the afterimage in consecutive frames and the present frame. We set this amount of change as the quantity of the subtle eyeball movements. This method made it possible to detect a change of the eyeball vibration as a numerical value. By comparing the numerical value between the right and left eyes, we found that there is a difference in how much they move. We compared the difference in these movements between non-autistic and autistic people and analyzed the result. Our research subjects consist of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values through pursuit movements and fixations. We converted the difference in subtle movements between the right and left eyes into a graph and defined it as a multidimensional measure. Then we set the identification border using the density function of the distribution, the cumulative frequency function, and an ROC curve. With this, we established an objective index to determine autism, normal, false positive, and false negative.
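The quadrant-overlap counting in steps 1-4 can be sketched with binary pupil masks as below. This is a minimal illustration assuming the pupil has already been segmented from each frame and the afterimage built upstream; it is not the authors' exact implementation.

```python
# Minimal sketch of per-quadrant overlap counting between the afterimage and
# the current pupil mask; pupil segmentation and afterimage construction are
# assumed to happen upstream.
import numpy as np

def quadrant_overlap(afterimage: np.ndarray, frame: np.ndarray) -> list[int]:
    """Count pixels where the current pupil mask overlaps the afterimage,
    separately for the four quadrants around the mask centroid."""
    ys, xs = np.nonzero(frame)
    cy, cx = int(ys.mean()), int(xs.mean())          # pupil centre
    overlap = afterimage.astype(bool) & frame.astype(bool)
    quadrants = [overlap[:cy, :cx], overlap[:cy, cx:],
                 overlap[cy:, :cx], overlap[cy:, cx:]]
    return [int(q.sum()) for q in quadrants]

# Toy masks: a 9x9 pupil that has drifted one pixel to the right between frames.
after = np.zeros((32, 32), dtype=np.uint8)
after[12:21, 12:21] = 1
current = np.roll(after, shift=1, axis=1)
left_eye = quadrant_overlap(after, current)
print(left_eye)   # per-quadrant overlap counts; compare these across left/right eyes
```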

Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve

Procedia PDF Downloads 278
8 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development

Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg

Abstract:

Due to the fugacity of markets and the reduction of product lifecycles, manufacturing companies from high-wage countries are nowadays faced with the challenge to place more innovative products within even shorter development time on the market. At the same time, volatile customer requirements have to be satisfied in order to successfully differentiate from market competitors. One potential approach to address the explained challenges is provided by agile values and principles. These agile values and principles have already proved their success within software development projects in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. Those models lead to significant improvements regarding quality, costs and development time and are therefore used within most software development projects. Motivated by the success within the software industry, manufacturing companies have tried to transfer agile mechanisms of action to the development of hardware products ever since. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has been developed for hardware development yet, due to differing constraints of the domains. For this reason, this paper focuses on the design of agile product development processes by transferring mechanisms of action used in agile software development towards product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and then symbiotically composing the elements of both systems with respect to the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of system theory. By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities and artefacts are identified within a target-, action- and object-system. In partial model two, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified into a superior strategy level, a system level comprising characteristic, domain-independent activities and their cause-effect relationships, as well as an activity-based element level. Within partial model three, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. For this reason, the target-, action- and object-system of product development are compared with the strategy, system and element levels of the agile mechanisms of action by using graph theory. Furthermore, the necessity of existence of activities within an iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in the form of different types of iterations in a last step. By defining iteration-differentiating characteristics and their interdependencies, a logic for the configuration of activities, their form of execution as well as relevant artefacts for the specific iteration is developed. Furthermore, characteristic types of iteration for agile product development are identified.

Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom

Procedia PDF Downloads 207
7 Assessment of Potential Chemical Exposure to Betamethasone Valerate and Clobetasol Propionate in Pharmaceutical Manufacturing Laboratories

Authors: Nadeen Felemban, Hamsa Banjer, Rabaah Jaafari

Abstract:

One of the most common hazards in the pharmaceutical industry is the chemical hazard, which can cause harm or lead to occupational health diseases/illnesses due to chronic exposure to hazardous substances. Therefore, a chemical agent management system is required, including hazard identification, risk assessment, controls for specific hazards and inspections, to keep the workplace healthy and safe. However, routine management monitoring is also required to verify the effectiveness of the control measures. Moreover, Betamethasone Valerate and Clobetasol Propionate are some of the APIs (Active Pharmaceutical Ingredients) with a highly hazardous classification, Occupational Hazard Category (OHC 4), which requires full containment (ECA-D) during handling to avoid chemical exposure. According to the Safety Data Sheets, these chemicals are reproductive toxicants (reprotoxicant H360D), which may affect female workers’ health, cause fatal damage to an unborn child, or impair fertility. In this study, a qualitative chemical risk assessment (qCRA) was conducted to assess the chemical exposure during handling of Betamethasone Valerate and Clobetasol Propionate in pharmaceutical laboratories. The outcomes of the qCRA identified a risk of potential chemical exposure (risk rating 8, Amber risk). Therefore, immediate actions were taken to ensure interim controls (according to the hierarchy of controls) are in place and in use to minimize the risk of chemical exposure. No open handling should be done outside the Steroid Glove Box Isolator (SGB) without the required Personal Protective Equipment (PPE). The PPE includes a coverall, nitrile gloves, safety shoes and a powered air-purifying respirator (PAPR). Furthermore, a quantitative assessment (personal air sampling) was conducted to verify the effectiveness of the engineering controls (SGB Isolator) and to confirm whether there is chemical exposure, as indicated earlier by the qCRA. Three personal air samples were collected using an air sampling pump and filter (IOM2 filters, 25mm glass fiber media). The collected samples were analyzed by HPLC in the BV lab, and the measured concentrations were reported in ug/m3 with reference to Occupational Exposure Limits, 8-hr OELs (8-hr TWA), for each analyte. The analytical results are needed as an 8-hr TWA (8-hr Time-Weighted Average) to be analyzed using Bayesian statistics (IHDataAnalyst). The results of the Bayesian likelihood graph indicate Category 0, which means exposures are de minimis, trivial, or non-existent, and employees have little to no exposure. Also, these results indicate that the 3 samplings are representative, with very low variation (SD = 0.0014). In conclusion, the engineering controls were effective in protecting the operators from such exposure. However, routine chemical monitoring is required every 3 years unless there is a change in the process or type of chemicals. Also, frequent management monitoring (daily, weekly, and monthly) is required to ensure the control measures are in place and in use. Furthermore, a Similar Exposure Group (SEG) was identified in this activity and included in the annual health surveillance for health monitoring.
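For clarity, the sketch below shows how an 8-hr time-weighted average of the kind used for the OEL comparison can be computed from sampled task concentrations. The concentrations, durations and OEL value are hypothetical placeholders rather than the study's measured results, and the Bayesian categorisation done in IHDataAnalyst is not reproduced.

```python
# Minimal sketch of an 8-hour time-weighted average (TWA) compared against an OEL;
# all numbers below are hypothetical placeholders, not the study's measurements.
def twa_8h(samples: list[tuple[float, float]]) -> float:
    """samples: (concentration in ug/m3, duration in hours); unsampled time counts as zero."""
    exposure = sum(c * t for c, t in samples)
    return exposure / 8.0

samples = [(0.02, 3.5), (0.015, 2.0), (0.01, 1.5)]   # three task periods in one shift
oel = 0.1                                            # hypothetical 8-hr OEL in ug/m3
twa = twa_8h(samples)
print(f"8-hr TWA = {twa:.4f} ug/m3, fraction of OEL = {twa / oel:.2f}")
```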

Keywords: occupational health and safety, risk assessment, chemical exposure, hierarchy of control, reproductive

Procedia PDF Downloads 173
6 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. In this equation, when the variable φ, which represents a flow property, is taken as a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solutions are more frequently preferred to analytic ones. The finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns; by adding the continuity equation to the system, the number of equations becomes equal to the number of unknowns. In this situation, the velocity and pressure components emerge as the two important parameters. In the solution of the differential equation system, velocities and pressures must be solved together. However, in the considered grid system, when pressure and velocity values are jointly solved at the same nodal points, some problems arise. To overcome this problem, using a staggered grid system is a preferred solution method. For the computerized solution of the staggered grid system, various algorithms were developed; of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for a Newtonian flow, with mass and gravitational forces neglected, for an incompressible and laminar fluid, in a hydrodynamically fully developed region and in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method. SIMPLE and SIMPLER were used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods. Also, the SIMPLE and SIMPLER solution algorithms were compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, despite some disadvantages, the SIMPLER algorithm proved more practical and gave results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. For the solution of the system, the number of grid points and nodes was estimated. At the same time, to show that the obtained results are sufficiently accurate, a grid-independence (GCI) analysis was performed for coarse, medium and fine grid systems over the solution domain. It was observed that, when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
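As a minimal illustration of the iterative solver mentioned above, the sketch below applies Gauss-Seidel sweeps to a two-dimensional Laplace problem on a uniform grid. This stands in for the discretized pressure-correction equation; the full SIMPLE/SIMPLER velocity-pressure coupling (and the original Delphi code) is not reproduced.

```python
# Minimal sketch of Gauss-Seidel iteration on a uniform 2-D grid, used here on
# a Laplace problem as a stand-in for the discretized pressure-correction
# equation of the SIMPLE/SIMPLER algorithms.
import numpy as np

def gauss_seidel(phi: np.ndarray, tol: float = 1e-5, max_iter: int = 10_000) -> np.ndarray:
    """In-place Gauss-Seidel sweeps for nabla^2 phi = 0 with fixed boundary values."""
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                new = 0.25 * (phi[i + 1, j] + phi[i - 1, j] + phi[i, j + 1] + phi[i, j - 1])
                max_change = max(max_change, abs(new - phi[i, j]))
                phi[i, j] = new      # updated value used immediately (Gauss-Seidel)
        if max_change < tol:
            break
    return phi

phi = np.zeros((21, 21))
phi[0, :] = 1.0                      # Dirichlet condition on one wall
print(gauss_seidel(phi)[10, 10].round(4))
```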

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 391
5 Political Communication in Twitter Interactions between Government, News Media and Citizens in Mexico

Authors: Jorge Cortés, Alejandra Martínez, Carlos Pérez, Anaid Simón

Abstract:

The presence of government, news media, and general citizenry in social media allows considering interactions between them as a form of political communication (i.e. the public exchange of contradictory discourses about politics). Twitter’s asymmetrical following model (users can follow, mention or reply to other users that do not follow them) could foster alternative democratic practices and have an impact on Mexican political culture, which has been marked by a lack of direct communication channels between these actors. The research aim is to assess Twitter’s role in political communication practices through the analysis of interaction dynamics between government, news media, and citizens by extracting and visualizing data from Twitter’s API to observe general behavior patterns. The hypothesis is that, regardless of the fact that Twitter’s features enable direct and horizontal interactions between actors, users repeat traditional dynamics of interaction without taking full advantage of the possibilities of this medium. Through an interdisciplinary team including Communication Strategies, Information Design, and Interaction Systems, the activity on Twitter generated by the controversy over the presence of Uber in Mexico City was analysed; an issue of public interest, involving aspects such as public opinion, economic interests and a legal dimension. This research includes techniques from social network analysis (SNA), a methodological approach focused on the comprehension of the relationships between actors through the visual representation and measurement of network characteristics. The analysis of the Uber event comprised data extraction, data categorization, corpus construction, corpus visualization and analysis. In the extraction stage, TAGS, a Google Sheets template, was used to extract tweets that included the hashtags #UberSeQueda and #UberSeVa, posts containing the string Uber and tweets directed to @uber_mx. Using scripts written in Python, the data was filtered, discarding tweets with no interaction (replies, retweets or mentions) and locations outside of México. Considerations regarding bots and the omission of anecdotal posts were also taken into account. The utility of graphs to observe interactions of political communication in general was confirmed by the analysis of visualizations generated with programs such as Gephi and NodeXL. However, some aspects require improvements to obtain more useful visual representations for this type of research. For example, link crossings complicate following the direction of an interaction, forcing users to manipulate the graph to see it clearly. It was concluded that some practices prevalent in political communication in Mexico are replicated in Twitter. Media actors tend to group together instead of interacting with others. The political system tends to tweet as an advertising strategy rather than to generate dialogue. However, some actors were identified as bridges establishing communication between the three spheres, generating a more democratic exercise and taking advantage of Twitter’s possibilities. Although interactions in Twitter could become an alternative to political communication, this potential depends on the intentions of the participants and to what extent they are aiming for collaborative and direct communications. Further research is needed to get a deeper understanding of the political behavior of Twitter users and the possibilities of SNA for its analysis.
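A rough sketch of the filtering and graph-building stage is given below, assuming the TAGS sheet has been exported to CSV with hypothetical column names (from_user, in_reply_to, text, location); the authors' actual scripts and column layout may differ. The resulting graph is written out for visual exploration in Gephi or NodeXL.

```python
# Minimal sketch: keep only interacting tweets from Mexican accounts and build a
# directed interaction graph; column names and the CSV file are assumptions.
import csv
import networkx as nx

G = nx.DiGraph()
with open("uber_tweets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if "mexico" not in (row.get("location") or "").lower():
            continue                                  # keep only Mexican accounts
        source = row["from_user"]
        targets = ([row["in_reply_to"]] if row.get("in_reply_to") else []) \
                + [w.lstrip("@") for w in row["text"].split() if w.startswith("@")]
        for target in targets:                        # discard tweets with no interaction
            if target:
                G.add_edge(source, target)

# Export for visual exploration in Gephi / NodeXL.
nx.write_gexf(G, "uber_interactions.gexf")
print(G.number_of_nodes(), G.number_of_edges())
```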

Keywords: interaction, political communication, social network analysis, Twitter

Procedia PDF Downloads 221
4 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of a text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in the case of these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as necessary tasks given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems by using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. A DNN with one hidden layer was initialized, and the number of hidden layers was increased during training up to five. A refinement, which consisted of updating the weight matrices plus bias terms with Stochastic Gradient Descent (SGD) training, was also performed. The objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost and a combination of both) were used as the selection technique. The performance of the ASR system will be calculated by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Some experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was a relative WER improvement of 6%. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
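The overall loop in steps (a)-(c) can be summarised as below. Here train(), decode() and confidence() are hypothetical stand-ins for the toolkit-specific DNN training, lattice decoding and lattice-cost scoring (the graph cost, acoustic cost and their combination are abstracted into a single confidence score), so this is a sketch of the procedure rather than the authors' implementation.

```python
# Minimal sketch of the semi-supervised ASR loop; the training, decoding and
# confidence-scoring routines are passed in as callables because they stand for
# toolkit-specific operations not shown here.
def semi_supervised_asr(train, decode, confidence,
                        labelled, unlabelled, threshold=0.9, rounds=3):
    """Grow the training set with automatically transcribed, high-confidence audio."""
    model = train(labelled)                                 # (a) seed DNN model
    for _ in range(rounds):
        selected = []
        for audio in unlabelled:
            hypothesis, lattice = decode(model, audio)      # (b) decode unlabelled data
            if confidence(lattice) >= threshold:            # lattice-based selection
                selected.append((audio, hypothesis))
        if not selected:
            break
        model = train(labelled + selected)                  # (c) retrain the seed model
        chosen = {id(a) for a, _ in selected}
        unlabelled = [a for a in unlabelled if id(a) not in chosen]
    return model
```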

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 339
3 Integrated Mathematical Modeling and Advance Visualization of Magnetic Nanoparticle for Drug Delivery, Drug Release and Effects to Cancer Cell Treatment

Authors: Norma Binti Alias, Che Rahim Che The, Norfarizan Mohd Said, Sakinah Abdul Hanan, Akhtar Ali

Abstract:

This paper discusses the transport of magnetic drug targeting through blood within vessels, tissues and cells. Three integrated mathematical models are discussed and analyzed for the concentration of drug and blood flow through magnetic nanoparticles. Cell therapy has brought advancement in the field of nanotechnology to fight against tumors. The systematic therapeutic effect on single cells can reduce the growth of cancer tissue. This nanoscale phenomenon can be measured and modelled by identifying some parameters and applying fundamental principles of mathematical modeling and simulation. The mathematical modeling of single cell growth depends on three types of cell densities: proliferative, quiescent and necrotic cells. The aim of this paper is to enhance the simulation of three types of models. The first model represents the transport of drugs by coupled partial differential equations (PDEs) of 3D parabolic type in a cylindrical coordinate system. This model is integrated with non-Newtonian flow equations, treating the flowing blood as the medium of the transportation system, and with the magnetic force on the magnetic nanoparticles. The interaction of the magnetic force with the drug's magnetic properties produces induced currents, and the applied magnetic field yields forces which tend to slow the movement of blood and bring the drug to the cancer cells. Nanoscale devices allow the drug to leave the blood vessels and even spread out through the tissue and access the cancer cells. The second model is the transport of drug nanoparticles from the vascular system to a single cell. The treatment of the vascular system requires the identification of parameters such as magnetic nanoparticle targeted delivery, blood flow, momentum transport, density and viscosity of the drug and blood medium, intensity of magnetic fields and the radius of the capillary. Based on two discretization techniques, the finite difference method (FDM) and the finite element method (FEM), the set of integrated models is transformed into a series of grid points to obtain a large system of equations. The third model is a single cell density model involving three sets of first-order PDEs for proliferating, quiescent and necrotic cell densities changing over time and space in Cartesian coordinates, regulated by different rates of nutrient consumption. The model presents proliferative and quiescent cell growth depending on some parameter changes, while the necrotic cells emerge as the tumor core. Some numerical schemes for solving the system of equations are compared and analyzed. Simulation and computation of the discretized model are supported by the Matlab and C programming languages on a single processing unit. Some numerical results and analyses of the algorithms are presented in terms of informative tables, multiple graphs and multidimensional visualization. In conclusion, the integration of the three types of mathematical models and the comparison of numerical performance indicate the superior tools and analysis for solving the complete magnetic drug delivery system, which has significant effects on the growth of the targeted cancer cell.
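As a minimal illustration of the finite difference discretization used for the transport models, the sketch below advances a one-dimensional advection-diffusion equation for drug concentration with an explicit scheme. It is a simplified stand-in for the paper's 3-D parabolic model in cylindrical coordinates, and the diffusion coefficient, velocity and grid values are assumed for illustration only.

```python
# Minimal FDM sketch: explicit step for dc/dt = D d2c/dx2 - v dc/dx in 1-D,
# standing in for the 3-D parabolic drug-transport model; D, v and the grid
# are assumed values chosen to keep the explicit scheme stable.
import numpy as np

def step(c, D=1e-3, v=0.05, dx=0.01, dt=0.01):
    """One explicit FDM step with central differences in space (interior nodes)."""
    cn = c.copy()
    diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    adv = -v * (c[2:] - c[:-2]) / (2 * dx)
    cn[1:-1] = c[1:-1] + dt * (diff + adv)
    return cn                     # boundary nodes kept fixed (Dirichlet)

c = np.zeros(101)
c[0] = 1.0                        # constant drug concentration at the vessel inlet
for _ in range(500):
    c = step(c)
print(c[:10].round(3))
```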

Keywords: mathematical modeling, visualization, PDE models, magnetic nanoparticle drug delivery model, drug release model, single cell effects, avascular tumor growth, numerical analysis

Procedia PDF Downloads 428
2 Musictherapy and Gardentherapy: A Systemic Approach for the Life Quality of the PsychoPhysical Disability

Authors: Adriana De Serio, Donato Forenza

Abstract:

Aims. In this experimental research the Authors present the methodological plan “Musictherapy and Gardentherapy”, which they created interconnected with garden landscape ecosystems and aimed at PsychoPhysical Disability (MusGarPPhyD). In the context of environmental education aimed at spreading landscape culture and its values, it is necessary to develop a solid perception of environmental sustainability and to implement a multidimensional approach that pays attention to the conservation and enhancement of gardens and natural environments. The result is an improvement in the quality of life, also in compliance with the objectives of the European Agenda 2030. The MusGarPPhyD can help professionals such as musictherapists and environmental and landscape researchers strengthen subjects’ motivation to learn to deal with the psychophysical discomfort associated with disability, to cope with distress, psychological fragility, loneliness and social seclusion, and to promote productive social relationships. Materials and Methods. The MusGarPPhyD was implemented in multiple spaces. The musictherapy treatments took place first inside residential therapeutic centres and then in the garden landscape ecosystem. Patients: twenty, set in two groups. Weekly sessions (50’) for three months. Methodological phases: - Phase P1. MusicTherapy treatments for each group in the indoor spaces. - Phase P2. MusicTherapy sessions inside the gardens. After each Phase, P1 and P2: - a Questionnaire for each patient (ten items / liking indices) was administered at time t0, during the treatment, and at time tn at the end of the treatment. - Monitoring of patients’ behavioral responses through assessment scales, matrix, table and graph system. MusicTherapy methodology: patient Sonorous-Musical Anamnesis, Musictherapy Assessment Document, Observation Protocols, Bodily-Environmental-Rhythmical-Sonorous-Vocal-Energy production first indoors and then outside, sonorous-musical instruments and edible instruments made by the Author/musictherapist with some foods; administration of the Patient-Environment-Music Index at time t0 and tn, to estimate the patient’s behavior evolution, and the Musictherapeutic Advancement Index. Results. The MusGarPPhyD can strengthen the individual sense of identity and improve the psychophysical skills and the resilience needed to face and overcome the difficulties caused by congenital/acquired disability. The multi-sensory perceptions deriving from contact with the plants in the gardens improve psychological well-being and regulate physiological parameters such as blood pressure and cardiac and respiratory rhythm, reducing cholesterol levels. The secretion of the peptide hormone endorphins and the endogenous opioid enkephalins increases and brings a state of tranquillity and a better mood to the patient. The subjects showed a preference for musictherapy treatments within a setting made up of gardens and peculiar landscape systems. This resulted in greater health benefits. Conclusions. The MusGarPPhyD contributes to reducing psychophysical tensions, anxiety, depression and stress, facilitating the connections between the cerebral hemispheres, thus also improving intellectual performance, self-confidence, motor skills and social interactions. Therefore, it is necessary to design hospitals, rehabilitation centers and nursing homes surrounded by gardens. Ecosystems of natural and urban parks and gardens create fascinating skylines and mosaics of landscapes rich in beauty and biodiversity. 
The MusGarPPhyD is useful for health management, promoting the patient’s psychophysical activation, a better mood/affective tone and relationships, and contributing significantly to improving the quality of life.

Keywords: musictherapy, gardentherapy, disability, life quality

Procedia PDF Downloads 72
1 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: The current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets. While the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but happening at an unexpected time of day) or subtle changes in machine behaviour. Machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity etc., reduce unplanned downtimes and consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT, Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian to optimise storage and query performance. The sampling may inadvertently discard values that might contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. 
Conclusions: The research demonstrates that a hybrid GNN based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's 'process control system' in real time to perform forecasting and classification tasks to aid asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials are planned for this model in the future in other manufacturing industries.
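To illustrate the entropy and spectral-change estimates that the hybrid model combines with the GNN, the sketch below derives both features from sliding windows of a sensor signal. The window length, bin count and change metric are assumed values, and the GNN itself (and the Historian interface) is not reproduced here.

```python
# Minimal sketch: per-window Shannon entropy and spectral change of a sensor
# signal, of the kind the hybrid model could feed alongside the GNN; all
# parameter choices below are illustrative assumptions.
import numpy as np

def window_features(signal: np.ndarray, window: int = 64, bins: int = 16):
    """Per-window Shannon entropy and spectral change versus the previous window."""
    feats, prev_spectrum = [], None
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        hist, _ = np.histogram(w, bins=bins, density=True)
        p = hist[hist > 0] / hist[hist > 0].sum()
        entropy = float(-(p * np.log2(p)).sum())
        spectrum = np.abs(np.fft.rfft(w - w.mean()))
        change = 0.0 if prev_spectrum is None else float(np.linalg.norm(spectrum - prev_spectrum))
        prev_spectrum = spectrum
        feats.append((entropy, change))
    return feats

rng = np.random.default_rng(1)
flow = np.sin(np.linspace(0, 40, 1024)) + 0.1 * rng.normal(size=1024)
flow[600:700] += 1.5                      # injected behavioural change
print(window_features(flow)[:4])          # (entropy, spectral change) per window
```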

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 150