Search results for: graph
38 A Strategy to Oil Production Placement Zones Based on Maximum Closeness
Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes
Abstract:
Increasing the oil recovery factor of an oil reservoir has been a long-standing concern of the oil industry. Usually, the production placement zones are defined after analysis of geological and petrophysical parameters, with rock porosity, permeability and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) represents an important step in the process of reservoir characterization, since it may identify specific regions in the reservoir with similar petrophysical and fluid flow properties and, in particular, support techniques for the placement of production zones that favour the tracing of directional wells. A HFU is defined as a representative volume of the total reservoir rock in which petrophysical and fluid flow properties are internally consistent and predictably distinct from those of other reservoir rocks. Technically, a HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified and, using the discrete rock type (DRT) classification, a set of connected cell clusters is found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Taking the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and oil cumulative production simulations was carried out for the SPE Model 2 and the UNISIM-I-D synthetic fields, the latter built from public data available from the actual Namorado Field, Campos Basin, Brazil.
The results have shown that the MaxC cells are technically feasible and very reliable as high-performance production placement zones.
Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone
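The two quantities behind this workflow, the RQI/FZI computation and the maximum-closeness selection, can be sketched in a few lines. The RQI and FZI follow the standard definitions the abstract refers to (RQI = 0.0314·sqrt(k/φ), FZI = RQI/φz with φz = φ/(1−φ)); the toy cell cluster and its adjacency are invented for illustration, and closeness centrality is computed by plain BFS rather than with a graph library.

```python
import math
from collections import deque

def fzi(phi, k):
    """Flow zone indicator from effective porosity (fraction) and permeability (mD)."""
    rqi = 0.0314 * math.sqrt(k / phi)   # reservoir quality index
    phi_z = phi / (1.0 - phi)           # normalized porosity index
    return rqi / phi_z

def closeness(adj, node):
    """Closeness centrality of `node` in an unweighted graph given as an adjacency dict."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return (len(dist) - 1) / sum(dist.values())

# Toy cluster of connected DRT cells (a 5-cell chain): the centre cell
# should come out as the MaxC cell, i.e. the suggested production zone.
cluster = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
maxc_cell = max(cluster, key=lambda n: closeness(cluster, n))
```

Selecting the cell that maximises closeness mimics the MaxC criterion: the centre cell of the chain minimises total graph distance to the rest of the cluster.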
Procedia PDF Downloads 328
37 Solubility of Carbon Dioxide in Methoxy and Nitrile-Functionalized Ionic Liquids
Authors: D. A. Bruzon, G. Tapang, I. S. Martinez
Abstract:
Global warming and climate change are significant environmental concerns which require immediate global action on carbon emission mitigation. The capture, sequestration, and conversion of carbon dioxide to other products such as methane or ethanol are ways to control excessive emissions. Among the materials studied, ionic liquids have shown great potential as carbon capture solvents and as catalysts in the reduction of CO2. In this study, ionic liquids comprising a methoxy (-OCH3) and cyano (-CN) functionalized imidazolium cation, [MOBMIM] and [CNBMIM] respectively, paired with the tris(pentafluoroethyl)trifluorophosphate [FAP] anion, were evaluated as capture solvents and as organocatalysts in the reduction of CO2. An in-situ electrochemical set-up, which can measure controlled amounts of CO2 both in the gas and in the ionic liquid phase, was used. Initially, reduction potentials of CO2 in the CO2-saturated ionic liquids containing the internal standard cobaltocene were determined using cyclic voltammetry. Chronoamperometric transients were obtained at potentials slightly less negative than the reduction potentials of CO2 in each ionic liquid. The time-dependent current response was measured under a controlled atmosphere. Reduction potentials of CO2 in the methoxy- and cyano-functionalized [FAP] ionic liquids were observed to occur at ca. -1.0 V (vs. Cc+/Cc), significantly lower than in the non-functionalized analog [PMIM][FAP], with an observed CO2 reduction potential of -1.6 V (vs. Cc+/Cc). This decrease in the potential required for CO2 reduction in the functionalized ionic liquids shows that the methoxy and cyano groups effectively decreased the free energy of formation of the radical anion CO2•−, suggesting that these electrolytes may be used as organocatalysts in the reduction of the greenhouse gas.
However, upon analyzing the solubility of the gas in each ionic liquid, [PMIM][FAP] showed the highest absorption capacity, at 4.81 mM under saturated conditions, compared to [MOBMIM][FAP] at 1.86 mM and [CNBMIM][FAP] at 0.76 mM. Also, the Henry's constants determined from the concentration-pressure graph of each functionalized ionic liquid show that -OCH3 and -CN groups attached terminally to a C4 alkyl chain do not significantly improve CO2 solubility.
Keywords: carbon capture, CO2 reduction, electrochemistry, ionic liquids
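Extracting a Henry's-law constant from a concentration-pressure series reduces to a slope fit through the origin, taking p = K_H·c. A minimal sketch; the isotherm points below are invented for illustration (chosen to end near the 4.81 mM saturation value reported for [PMIM][FAP]) and are not the study's data:

```python
def henrys_constant(pressures, concentrations):
    """Least-squares slope of p = K_H * c through the origin,
    i.e. a Henry's-law constant from a concentration-pressure series."""
    num = sum(p * c for p, c in zip(pressures, concentrations))
    den = sum(c * c for c in concentrations)
    return num / den

# Hypothetical CO2 isotherm: pressure in bar, dissolved CO2 in mM.
p = [0.2, 0.4, 0.6, 0.8, 1.0]
c = [0.96, 1.92, 2.88, 3.84, 4.81]
kh = henrys_constant(p, c)   # bar per mM
```

A larger K_H means less gas dissolves per unit pressure, which is how the -OCH3 and -CN substituents' lack of solubility benefit would show up in the fitted constants.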
Procedia PDF Downloads 399
36 Effect of Halo Protection Device on the Aerodynamic Performance of Formula Racecar
Authors: Mark Lin, Periklis Papadopoulos
Abstract:
This paper explores the aerodynamics of a formula racecar when a ‘halo’ driver-protection device is added to the chassis. The halo protection device was introduced at the start of the 2018 racing season as a safety measure against foreign object impacts that a driver may encounter when driving an open-wheel racecar. In the year since its introduction, the device has received wide acclaim for protecting the driver on two separate occasions. The benefit of such a safety device certainly cannot be disputed. However, adding the halo device to a car changes the airflow around the vehicle, most notably to the engine air intake and the rear wing. These negative effects on the air supply to the engine, and equally on the downforce created by the rear wing, are studied in this paper using numerical techniques, and the resulting CFD outputs are presented and discussed. Comparing racecar designs prior to and after the introduction of the halo device, it is shown that the design of the air intake and the rear wing has not been adapted since the addition of the halo device. The reduction in engine intake mass flow due to the halo device is computed and presented for a range of car speeds. Because of the location of the halo device in relation to the air intake, airflow is directed away from the engine, making the engine perform less than optimally. The reduction is quantified in this paper to show the corresponding decrease in engine output when compared to a similar car without the halo device. This paper shows, through aerodynamic arguments, that the engine in a halo car will not receive the unobstructed, clean airflow that a non-halo car does. Another negative effect is on the downforce created by the rear wing. Because the amount of downforce created by the rear wing is influenced by every component that comes before it, when a halo device is added upstream of the rear wing, airflow is obstructed and less is available for making downforce.
This reduction in downforce becomes especially dramatic as the speed increases. This paper presents a graph of downforce over a range of speeds for a car with and without the halo device. Although driver safety is paramount, the negative effects of this safety device on the performance of the car should still be well understood so that any possible redesign to mitigate them can be taken into account in next year's rule regulations.
Keywords: automotive aerodynamics, halo device, downforce, engine intake
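The "especially dramatic" growth of the downforce deficit with speed follows directly from F = ½·ρ·v²·(C_L·A): any fixed fractional loss in the wing's effective C_L·A scales with the square of speed. A minimal sketch; the C_L·A values and the assumed 8% halo-induced reduction are illustrative, not figures from the paper:

```python
def downforce(v_mps, cl_a, rho=1.225):
    """Aerodynamic downforce in newtons: F = 0.5 * rho * v^2 * (C_L * A)."""
    return 0.5 * rho * cl_a * v_mps ** 2

# Hypothetical effective lift-coefficient-times-area values; the halo is
# assumed here to cut the rear wing's C_L*A by about 8%.
CLA_NO_HALO, CLA_HALO = 3.0, 2.76
losses = {v: downforce(v, CLA_NO_HALO) - downforce(v, CLA_HALO)
          for v in (30, 60, 90)}   # speeds in m/s
```

Tripling the speed multiplies the absolute downforce loss by nine, which is why the with/without-halo curves diverge so sharply at the top of the speed range.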
Procedia PDF Downloads 108
35 The Effectiveness of Probiotics in the Treatment of Minimal Hepatic Encephalopathy Among Patients with Cirrhosis: An Expanded Meta-Analysis
Authors: Erwin Geroleo, Higinio Mappala
Abstract:
Introduction: Overt hepatic encephalopathy (OHE) is the most dreaded outcome of liver cirrhosis. Aside from the triggering factors already known to precipitate OHE, there is growing evidence that an altered gut microbiota profile (dysbiosis) can also trigger it. Minimal hepatic encephalopathy (MHE) is the mildest form of hepatic encephalopathy (HE), affecting about one-third of patients with cirrhosis and close to 80% of patients with cirrhosis, and manifests as abnormalities in central nervous system function. Since these symptoms are subclinical, most patients are not treated to prevent OHE. Modulation of the gut microbiota has been evaluated by several studies as a therapeutic option for MHE, especially for decreasing ammonia levels and thus preventing progression to OHE. Objectives: This study aims to evaluate the efficacy of probiotics in reducing ammonia levels in patients with minimal hepatic encephalopathy and to determine whether probiotics have a role in preventing progression to overt hepatic encephalopathy in adult patients with MHE. Methods and analysis: The literature search was restricted to human studies in adult subjects from 2004 to 2022. The Jadad score was used to assess the final studies included. Eight (8) studies were included. Cochrane's RevMan Web, the fixed-effects model and the Z-test were used in the overall analysis of the outcomes. A p value of less than 0.0005 was considered statistically significant. Results: These results show that probiotics significantly lower ammonia levels in cirrhotic patients with MHE, and that the use of probiotics significantly prevents the progression of MHE to OHE. The overall risk-of-bias graph indicates a low risk of publication bias among the studies included in the meta-analysis. Main findings: Plasma ammonia concentration was lower among participants treated with probiotics (p < 0.00001).
On average, the ammonia level of the probiotics group was lower by 13.96 μmol/L. The overall risk of developing overt hepatic encephalopathy in the probiotics group was decreased by 15% compared to the placebo group. Conclusion: The analysis showed that, compared with placebo, probiotics can decrease serum ammonia, may improve MHE and may prevent OHE.
Keywords: minimal hepatic encephalopathy, probiotics, liver cirrhosis, overt hepatic encephalopathy
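The fixed-effects pooling and Z-test used above amount to an inverse-variance weighted average of per-study effects. A minimal sketch with invented per-study mean differences in ammonia (the abstract reports only the pooled figure, not study-level numbers):

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, se_pooled

# Hypothetical per-study mean differences (umol/L, probiotics minus placebo)
# and their standard errors; negative means probiotics lowered ammonia.
effects = [-15.2, -12.1, -14.8, -13.0]
ses = [2.0, 3.1, 2.5, 1.8]
pooled, se = fixed_effect_pool(effects, ses)
z = pooled / se   # Z-test statistic for the pooled effect
```

Precise studies (small standard errors) dominate the pooled estimate, which is the defining behaviour of the fixed-effects model named in the abstract.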
Procedia PDF Downloads 44
34 Impacts on Marine Ecosystems Using a Multilayer Network Approach
Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade
Abstract:
Bays, estuaries and coastal ecosystems are some of the most used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor a socio-ecological system in Brazil, and to model and simulate it through a multilayer network representing a DPSIR structure (Drivers, Pressures, States, Impacts, Responses), considering the concept of ecosystem-based management to support decision-making under the National/State/Municipal Coastal Management policy. This approach accounts for several interferences and can represent a significant advance in several scientific aspects. The main objective of this paper is the coupling of three different types of complex networks, the first being an ecological network, the second a social network, and the third a network of economic activities, in order to model the marine ecosystem. Multilayer networks comprise two or more "layers", which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect the various layers; for example, the dispersion of individuals between two patches affects the network structure of both patches. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may include multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes among themselves. The edge set includes the familiar intralayer edges as well as interlayer ones, which connect state nodes between layers.
The methodology, applied to an existing case, uses flow cytometry and models ecological relationships (trophic and non-trophic) following fuzzy theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology allows the influence of different factors to be considered and helps weigh their contributions to the decision-making process.
Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management
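The four ingredients listed in (i)-(iv) map directly onto a small data structure: a state node is a (physical node, layer) pair, and the intralayer/interlayer split falls out of a layer comparison. A sketch with invented entities for the three layers named in the abstract:

```python
# State nodes are (physical_node, layer) pairs; edges connect state nodes
# within a layer (intralayer) or across layers (interlayer).
layers = ("ecological", "social", "economic")

edges = {
    (("plankton", "ecological"), ("fish", "ecological")),  # trophic link
    (("fisher", "social"), ("coop", "social")),            # social tie
    (("coop", "economic"), ("market", "economic")),        # economic flow
    (("fish", "ecological"), ("fisher", "social")),        # interlayer coupling
    (("coop", "social"), ("coop", "economic")),            # same entity, two layers
}

intralayer = {e for e in edges if e[0][1] == e[1][1]}
interlayer = edges - intralayer
physical_nodes = {state[0] for e in edges for state in e}
```

Note that "coop" appears as two distinct state nodes (social and economic) backed by one physical node, which is exactly the manifestation-per-layer idea of point (iii).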
Procedia PDF Downloads 112
33 Performance Evaluation of Routing Protocols in Vehicular Adhoc Networks
Authors: Salman Naseer, Usman Zafar, Iqra Zafar
Abstract:
This study explores Vehicular Adhoc Networks (VANETs), one domain of the Mobile Adhoc Network (MANET), in rural and urban scenarios. A VANET provides wireless communication between vehicles and also with roadside units. The Federal Communications Commission of the United States of America has allocated 75 MHz of spectrum in the 5.9 GHz frequency band for dedicated short-range communications (DSRC), specifically designed to enhance road safety applications and entertainment/information applications. Several vehicular projects, viz. California PATH, the Car 2 Car Communication Consortium, ETSI, and the IEEE 1609 working group, have already been conducted to improve overall road safety or traffic management. After a critical literature review, a selection of routing protocols was determined, and their performance was considered in urban and rural scenarios. Numerous VANET routing protocols were applied to carry out the current research, and their evaluation was conducted through simulation using the performance metrics of throughput and packet drop. Excel and Google graph API tools were used for plotting graphs from the simulation results in order to compare the selected routing protocols with each other. In addition, the sum of the output from each scenario was computed to clearly present the divergence in results. The findings of the current study show that DSR gives enhanced performance, with low packet drop and high throughput, compared to AODV and DSDV in urban congested areas and in rural environments. On the other hand, in low-density areas, AODV gives better results than DSR. The worth of the current study may be judged by noting that the information exchanged between vehicles is useful for comfort, safety, and entertainment.
Furthermore, the communication system's performance depends on the way routing is done in the network and, moreover, on the routing protocols implemented in the network. The above-presented results lead to policy implications and develop our understanding of the broader spectrum of VANETs.
Keywords: AODV, DSDV, DSR, Adhoc network
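The two performance metrics used in the comparison are straightforward to compute from per-run counters. A minimal sketch; the packet counts for DSR and AODV are invented for illustration, not the study's simulation outputs:

```python
def throughput_kbps(bytes_received, sim_time_s):
    """Average throughput in kilobits per second over a simulation run."""
    return bytes_received * 8 / 1000 / sim_time_s

def packet_drop_ratio(sent, received):
    """Fraction of packets lost in transit."""
    return (sent - received) / sent

# Hypothetical (sent, received) packet counts for one urban scenario run.
runs = {"DSR": (1200, 1104), "AODV": (1200, 950)}
drops = {proto: packet_drop_ratio(s, r) for proto, (s, r) in runs.items()}
```

With these invented counters DSR shows the lower drop ratio, mirroring the qualitative ordering the study reports for congested urban areas.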
Procedia PDF Downloads 285
32 The On-Board Critical Message Transmission Design for Navigation Satellite Delay/Disruption Tolerant Network
Authors: Ji-yang Yu, Dan Huang, Guo-ping Feng, Xin Li, Lu-yuan Wang
Abstract:
The navigation satellite network, especially the Beidou MEO constellation, can relay data effectively with wide coverage and is widely applied in navigation, detection, and positioning. But the constellation has not been completed, and the number of satellites in orbit is not enough to cover the earth, which makes data relay disrupted or delayed during the transition process. The data-relay function needs to tolerate this delay or disruption to some extent, which makes the Beidou MEO constellation a delay/disruption-tolerant network (DTN). Traditional DTN designs mainly employ the relay table as the basis of data path schedule computing. But in practical applications, especially in critical conditions such as wartime or when heavy losses are inflicted on the constellation, some of the nodes may become invalid, and the traditional DTN design could then be useless. Furthermore, when transmitting a critical message in the navigation system, the maximum-priority strategy is used, but the nodes still query the relay table to design the path, which makes the delay longer than minutes. Under these circumstances, a function is needed that can compute the optimum data path on board in real time according to the constellation states. An on-board critical message transmission design for the navigation satellite delay/disruption-tolerant network (DTN) is proposed, according to the characteristics of the navigation satellite network. With real-time computation of the parameters of the network links, the least-delay transition path is deduced to retransmit the critical message in urgent conditions. First, the DTN model for the constellation is established based on a time-varying matrix (TVM) instead of a time-varying graph (TVG); then, the least-transition-delay data path is deduced with the parameters of the current node; at last, the critical message transits to the next best node.
For on-board real-time computing, the time delay and misjudgments of constellation states at ground stations are eliminated, and the residual information channel of each node can be used flexibly. Compared with the minutes of delay of a traditional DTN, the proposed design transmits the critical message in seconds, which improves the re-transition efficiency. The hardware is implemented in an FPGA based on the proposed model, and tests prove its validity.
Keywords: critical message, DTN, navigation satellite, on-board, real-time
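A least-delay path over time-varying links can be sketched as an earliest-arrival Dijkstra search, where each contact has an availability time and a transit delay and a message may wait at a node for a link to open. The four-node topology and its contact times are invented; this illustrates the idea of routing over a time-varying link table rather than the paper's specific TVM formulation:

```python
import heapq

def earliest_arrival(links, src, dst, t0=0.0):
    """Earliest-arrival (least-delay) routing over time-varying links.
    links[u] maps neighbour v -> (available_from, transit_delay): the
    contact can only start once the link becomes available."""
    best = {src: t0}
    heap = [(t0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale heap entry
        for v, (avail, delay) in links.get(u, {}).items():
            arrive = max(t, avail) + delay  # wait for the contact if needed
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(heap, (arrive, v))
    return float("inf")

# Toy 4-node constellation: the hop via B must wait for a late contact,
# so the least-delay route is A -> C -> D despite its longer hop delays.
links = {
    "A": {"B": (0, 1), "C": (0, 3)},
    "B": {"D": (10, 1)},   # B-D contact only opens at t = 10
    "C": {"D": (0, 4)},
}
```

Because arrival times are monotone under waiting, the greedy Dijkstra expansion remains correct, which is what makes this computation cheap enough to run on board in real time.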
Procedia PDF Downloads 340
31 Predicting Child Attachment Style Based on Positive and Safe Parenting Components and Mediating Maternal Attachment Style in Children With ADHD
Authors: Alireza Monzavi Chaleshtari, Maryam Aliakbari
Abstract:
Objective: The aim of this study was to investigate the prediction of child attachment style based on a positive and safe combination parenting method, mediated by maternal attachment styles, in children with attention deficit hyperactivity disorder. Method: The design of the present study was descriptive, using correlation and structural equations, and applied in terms of purpose. The population of this study includes all children with attention deficit hyperactivity disorder living in Chaharmahal and Bakhtiari province and their mothers. The sample includes 165 children with attention deficit hyperactivity disorder in Chaharmahal and Bakhtiari province together with their mothers, selected by purposive sampling based on the inclusion criteria. The obtained data were analyzed with descriptive and inferential statistics. In the descriptive section, the statistical indices of mean, standard deviation, and frequency distribution tables and graphs were used. In the inferential section, according to the nature of the hypotheses and objectives of the research, the data were analyzed using Pearson correlation coefficients, the bootstrap test and a structural equation model. Findings: The results of structural equation modeling showed that the research model fits and that a positive and safe combination parenting style, mediated by the maternal attachment style, has an indirect effect on the child's attachment style. Also, a positive and safe combined parenting style has a direct relationship with child attachment style, as does the maternal attachment style. Conclusion: The results and findings of the present study show that there is a significant relationship between positive and safe combination parenting methods and the attachment styles of children with attention deficit hyperactivity disorder, with maternal attachment style as mediator.
Therefore, it can be expected that parents using a positive and safe combination parenting method can effectively foster secure attachment in children with attention deficit hyperactivity disorder.
Keywords: child attachment style, positive and safe parenting, maternal attachment style, ADHD
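The bootstrap test of an indirect (mediated) effect can be sketched with a crude product-of-coefficients estimate on simulated data. This is an illustration of the method only, not the study's model: it omits covariates and the full SEM machinery, and the variable names and coefficients are invented.

```python
import random

def slope(u, v):
    """Ordinary least-squares slope of v regressed on u."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

def indirect_effect(x, m, y):
    """Crude product-of-coefficients indirect effect a*b:
    a = slope of M on X, b = slope of Y on M (no covariates)."""
    return slope(x, m) * slope(m, y)

random.seed(1)
# Simulated mediation chain: X (parenting) -> M (maternal attachment)
# -> Y (child attachment); the true indirect effect is 0.6 * 0.5 = 0.3.
x = [random.gauss(0, 1) for _ in range(200)]
m = [0.6 * xi + random.gauss(0, 1) for xi in x]
y = [0.5 * mi + random.gauss(0, 1) for mi in m]

# Percentile bootstrap: resample cases, re-estimate the indirect effect.
boots = []
for _ in range(500):
    s = random.choices(range(200), k=200)
    boots.append(indirect_effect([x[i] for i in s],
                                 [m[i] for i in s],
                                 [y[i] for i in s]))
boots.sort()
ci = (boots[12], boots[487])   # approximate 95% percentile interval
```

An interval excluding zero is the usual bootstrap evidence for mediation, matching the role the bootstrap test plays in the inferential analysis described above.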
Procedia PDF Downloads 65
30 Structuring Paraphrases: The Impact Sentence Complexity Has on Key Leader Engagements
Authors: Meaghan Bowman
Abstract:
Soldiers are taught about the importance of effective communication with repetition of the phrase, “Communication is key.” They receive training in preparing for, and carrying out, interactions between foreign and domestic leaders to gain crucial information about a mission. These interactions are known as Key Leader Engagements (KLEs). For the training of KLEs, doctrine mandates the skills needed to conduct these “engagements,” such as how to behave appropriately, identify key leaders, and employ effective strategies. Army officers in training learn how to confront leaders, what information to gain, and how to ask questions respectfully. Unfortunately, soldiers rarely learn how to formulate questions optimally. Since less complex questions are easier to understand, we hypothesize that semantic complexity affects content understanding, and that age and education level may affect one's ability to form paraphrases and judge their quality. In this study, we looked at paraphrases of queries as well as judgments of both the paraphrases' naturalness and their semantic similarity to the query. Queries were divided into three complexity categories based on the number of relations (the first number) and the number of knowledge graph edges (the second number). Two crowd-sourced tasks were completed by Amazon volunteer participants, also known as turkers, to answer the research questions: (i) are more complex queries harder to paraphrase and judge, and (ii) do age and education level affect the ability to understand complex queries? We ran statistical tests as follows: MANOVA for query understanding, and two-way ANOVA to understand the relationship between query complexity and education and age. A probe of the number of given-level queries selected for paraphrasing by crowd-sourced workers in seven age ranges yielded promising results. We found significant evidence that age plays a role and marginally significant evidence that education level plays a role.
These preliminary tests, with output p-values of 0.0002 and 0.068, respectively, suggest the importance of content understanding in a communication skill set. This basic ability to communicate, which may differ by age and education, permits reproduction and quality assessment and is crucial in training soldiers for effective participation in KLEs.
Keywords: engagement, key leader, paraphrasing, query complexity, understanding
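The relation-count/edge-count categorisation of queries can be sketched as a simple bucketing function. The thresholds and example queries below are invented for illustration; the paper's actual category boundaries are not given in the abstract:

```python
def complexity_category(n_relations, n_edges):
    """Bucket a query into low/medium/high complexity from its relation
    count and knowledge-graph edge count (hypothetical thresholds)."""
    score = n_relations + n_edges
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# Invented queries annotated as (text, n_relations, n_edges).
queries = [
    ("Who leads the village?", 1, 1),
    ("Which leader controls the road to the market?", 2, 2),
    ("Whose allies supply the faction holding the bridge?", 3, 3),
]
cats = [complexity_category(r, e) for _, r, e in queries]
```

Grouping queries this way is what makes the downstream MANOVA/ANOVA comparisons across complexity levels possible.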
Procedia PDF Downloads 160
29 An Assessment of the Impacts of Agro-Ecological Practices towards the Improvement of Crop Health and Yield Capacity: A Case of Mopani District, Limpopo, South Africa
Authors: Tshilidzi C. Manyanya, Nthaduleni S. Nethengwe, Edmore Kori
Abstract:
The UNFCCC, FAO, GCF, IPCC and other global structures advocate for agro-ecology to address food security and sovereignty. However, most of the expected outcomes of agro-ecology have not been empirically tested for universal application. Crop health is theorised to increase on agro-ecological farms and decrease on conventional farms. Increased crop health means increased carbon sequestration and thus less CO2 in the atmosphere, in line with the view that global warming is anthropogenically enhanced through GHG emissions. Agro-ecology mainly affects crop health, soil carbon content and yield on the cultivated land. Economic sustainability is directly related to yield capacity, which is theorized to increase by 3-10% over a span of 3-10 years as a result of agro-ecological implementation. This study aimed to empirically assess the practicality and validity of these assumptions. The study utilized mainly GIS and RS techniques to assess the effectiveness of agro-ecology in crop health improvement from satellite images. The assessment involved a longitudinal study (2013-2015) of the changes that occur after a farm retrofits from conventional agriculture to agro-ecology. The assumptions guided the objectives of the study. For each objective, an agro-ecological farm was compared with a conventional farm in the same climatic conditions and occupying the same general location. Crop health was assessed using satellite images analysed through ArcGIS and ERDAS. This entailed the production of NDVI and reclassified outputs of the farm area. The NDVI ranges of the entire period of study were then compared in a stacked histogram for each farm to assess trends. Yield capacity was calculated from the production records acquired from the farmers and plotted in a stacked bar graph as percentages of a total for each farm.
The results of the study showed decreasing crop health trends over 80% of the conventional farms and an increase over 80% of the organic farms. Yield capacity showed similar patterns to those of crop health. The study thus showed that agro-ecology is an effective strategy for crop-health improvement and yield increase.
Keywords: agro-ecosystem, conventional farm, dialectical, sustainability
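NDVI, the index behind the crop-health assessment above, is a simple per-pixel band ratio: (NIR − Red) / (NIR + Red), bounded in [−1, 1], with higher values indicating denser, healthier vegetation. A minimal sketch with hypothetical reflectances for a healthy canopy versus bare soil:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - Red) / (NIR + Red), in [-1, 1]."""
    return (nir - red) / (nir + red)

# Hypothetical band reflectances: vigorous crops reflect strongly in the
# near-infrared and absorb red; bare soil reflects both about equally.
healthy = ndvi(nir=0.50, red=0.08)
soil = ndvi(nir=0.30, red=0.25)
```

Applied over every pixel of a satellite scene (as ArcGIS/ERDAS do), these values are what get binned into the NDVI ranges compared in the study's stacked histograms.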
Procedia PDF Downloads 216
28 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective
Authors: Noury Bakrim
Abstract:
Formalization is indeed foundational to Mathematical Linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach to language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules, and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within Category Mathematics of the Ergobrain: our computation proposal of Universal Grammar rules per cycle or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity, based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. This enables us to propose an extension of minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion relying on combinatorial mapping of rule hierarchies on the external Entax of the Event Sequence.
Therefore, to define combinatorial maps as the language merge of features and combinatorial hierarchical restrictions (governing, commanding, and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on a graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie, with edges from Fe to Hie, such that for every combinatorial group there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps), then under disjunction edges H, such that for the combinatorial map P belonging to Hie exponentiation by intersection there are pullbacks and projections equal to the restriction maps RM₁ and RM₂. The model will draw on experimental biomathematics as well as structural frames with focus on Amazigh and English (cases from phonology/micro-semantics and syntax), shifting from structure to event (especially the Amazigh formant principle, resolving its morphological heterogeneity).
Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence
Procedia PDF Downloads 125
27 A Patient Passport Application for Adults with Cystic Fibrosis
Authors: Tamara Vagg, Cathy Shortt, Claire Hickey, Joseph A. Eustace, Barry J. Plant, Sabin Tabirca
Abstract:
Introduction: Paper-based patient passports have been used advantageously for older patients, patients with diabetes, and patients with learning difficulties. However, these passports can experience issues with data security, patients forgetting to bring the passport, patients being over-encumbered, and uncertainty over who is responsible for entering and managing the data in the passport. These issues could be resolved by transferring the paper-based system to a convenient platform such as a smartphone application (app). Background: Life expectancy for some Cystic Fibrosis (CF) patients is rising, and as such new complications and procedures are predicted. Subsequently, there is a need for education and management interventions that can benefit CF adults. This research proposes a CF patient passport that records basic medical information in a smartphone app, allowing CF adults access to their basic medical information. Aim: To provide CF patients with their basic medical information via mobile multimedia so that they can receive care when traveling abroad or between CF centres. Moreover, by recording their basic medical information, CF patients may become more aware of their own condition and more active in their health care. Methods: The app was designed by a CF multidisciplinary team to be a lightweight reflection of a hospital patient file. The passport app was created using PhoneGap so that it can be deployed for both Android and iOS devices. Data entered into the app is encrypted and stored locally only. The app is password protected and includes the ability to set reminders and a graph to visualise weight and lung function over time. The app was introduced to seven participants as part of a stress test. The participants were asked to test the performance and usability of the app and report any issues identified.
Results: Feedback and suggestions received via this testing included the ability to reorder the list of clinical appointments by date, an open format for recording dates (in the event specifics are unknown), and a drop-down menu for data that is difficult to enter (such as bugs found in mucus). The app was found to be usable and accessible and is now being prepared for a pilot study with adult CF patients. Conclusions: It is anticipated that such an app will be beneficial to CF adult patients when travelling abroad and between CF centres.
Keywords: Cystic Fibrosis, digital patient passport, mHealth, self management
Procedia PDF Downloads 253
26 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description
Authors: Branimir Jurun, Elza Jurun
Abstract:
The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of wave pressure created by increasing wind speed, obtained with the instrument 'Quattuor 45' in the investigated area. The study begins with theoretical considerations and numerous up-to-date investigations related to waves approaching the coast. The detailed schematic view of the instrument is enriched with pictures in ground plan and side view. Horizontal stability of the instrument is achieved by mooring, which relies on two concrete blocks. Vertical wave peak monitoring is ensured by one float above the instrument. The synthesis of horizontal stability and vertical wave peak monitoring allows a representative database for wave pressure measurement to be created. The instrument 'Quattuor 45' is named according to the way the database is received. Namely, the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and an anemometer wind speed sensor. The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card each second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, after which detailed processing is carried out in MS Excel. The result of the wave pressure measurement is expressed in the unit of measurement kN/m². This paper also suggests a graphical presentation of the results using a multi-line graph: wave pressure is presented on the left vertical axis, wind speed on the right vertical axis, and the time of measurement on the horizontal axis. The paper proposes an algorithm for wind speed measurements, showing the results for two characteristic winds in the Adriatic Sea, called 'Bura' and 'Jugo'.
The first of them is the northern wind that reaches high speeds, causing low and extremely steep waves, where the pressure of the wave is relatively weak. On the other hand, the southern wind 'Jugo' has a lower speed than the northern wind, but due to its constant duration and constant speed maintenance, it causes extremely long and high waves that cause extremely high wave pressure.Keywords: instrument, measuring unit, waves pressure metering, wind seed measurement
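The force-to-pressure conversion described above can be sketched in a few lines. This is an illustrative reconstruction only: the sensing-plate area and the load-cell readings are assumed values, not taken from the abstract.

```python
# Hypothetical sketch: convert the four load-cell readings (in newtons)
# logged each second by the 'Quattuor 45' into a wave pressure in kN/m².
# The sensing-plate area is an assumed parameter, not given in the abstract.

def wave_pressure_kn_per_m2(cell_forces_n, plate_area_m2):
    """Total force on the plate divided by its area, expressed in kN/m²."""
    total_force_n = sum(cell_forces_n)
    return (total_force_n / plate_area_m2) / 1000.0  # N/m² (Pa) -> kN/m²

# Example: four cells reading 300 N each on an assumed 0.2 m² plate.
pressure = wave_pressure_kn_per_m2([300, 300, 300, 300], 0.2)
print(round(pressure, 1))  # 6.0 kN/m²
```

Plotting the resulting series against the anemometer readings on twin vertical axes would then reproduce the multi-line graph the paper proposes.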
Procedia PDF Downloads 196
25 Fraud in the Higher Educational Institutions in Assam, India: Issues and Challenges
Authors: Kalidas Sarma
Abstract:
Fraud is a social problem that changes with social change, and it has both regional and global impact. The introduction of a private domain in higher education alongside public institutions has led to the commercialization of higher education, encouraging an unprecedented mushrooming of private institutions and resulting in fraudulent activities in higher educational institutions in Assam, India. Presently, fraud has been noticed in in-service promotions and in fake entry qualifications, with teachers at different levels of the workplace using fake master's, Master of Philosophy, and Doctor of Philosophy degree certificates. The aim of the study is to identify grey areas in the maintenance of quality in higher educational institutions in Assam and to draw the contours for planning and implementation. The study is based on both primary and secondary data, collected through a questionnaire and through the Right to Information Act, 2005. In Assam, there are 301 undergraduate and graduate colleges distributed over 27 administrative districts, with 11,000 college teachers. A total of 421 college teachers from the 14 respondent colleges have been taken for analysis. The collected data have been analyzed using PHP (Hypertext Preprocessor) with MySQL and the Google Maps Application Programming Interface (API). Graphs have been generated using the open-source tool Chart.js. Spatial distribution maps have been generated with the help of the geo-references of the colleges. The results show: (i) violations of the University Grants Commission's (UGC's) regulations for the award of M.Phil./Ph.D. degrees are clearly exhibited; (ii) there is a gap between the apex regulatory bodies of higher education at the national as well as the state level in checking fraud;
(iii) mala fide 'No Objection Certificates' (NOCs) issued by the Government of Assam have played a pivotal role in the occurrence of fraudulent practices in higher educational institutions of Assam; (iv) violation of the verdict of the Hon'ble Supreme Court of India regarding the territorial jurisdiction of universities for the award of Ph.D. and M.Phil. degrees through distance mode/study centres is also a responsible factor in the spread of these academic frauds in Assam and other states. The challenges and the mitigation of these issues have been discussed.
Keywords: Assam, fraud, higher education, mitigation
Procedia PDF Downloads 167
24 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns; for a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bilateral network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following on from this question, we ask how these features can be used to predict an agent's choice to comment on one video instead of another, considering the characteristics of the commentators, videos, topics, channels, and recommendations. We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented on. The interest lies in discovering features which have not classically been considered as markers of popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of the 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video.
The creation of a link will be explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data are relevant for the period 2020-2021 and focus on the French YouTube environment. From this set of 391,588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1,000 subscribers and more than 4,000 hours of viewing time during the last twelve months). In the end, we have a data set of 128,462 videos spread across 4,093 channels. Based on these videos, we have a data set of 1,032,771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment each, and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
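The bipartite commentator-video network described above can be sketched from a list of comment events. This is a minimal illustration with invented commentator and video names, not the authors' data or their ERGM estimation.

```python
from collections import Counter, defaultdict

# Minimal sketch (hypothetical data): build the unweighted bipartite
# commentator-video network from (commentator, video) comment events,
# keeping one link per pair, in the spirit of the unique-comment sub-sample.

def bipartite_links(comments):
    """Return the set of unique commentator-video links and per-pair counts."""
    pair_counts = Counter(comments)
    links = set(pair_counts)
    return links, pair_counts

comments = [("alice", "v1"), ("alice", "v1"), ("bob", "v1"), ("bob", "v2")]
links, counts = bipartite_links(comments)

# Degree of each video node in the unweighted bipartite graph.
video_degree = defaultdict(int)
for _, video in links:
    video_degree[video] += 1
print(video_degree["v1"])  # 2 (alice and bob both commented on v1)
```

An ERGM would then model the probability of each such link as a function of video covariates (duration, likes, views, etc.); the graph construction above is only the preparatory step.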
Procedia PDF Downloads 68
23 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger
Authors: Hany Elsaid Fawaz Abdallah
Abstract:
This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values in the following ranges of dimensionless parameters: plate corrugation angle from 0° to 60°, Reynolds number from 10,000 to 40,000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons has been chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. A linear activation function was used at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. In order to accelerate the ANN training, loss function minimization may be achieved by the adaptive moment estimation algorithm (ADAM). The 'MinMax' normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique is applied to the network using the new data. This procedure was repeated until loss function convergence was achieved, or for 4,000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source ANN libraries such as scikit-learn, TensorFlow, and Keras. Mean absolute percentage errors of 9.4% for the Nusselt number and 8.2% for the pressure drop have been achieved with the ANN model. Therefore, higher accuracy compared to the generalized correlations was achieved.
The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations
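The network structure described above (MinMax input scaling, {12-8-6} ReLU hidden layers, linear output) can be sketched as a plain forward pass. The weights below are random placeholders, not the authors' trained Keras model, so the numerical outputs are meaningless; only the architecture is illustrated.

```python
import random

# Illustrative sketch only: forward pass of a {12-8-6} ReLU network with a
# linear output layer, plus 'MinMax' input scaling, as described above.
# Weights are random placeholders, not the authors' trained model.

def minmax(x, lo, hi):
    """Scale each input feature to [0, 1] using its known range."""
    return [(v - l) / (h - l) for v, l, h in zip(x, lo, hi)]

def dense(x, weights, biases, relu=True):
    out = [sum(w * v for w, v in zip(row, x)) + b
           for row, b in zip(weights, biases)]
    return [max(0.0, v) for v in out] if relu else out

random.seed(0)
# inputs (angle, Re, pitch/height, Pr) -> hidden {12-8-6} -> (Nu, dP)
sizes = [4, 12, 8, 6, 2]
layers = [([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
           [0.0] * n_out) for n_in, n_out in zip(sizes, sizes[1:])]

x = minmax([30.0, 25000.0, 2.5, 100.0],
           [0.0, 10000.0, 1.0, 0.7],     # parameter range minima from the text
           [60.0, 40000.0, 4.0, 200.0])  # parameter range maxima from the text
for i, (w, b) in enumerate(layers):
    x = dense(x, w, b, relu=(i < len(layers) - 1))  # linear at the output
print(len(x))  # 2 predictions: Nusselt number and pressure drop
```

In practice the same architecture would be expressed in a few lines of Keras with the ADAM optimizer, as the abstract states.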
Procedia PDF Downloads 86
22 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information flow (e.g., block diagram or signal flow graph) paradigm for the design and simulation of microelectromechanical systems (MEMS)-based devices makes it easy to model MEMS devices using causal transfer functions and to interface them with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g., gap-varying MEMS capacitor structures) depend not only on the component, but also on the specific excitation mode (e.g., voltage or charge actuation). In contrast, the energy flow modeling paradigm, in terms of generalized across-through variables, offers an acausal perspective, clearly separating the physical model from the boundary conditions. This promotes reusability and the assembly of MEMS devices from primitive physical models of elementary structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation, and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities, and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which present the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy flow system models.
Test bench simulations of the library models compare favorably both with analytical results and with more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronics integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
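The pull-in voltage mentioned above is the classical analytical benchmark such library models are checked against. A hedged sketch, for the textbook one-degree-of-freedom gap-varying capacitor with a linear spring; the parameter values are illustrative, not taken from the paper's devices.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

# Hedged sketch: classical pull-in voltage of a 1-DOF gap-varying MEMS
# capacitor with a linear spring, i.e. the stability loss that the library
# computes numerically. Parameter values below are invented examples.

def pull_in_voltage(k, gap, area):
    """V_pi = sqrt(8*k*g0^3 / (27*eps0*A)); instability occurs at x = g0/3."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# Spring 1 N/m, initial gap 2 um, plate 100 um x 100 um.
v_pi = pull_in_voltage(k=1.0, gap=2e-6, area=100e-6 * 100e-6)
print(round(v_pi, 2))  # on the order of a few volts for these values
```

A Simscape large-signal simulation of the same structure should lose stability near this voltage, which is how the analytical comparison in the abstract would be made.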
Procedia PDF Downloads 134
21 Layer-By-Layer Deposition of Poly (Amidoamine) and Poly (Acrylic Acid) on Grafted-Polylactide Nonwoven with Different Surface Charge
Authors: Sima Shakoorjavan, Mahdieh Eskafi, Dawid Stawski, Somaye Akbari
Abstract:
In this study, the dendritic material poly(amidoamine) (PAMAM) and poly(acrylic acid) (PAA), as polycation and polyanion, were deposited on surface-charged polylactide (PLA) nonwoven to study the relationship between the dye absorption capacity of layered PLA and the number of deposited layers. To produce negatively charged PLA, acrylic acid (AA) was grafted on the PLA surface (PLA-g-AA) through a chemical redox reaction with a strong oxidizing agent. Spectroscopy analysis, water contact measurement, and FTIR-ATR analysis confirm the successful grafting of AA on the PLA surface through the chemical redox reaction method. In detail, an increase in dye absorption percentage by 19% and the immediate absorption of water droplets confirm the hydrophilicity of the PLA-g-AA surface, and the presence of a new carbonyl band at 1530 cm⁻¹ and a wide hydroxyl peak between 3680 and 3130 cm⁻¹ confirm AA grafting. In addition, PLA, as a linear polyester, can undergo aminolysis, which is the cleavage of ester bonds and their replacement with amide bonds when exposed to an aminolysis agent. Therefore, to produce positively charged PLA, PAMAM, as an amine-terminated dendritic material, was introduced to the PLA molecular chains under different conditions: (1) at 60 °C for 0.5, 1, 1.5, and 2 hours of aminolysis, and (2) at room temperature (RT) for 1, 2, 3, and 4 hours of aminolysis. Weight changes and spectrophotometer measurements showed maxima in the weight-gain graph and the K/S value curve, indicating the highest PAMAM attachment at 60 °C for 1 hour and at RT for 2 hours, which are considered the optimum conditions. Also, the emerging new peak around 1650 cm⁻¹, corresponding to N-H bending vibration, and the wide double peak at around 3670-3170 cm⁻¹, corresponding to N-H stretching vibration, confirm PAMAM attachment under the selected optimum conditions. Subsequently, depending on the initial surface charge of the grafted PLA, layer-by-layer (LbL) deposition was performed, starting with PAA or PAMAM.
FTIR-ATR results confirm chemical changes in the samples due to deposition of the first layer (PAA or PAMAM). Generally, spectroscopy analysis indicated that an increase in layer number reduced the dye absorption capacity. This may be due to partial deposition of each new layer on the previously deposited layer; therefore, more PAMAM is available in the first layer than in the third. In detail, for layered PLA with LbL deposition starting from the negatively charged surface, having PAMAM as the top layer (PLA-g-AA/PAMAM) showed the highest absorption of both the cationic and the anionic model dye.
Keywords: surface modification, layer-by-layer technique, dendritic materials, PAMAM, dye absorption capacity, PLA nonwoven
Procedia PDF Downloads 83
20 Cluster-Based Exploration of System Readiness Levels: Mathematical Properties of Interfaces
Authors: Justin Fu, Thomas Mazzuchi, Shahram Sarkani
Abstract:
A key factor in technological immaturity in defense weapons acquisition is a lack of understanding of critical integrations at the subsystem and component level. To address this shortfall, recent research combines integration readiness level (IRL) with technology readiness level (TRL) to form a system readiness level (SRL). SRL can be enriched with more robust quantitative methods to provide the program manager with a useful tool prior to committing to major weapons acquisition programs. This research harnesses previous mathematical models based on graph theory, Petri nets, and tropical algebra, and proposes a modification of the desirable SRL mathematical properties such that a tightly integrated subsystem (with a multitude of interfaces) can display a lower SRL than an inherently less coupled subsystem. The synthesis of these methods informs an improved decision tool for the program manager committing to expensive technology development. This research also ties the separately developed manufacturing readiness level (MRL) into the network representation of the system and addresses shortfalls in previous frameworks, including the lack of integration weighting and the over-importance of a single extremely immature component. Tropical algebra (based on the minimum of a set of TRLs or IRLs) allows one low IRL or TRL value to diminish the SRL of the entire system, which may not reflect actuality if that component is not critical or tightly coupled. Integration connections can therefore be weighted according to importance, and readiness levels are modified to a cardinal scale (based on an analytic hierarchy process). The importance of an integration arc depends on the connected nodes and on the additional integration arcs connected to those nodes. Lack of integration is represented not by zero but by a perfect integration maturity value; naturally, the importance (or weight) of such an arc would be zero.
To further explore the impact of grouping subsystems, a multi-objective genetic algorithm is then used to find various clusters or communities that can be optimized for the most representative subsystem SRL. This novel calculation is then benchmarked through simulation and against past defense acquisition program data, focusing on the newly introduced Middle Tier of Acquisition (rapid fielding of prototypes). The model remains a relatively simple, accessible tool, but at higher fidelity and validated with past data, for the program manager deciding major defense acquisition program milestones.
Keywords: readiness, maturity, system, integration
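The weighted tropical (min-based) composition described above can be sketched in a few lines. This is a hedged illustration of the idea, with an invented weighting rule and made-up TRL/IRL values on a cardinal 0-1 scale; it is not the authors' exact formulation.

```python
# Hedged sketch of the modified SRL idea: a weighted min (tropical)
# composition in which an immature but loosely coupled interface no longer
# dominates the subsystem score. Node TRLs, arc IRLs, and weights are
# invented examples; the weighting rule below is one plausible choice.

def subsystem_srl(trls, arcs):
    """
    trls: {node: TRL on a cardinal 0-1 scale}
    arcs: list of (node_a, node_b, irl, weight); weight 0 means no real
    coupling, which is treated as perfect integration maturity.
    Returns the min over node TRLs and weight-adjusted arc IRLs.
    """
    scores = list(trls.values())
    for a, b, irl, w in arcs:
        # weight 1 -> raw IRL; weight 0 -> 1.0 (perfect, so it never binds)
        effective = 1.0 - w * (1.0 - irl)
        scores.append(effective)
    return min(scores)

trls = {"sensor": 0.9, "bus": 0.8, "radio": 0.7}
arcs = [("sensor", "bus", 0.4, 1.0),   # critical, immature interface
        ("bus", "radio", 0.3, 0.1)]    # very immature but barely coupled
print(subsystem_srl(trls, arcs))  # 0.4: the critical interface sets the SRL
```

Note how the barely coupled arc (IRL 0.3, weight 0.1) maps to 0.93 and no longer drags the subsystem down, which is exactly the over-importance shortfall the abstract addresses.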
Procedia PDF Downloads 92
19 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as A 'Quantum Leap'
Authors: Anthony Coogan
Abstract:
Born's probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion: the actual experimental results were not of a single electron; rather, they were groups of reflected X-ray photons. The vertical waveform used by Schrödinger in his particle-in-a-box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels. The proposed bar chart would be populated by reflected photons. Expansion of the basic ideas: part of Schrödinger's particle-in-a-box theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate, rather than instantaneously. However, there may be one notable exception. Supposedly, the uncertainty principle was derived from this theory; may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that such waveforms did not exist. Complex waveforms representing a particle are usually assumed to be continuous. The actual observations made were of X-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born's perspective, doing similar work in the years in question, 1926-27, he would also have considered a single electron, leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events.
Born's interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g., collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moved in the observer's direction after the electron had moved away. Astronomers may say that they 'look out into the universe', but this logic is opposed to the views of Newton and Hooke and of many observers such as Romer, in that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications for the nature of complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.
Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle
Procedia PDF Downloads 199
18 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring, marine wildlife management, etc. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), computer vision-based navigation, etc., have been proposed. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN.
The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
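The hop-distance coordinate scheme described above can be sketched directly: each node's coordinate vector is its hop count to every surface (landmark) node, computed by breadth-first search over the acoustic-link graph. The small topology below is a made-up example, not the simulated network.

```python
from collections import deque

# Sketch of the hop-distance coordinate scheme: each node's coordinate
# vector is its hop count to every surface (landmark) node, via BFS.
# The acoustic-link topology below is an invented example.

def hop_coordinates(adj, landmarks):
    coords = {n: [] for n in adj}
    for lm in landmarks:
        dist = {lm: 0}
        queue = deque([lm])
        while queue:                       # standard BFS from the landmark
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for n in adj:
            coords[n].append(dist.get(n, float("inf")))  # inf if unreachable
    return coords

# s1, s2 are surface nodes; a, b, c are submerged sensors.
adj = {"s1": ["a"], "s2": ["b"], "a": ["s1", "b", "c"],
       "b": ["s2", "a"], "c": ["a"]}
coords = hop_coordinates(adj, ["s1", "s2"])
print(coords["c"])  # [2, 3]: two hops from s1, three from s2
```

An AUV heading for a node with a given coordinate vector would then greedily move to whichever neighbor minimizes the distance between coordinate vectors, hop by hop.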
Procedia PDF Downloads 97
17 Effect of Ease of Doing Business to Economic Growth among Selected Countries in Asia
Authors: Teodorica G. Ani
Abstract:
Economic activity requires an encouraging regulatory environment and effective rules that are transparent and accessible to all. The World Bank has been publishing the annual Doing Business reports since 2004 to investigate the scope and manner of regulations that enhance business activity and those that constrain it. A streamlined business environment supporting the development of competitive small and medium enterprises (SMEs) may expand employment opportunities and improve the living conditions of low-income households. Asia has emerged as one of the most attractive markets in the world, and economies in East Asia and the Pacific were among the most active in making it easier for local firms to do business. The study aimed to describe the ease of doing business and its effect on economic growth among selected economies in Asia for the year 2014. The study covered 29 economies in East Asia, Southeast Asia, South Asia, and Central Asia. Ease of doing business is measured by the Doing Business indicators (DBI) of the World Bank. The indicators cover ten aspects of the ease of doing business: starting a business, dealing with construction permits, getting electricity, registering property, getting credit, protecting investors, paying taxes, trading across borders, enforcing contracts, and resolving insolvency. In the study, Gross Domestic Product (GDP) was used as the proxy variable for economic growth. A descriptive research design was used. Graphical analysis was used to describe income and doing business among the selected economies, and multiple regression was used to determine the effect of doing business on economic growth. The study presented the income among the selected economies: the graph showed that China has the highest income while the Maldives has the lowest, an observation supported by the gathered literature. The study also presented the status of the ten Doing Business indicators among the selected economies.
The graphs showed varying trends in how easy it is to start a business, deal with construction permits, and register property. Starting a business is easiest in Singapore, followed by Hong Kong. The study found that the variation in ease of doing business is explained by starting a business, dealing with construction permits, and registering property. Moreover, the regression result implies that a one-day increase in the average number of days it takes to complete a procedure decreases the value of GDP in general. The research proposed inputs to policy which may increase the awareness of local government units of different economies regarding the simplification of the policies behind the different components used in measuring doing business.
Keywords: doing business, economic growth, gross domestic product, Asia
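The negative relationship the regression implies can be illustrated with a least-squares fit of GDP on a single indicator (average days to complete a procedure). The data points below are invented for illustration; they are not the study's 29-economy dataset, and the study itself used multiple regression over several indicators.

```python
# Illustrative only: ordinary least-squares fit of GDP on one Doing Business
# indicator (average days to complete a procedure), showing the negative
# slope the regression result implies. The data points are invented.

def ols_slope_intercept(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

days = [5, 10, 20, 40, 60]          # avg days to complete a procedure
gdp = [900, 800, 650, 400, 200]     # GDP in hypothetical units
slope, intercept = ols_slope_intercept(days, gdp)
print(slope < 0)  # True: each extra day is associated with lower GDP
```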
Procedia PDF Downloads 379
16 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots
Authors: Mrinalini Ranjan, Sudheesh Chethil
Abstract:
Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuations from local linear trends. The scale invariance of these signals is well captured in the multifractal characterisation using DFA. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in epileptic EEG signals, which quantify short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure as compared to the pre-seizure period, with a subsequent increase during the post-seizure period, while the long-term scaling exponent increases during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identification of the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient.
To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices of recurrence of a dynamical state. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters such as diagonal length, entropy, recurrence rate, determinism, etc., for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that the RQA parameters are higher during the seizure period than the post-seizure values, whereas for some patients the post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in a better understanding of the characterisation of epileptic EEG signals from a nonlinear analysis.
Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots
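The DFA procedure used above (integrate the signal, remove the local linear trend in boxes of size n, fit log F(n) against log n) can be sketched as follows. The input is a toy sine signal, not patient EEG, and the box sizes are arbitrary choices for illustration.

```python
import math

# Bare-bones DFA sketch: integrate the signal, detrend each box of size n
# with a linear fit, and estimate the scaling exponent alpha as the slope
# of log F(n) vs log n. Toy input; box sizes are arbitrary illustrations.

def dfa_exponent(signal, box_sizes):
    mean = sum(signal) / len(signal)
    profile, total = [], 0.0
    for v in signal:
        total += v - mean
        profile.append(total)              # integrated (cumulative) signal
    log_n, log_f = [], []
    for n in box_sizes:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            ys = profile[start:start + n]
            xs = list(range(n))
            mx, my = sum(xs) / n, sum(ys) / n
            b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
            a = my - b * mx                # local linear trend a + b*x
            sq += sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
            count += n
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sq / count)))
    mx, my = sum(log_n) / len(log_n), sum(log_f) / len(log_f)
    return (sum((x - mx) * (y - my) for x, y in zip(log_n, log_f))
            / sum((x - mx) ** 2 for x in log_n))

signal = [math.sin(0.05 * i) for i in range(1000)]
alpha = dfa_exponent(signal, [8, 16, 32, 64])
print(alpha > 0.5)  # smooth, strongly correlated signal: alpha above 0.5
```

On real EEG, alpha between 0.5 and 1, as the abstract reports, indicates persistent long-range temporal correlations; alpha = 0.5 corresponds to uncorrelated noise.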
Procedia PDF Downloads 17415 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted.
Efficient communication among MPI processes is achieved by combining MPI derived data types with the features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy; as an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
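The load-balancing idea of partitioning agents along the employer-employee graph can be sketched without MPI itself: assign each firm, together with all its employees, to one rank so that the employer-employee edges never cross a partition, while keeping per-rank agent counts near-equal. The greedy heuristic and the firm sizes below are illustrative choices, not the paper's actual partitioner.

```python
import heapq

# Hedged sketch of the load-balancing idea: assign each firm together with
# its employees to one MPI rank (so employer-employee edges stay local),
# keeping ranks near-equal in agent count. Greedy longest-processing-time
# heuristic and firm sizes are illustrative, not the paper's partitioner.

def partition_firms(firm_sizes, n_ranks):
    """Assign firms (with their employees) to ranks, largest firms first."""
    heap = [(0, rank, []) for rank in range(n_ranks)]  # (load, rank, firms)
    heapq.heapify(heap)
    for firm, size in sorted(firm_sizes.items(), key=lambda kv: -kv[1]):
        load, rank, firms = heapq.heappop(heap)  # least-loaded rank so far
        firms.append(firm)
        heapq.heappush(heap, (load + size, rank, firms))
    return sorted(heap)

firm_sizes = {"f1": 500, "f2": 300, "f3": 300, "f4": 200, "f5": 100}
ranks = partition_firms(firm_sizes, 2)
loads = [load for load, _, _ in ranks]
print(loads)  # [700, 700]: near-balanced agent counts per rank
```

The remaining dense interaction graphs (consumption markets, etc.) would then be served across ranks through the proxy entities the abstract mentions (sales outlets, local banks, and so on).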
Procedia PDF Downloads 127
14 Reconstruction of Alveolar Bone Defects Using Bone Morphogenetic Protein 2 Mediated Rabbit Dental Pulp Stem Cells Seeded on Nano-Hydroxyapatite/Collagen/Poly(L-Lactide)
Authors: Ling-Ling E., Hong-Chen Liu, Dong-Sheng Wang, Fang Su, Xia Wu, Zhan-Ping Shi, Yan Lv, Jia-Zhu Wang
Abstract:
Objective: The objective of the present study is to evaluate the capacity of a tissue-engineered bone complex of recombinant human bone morphogenetic protein 2 (rhBMP-2) mediated dental pulp stem cells (DPSCs) and nano-hydroxyapatite/collagen/poly(L-lactide)(nHAC/PLA) to reconstruct critical-size alveolar bone defects in New Zealand rabbit. Methods: Autologous DPSCs were isolated from rabbit dental pulp tissue and expanded ex vivo to enrich DPSCs numbers, and then their attachment and differentiation capability were evaluated when cultured on the culture plate or nHAC/PLA. The alveolar bone defects were treated with nHAC/PLA, nHAC/PLA+rhBMP-2, nHAC/PLA+DPSCs, nHAC/PLA+DPSCs+rhBMP-2, and autogenous bone (AB) obtained from iliac bone or were left untreated as a control. X-ray and a polychrome sequential fluorescent labeling were performed post-operatively and the animals were sacrificed 12 weeks after operation for histological observation and histomorphometric analysis. Results: Our results showed that DPSCs expressed STRO-1 and vementin, and favoured osteogenesis and adipogenesis in conditioned media. DPSCs attached and spread well, and retained their osteogenic phenotypes on nHAC/PLA. The rhBMP-2 could significantly increase protein content, alkaline phosphatase (ALP) activity/protein, osteocalcin (OCN) content, and mineral formation of DPSCs cultured on nHAC/PLA. The X-ray graph, the fluorescent, histological observation and histomorphometric analysis showed that the nHAC/PLA+DPSCs+rhBMP-2 tissue-engineered bone complex had an earlier mineralization and more bone formation inside the scaffold than nHAC/PLA, nHAC/PLA+rhBMP-2 and nHAC/PLA+DPSCs, or even autologous bone. Implanted DPSCs contribution to new bone were detected through transfected eGFP genes. Conclutions: Our findings indicated that stem cells existed in adult rabbit dental pulp tissue. The rhBMP-2 promoted osteogenic capability of DPSCs as a potential cell source for periodontal bone regeneration. 
The nHAC/PLA could serve as a good scaffold for autologous DPSC seeding, proliferation and differentiation. The tissue-engineered bone complex with nHAC/PLA, rhBMP-2, and autologous DPSCs might be a better alternative to autologous bone for the clinical reconstruction of periodontal bone defects. Keywords: nano-hydroxyapatite/collagen/poly (L-lactide), dental pulp stem cell, recombinant human bone morphogenetic protein, bone tissue engineering, alveolar bone
Procedia PDF Downloads 397
13 Evaluation of Antimicrobial and Anti-Inflammatory Activity of Doani Sidr Honey and Madecassoside against Propionibacterium Acnes
Authors: Hana Al-Baghaoi, Kumar Shiva Gubbiyappa, Mayuren Candasamy, Kiruthiga Perumal Vijayaraman
Abstract:
Acne is a chronic inflammatory disease of the sebaceous glands characterized by areas of skin with seborrhea, comedones, papules, pustules, nodules, and possibly scarring. Propionibacterium acnes (P. acnes) plays a key role in the pathogenesis of acne. Its colonization and proliferation trigger the host’s inflammatory response, leading to the production of pro-inflammatory cytokines such as interleukin-8 (IL-8) and tumour necrosis factor-α (TNF-α). The usage of honey and natural compounds to treat skin ailments has strong support in the current trend of drug discovery. The present study was carried out to evaluate the antimicrobial and anti-inflammatory potential of Doani Sidr honey and its fractions against P. acnes and to screen madecassoside alone and in combination with fractions of the honey. The broth dilution method was used to assess the antibacterial activity. Also, ultrastructural changes in cell morphology were studied before and after exposure to Sidr honey using transmission electron microscopy (TEM). Three non-toxic concentrations of the samples were investigated for suppression of the cytokines IL-8 and TNF-α by testing the cell supernatants in a co-culture of human peripheral blood mononuclear cells (hPBMCs) and heat-killed P. acnes using enzyme immunoassay kits (ELISA). The results obtained were evaluated by statistical analysis using GraphPad Prism 5 software. The Doani Sidr honey and its polysaccharide fraction were able to inhibit the growth of P. acnes with noteworthy minimum inhibitory concentration (MIC) values of 18% (w/v) and 29% (w/v), respectively. The proximity of the MIC and minimum bactericidal concentration (MBC) values indicates that Doani Sidr honey had a bactericidal effect against P. acnes, which was confirmed by TEM analysis. TEM images of P. acnes after treatment with Doani Sidr honey showed complete physical membrane damage and lysis of cells, whereas non-honey-treated cells (control) did not show any damage. 
In addition, Doani Sidr honey and its fractions significantly inhibited (>90%) the secretion of the pro-inflammatory cytokines TNF-α and IL-8 by hPBMCs pretreated with heat-killed P. acnes. However, no significant inhibition was detected for madecassoside at its highest concentration tested. Our results suggest that Doani Sidr honey possesses both antimicrobial and anti-inflammatory effects against P. acnes and can possibly be used as a therapeutic agent for acne. Furthermore, the polysaccharide fraction derived from Doani Sidr honey showed a potent inhibitory effect toward P. acnes. Hence, we hypothesize that this fraction prepared from Sidr honey might be contributing to the antimicrobial and anti-inflammatory activity. Therefore, this polysaccharide fraction of Doani Sidr honey needs to be further explored and characterized for the various phytochemicals that contribute to its antimicrobial and anti-inflammatory properties. Keywords: Doani sidr honey, Propionibacterium acnes, IL-8, TNF alpha
Procedia PDF Downloads 399
12 Structural Balance and Creative Tensions in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New product development (NPD) involves team members coming together and working in teams to come up with innovative solutions to problems, resulting in new products. Thus, a core attribute of a successful NPD team is its creativity and innovation. Team members need to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user’s needs. They also need to be very efficient in their teamwork as they work through the various stages of the development of these ideas, resulting in a POC (proof-of-concept) implementation or a prototype of the product. There are two distinctive traits that the teams need to have: one is ideational creativity, and the other is effective and efficient teamworking. Each of these traits causes multiple types of tensions in the teams, and these tensions are reflected in the team dynamics. Ideational conflicts arising out of debates and deliberations increase the collective knowledge and affect team creativity positively. However, the same trait of challenging each other’s viewpoints might lead team members to be disruptive, resulting in interpersonal tensions, which in turn lead to less than efficient teamwork. Teams that foster and effectively manage these creative tensions are successful, and teams that are not able to manage these tensions show poor team performance. In this paper, we explore these tensions as they manifest in the team communication social network and propose a Creative Tension Balance index, along the lines of the degree of balance in social networks, that has the potential to highlight the successful (and unsuccessful) NPD teams. Team communication reflects the team dynamics among team members and is the data set for analysis. 
The emails between the members of the NPD teams are processed through a semantic analysis algorithm (latent semantic analysis, LSA) to analyze the content of communication, and a semantic similarity analysis is used to arrive at a social network graph that depicts the communication among team members based on the content of that communication. This social network is subjected to traditional social network analysis methods to arrive at established metrics and structural balance analysis metrics. Traditional structural balance is extended to include team interaction pattern metrics to arrive at a creative tension balance metric that effectively captures the creative tensions and tension balance in teams. This Creative Tension Balance (CTB) metric captures the signatures of successful and unsuccessful (dissonant) NPD teams. The dataset for this research study comprises 23 NPD teams spread over multiple semesters; the CTB metric is computed for each team and used to identify the most successful and unsuccessful teams by classifying them into low-, medium- and high-performing groups. The results are correlated to the team reflections (for team dynamics and interaction patterns), the team self-evaluation feedback surveys (for teamwork metrics) and team performance through a comprehensive team grade (for high- and low-performing team signatures). Keywords: team dynamics, social network analysis, new product development teamwork, structural balance, NPD teams
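The classical degree of balance that the CTB metric extends can be sketched in a few lines. The following is a minimal illustration of Heider/Cartwright-Harary structural balance on a signed communication graph, not the authors' CTB computation; the team members and edge signs are hypothetical, with +1 marking a constructive tie and -1 an interpersonal tension:

```python
from itertools import combinations

def balance_index(edges):
    """Fraction of closed triads whose edge-sign product is positive
    (the classical Cartwright-Harary degree of balance)."""
    sign = {}
    nodes = set()
    for u, v, s in edges:
        sign[frozenset((u, v))] = s
        nodes.update((u, v))
    balanced = total = 0
    for a, b, c in combinations(sorted(nodes), 3):
        keys = [frozenset((a, b)), frozenset((b, c)), frozenset((a, c))]
        if all(k in sign for k in keys):  # only closed triads count
            total += 1
            balanced += sign[keys[0]] * sign[keys[1]] * sign[keys[2]] > 0
    return balanced / total if total else 1.0

# Hypothetical 4-person NPD team: one tension edge (A-C) unbalances
# the two triangles that contain it.
team = [("A", "B", +1), ("B", "C", +1), ("A", "C", -1),
        ("A", "D", +1), ("B", "D", +1), ("C", "D", +1)]
print(balance_index(team))  # → 0.5 (2 of 4 triangles balanced)
```

A fully positive clique scores 1.0; the paper's CTB index augments this kind of triad count with team interaction pattern metrics.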
Procedia PDF Downloads 79
11 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty anticipating issues in the clinical workflow until tests are running late or in error, and even then it can be difficult to pinpoint the cause or to predict further issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to activate a sufficient number of technicians. This would expedite the clinical workload and processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. 
Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software-agent technologies, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results. Keywords: laboratory process, optimization, pathology, computer simulation, workflow
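The traffic-light coding described above amounts to thresholding each stage's load factor (arrival rate over processing capacity). A minimal sketch of that mapping follows; the paper does not specify thresholds or stage names, so the 75%/95% cut-offs and the three-stage pipeline here are purely illustrative assumptions:

```python
def stage_status(arrivals_per_hour, capacity_per_hour,
                 slow=0.75, critical=0.95):
    """Map a stage's load factor to the simulator's traffic-light code.
    The slow/critical thresholds are hypothetical, not from the paper."""
    load = arrivals_per_hour / capacity_per_hour
    if load >= critical:
        return "red"      # critical flow: bottleneck forming
    if load >= slow:
        return "orange"   # slow flow: approaching capacity
    return "green"        # normal flow

# Hypothetical pipeline: (specimens arriving/hour, stage capacity/hour)
pipeline = {"reception": (40, 60), "centrifuge": (40, 45), "analyser": (40, 42)}
status = {stage: stage_status(a, c) for stage, (a, c) in pipeline.items()}
print(status)  # → {'reception': 'green', 'centrifuge': 'orange', 'analyser': 'red'}
```

In the real simulator the load factors would be recomputed from the live Oracle feed on each animation tick rather than from fixed rates.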
Procedia PDF Downloads 286
10 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design
Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez
Abstract:
Coffee is the second most consumed commodity worldwide, yet it also generates colossal amounts of waste. Proper management of coffee waste can be achieved by converting it into products with higher added value, achieving sustainability of the economic and ecological footprint and protecting the environment. Based on this, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced by the coffee industry. The fact that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture and, especially, have a high oil content (between 7-15% of their total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their use of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and to ensure a more environment-friendly production. Under this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction with n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated as independent variables using a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 (StatSoft) software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. 
The validation of the model by analysis of variance (ANOVA) showed good adjustment of the results obtained for a 95% confidence interval, and the graph of predicted vs. experimental values confirmed the satisfactory correlation of the model. Besides, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for extracting oil as a more economical, green, and efficient way when compared to the Soxhlet method. Keywords: coffee waste, optimization, oil yield, statistical planning
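The run layout of a 2³ rotatable central composite design like the one used here is fixed by textbook rules: 2³ = 8 factorial corner points at ±1, 2·3 = 6 axial points at ±α with α = (2³)^(1/4) ≈ 1.682 for rotatability, plus replicated centre points. A small sketch generating that coded design matrix (the choice of six centre points is an illustrative assumption; the paper does not state its replication count):

```python
import itertools

def ccrd(k=3, n_center=6):
    """Coded run matrix of a rotatable central composite design:
    2**k factorial points, 2*k axial points at alpha = (2**k)**0.25,
    and n_center replicated centre points."""
    alpha = (2 ** k) ** 0.25
    factorial = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):            # one +/- pair per factor axis
        for s in (-alpha, alpha):
            row = [0.0] * k
            row[i] = s
            axial.append(row)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

design = ccrd()
print(len(design))  # → 20 runs: 8 factorial + 6 axial + 6 centre
```

Each coded level would then be decoded to real units (ºC, min, solvent-to-solid ratio) over whatever ranges the experimenters chose before fitting the quadratic response surface.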
Procedia PDF Downloads 119
9 Clinical Application of Measurement of Eyeball Movement for Diagnosis of Autism
Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii
Abstract:
This paper describes the development of an objective index using the measurement of subtle eyeball movements to diagnose autism. Assessment of developmental disabilities varies, and diagnosis depends on the subjective judgment of professionals. Therefore, a supplementary inspection method that enables anyone to obtain the same quantitative judgment is needed. In conventional autism studies, diagnoses are made based on a comparison of the time spent gazing at an object, but the results do not match. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the overlapping parts based on an afterimage. Then we developed an objective evaluation indicator that distinguishes non-autistic and autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and his/her eyeballs stay fixed at that point, the eyes perform subtle fixational movements (i.e., tremor, drift, and microsaccades) to keep the retinal image clear. In particular, microsaccades are linked to nerves and reflect the mechanisms that process sight in the brain. We converted the differences between these movements into numbers. The process of the conversion is as follows: 1) Select the pixels indicating the subject's pupil from images of captured frames. 2) Set up a reference image, known as an afterimage, from the pixels indicating the subject's pupil. 3) Divide the pupil of the subject into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels in the part overlapping the present pixels, based on the afterimage. 5) Process the images with precision at 24-30 fps from a camera and convert the amount of change in the pixels of the subtle movements of the right and left eyeballs into numbers. 
The difference in the area of the amount of change is obtained by measuring the difference between the afterimage in consecutive frames and the present frame. We took this amount of change as the quantity of the subtle eyeball movements. This method made it possible to detect changes in eyeball vibration as numerical values. By comparing the numerical values between the right and left eyes, we found that there is a difference in how much they move. We compared this difference in movement between non-autistic and autistic people and analyzed the result. Our research subjects consisted of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values through pursuit movements and fixations. We converted the difference in subtle movements between the right and left eyes into a graph and defined it as a multidimensional measure. Then we set the identification border using the density function of the distribution, the cumulative frequency function, and the ROC curve. With this, we established an objective index to determine autism, normal, false positive, and false negative classifications. Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve
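Step 4 of the conversion pipeline (per-quadrant overlap counting between the afterimage and the current frame) can be sketched as follows. This is a simplified illustration, assuming the pupil has already been segmented into binary masks; the 4×4 toy masks and quadrant labels (TL/TR/BL/BR) are hypothetical, not from the paper:

```python
def quadrant_overlap(afterimage, frame):
    """Split a binary pupil mask into four quadrants about its centre and
    count, per quadrant, the pixels set in both the afterimage and the
    current frame. The drop in overlap between frames measures the
    subtle eyeball movement."""
    h, w = len(afterimage), len(afterimage[0])
    cy, cx = h // 2, w // 2
    counts = {"TL": 0, "TR": 0, "BL": 0, "BR": 0}
    for y in range(h):
        for x in range(w):
            if afterimage[y][x] and frame[y][x]:
                q = ("T" if y < cy else "B") + ("L" if x < cx else "R")
                counts[q] += 1
    return counts

afterimage = [[1] * 4 for _ in range(4)]
frame = [[0, 1, 1, 1] for _ in range(4)]   # pupil shifted one pixel right
print(quadrant_overlap(afterimage, frame))
# → {'TL': 2, 'TR': 4, 'BL': 2, 'BR': 4}
```

Running this per frame at 24-30 fps for each eye, and differencing the counts over time, yields the left/right movement quantities that the study compares.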
Procedia PDF Downloads 278