1032 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, a multi-machine Apriori algorithm is needed. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform that outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel of our work and targets implementing, testing, and benchmarking Apriori, Reduced-Apriori, and our new algorithm, ReducedAll-Apriori, on Apache Flink, comparing them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows the next iteration to start as soon as partial results of the earlier iteration are available; there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
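The level-wise candidate-generation-and-pruning loop that makes Apriori so iterative (and hence disk-I/O-heavy on Hadoop) can be sketched as follows. This is a minimal single-machine illustration of classic Apriori only, not the authors' RA-Apriori or its Flink implementation:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: level-wise mining of frequent itemsets.

    transactions: list of sets of items; min_support: absolute count.
    Returns a dict mapping each frequent itemset (frozenset) to its support.
    """
    # Level 1: count single items.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate generation: join frequent (k-1)-itemsets.
        prev = list(frequent)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # Prune any candidate with an infrequent (k-1)-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        # Count supports with one pass over the data (the costly, iterated step).
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result

txns = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
freq = apriori(txns, min_support=3)   # {a}, {b}, {c}, {a,b}, {a,c}, {b,c}
```

Each pass over `transactions` corresponds to one MapReduce round; pipelining engines such as Flink can overlap these rounds instead of materializing each one to disk.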
Procedia PDF Downloads 294
1031 Beyond Voluntary Corporate Social Responsibility: Examining the Impact of the New Mandatory Community Development Agreement in the Mining Sector of Sierra Leone
Authors: Wusu Conteh
Abstract:
Since the 1990s, neo-liberalization has become a global agenda. The free market ushered in an unprecedented drive by Multinational Corporations (MNCs) to secure mineral rights in resource-rich countries. Several governments in the Global South implemented liberalized mining policies with support from the International Financial Institutions (IFIs). MNCs have maintained that voluntary Corporate Social Responsibility (CSR) has engendered socio-economic development in mining-affected communities. However, most resource-rich countries are struggling to transform their resources into sustainable socio-economic development; they are trapped in what has been widely described as the ‘resource curse.’ In an attempt to address this resource conundrum, the African Mining Vision (AMV) of 2009 developed a model of resource governance. The advent of the AMV has engendered the introduction of the mandatory community development agreement (CDA) into the legal frameworks of many countries in Africa. In 2009, Sierra Leone enacted the Mines and Minerals Act, which obligates mining companies to invest in Primary Host Communities. The study employs interviews and field observation techniques to explicate the dynamics of the CDA program. A total of 25 respondents (government officials, NGOs/CSOs, and community stakeholders) were interviewed. The study focuses on a case study of the Sierra Rutile CDA program in Sierra Leone. Extant scholarly works have extensively explored the resource curse and voluntary CSR, but few studies have examined the mandatory CDA and its impact on socio-economic development in mining-affected communities. Thus, the purpose of this study is to explicate the impact of the CDA in Sierra Leone. The theory of change helps to understand how the availability of mandatory funds can empower communities to take an active part in decision making related to the development of their communities.
The results show that the CDA has engendered a predictable fund for community development. It has also empowered ordinary members of the community to determine the development program. However, the CDA has created a new ground for contestation between the pre-existing local governance structure (traditional authority) and the newly created community development committee (CDC), which is headed by an ordinary member of the community.
Keywords: community development agreement, impact, mandatory, participation
Procedia PDF Downloads 123
1030 The Impact of External Technology Acquisition and Exploitation on Firms' Process Innovation Performance
Authors: Thammanoon Charmjuree, Yuosre F. Badir, Umar Safdar
Abstract:
There is a consensus among innovation scholars that knowledge is a vital antecedent of a firm's innovation, e.g., process innovation. Recently, more open approaches to innovation have received increasing attention. This open model emphasizes the use of purposive flows of knowledge across organizational boundaries. Firms adopt an open innovation strategy to improve their innovation performance by bringing knowledge into the organization (inbound open innovation) to accelerate internal innovation or by transferring knowledge outside (outbound open innovation) to expand the markets for external use of innovation. Reviewing open innovation research reveals the following. First, the majority of existing studies have focused on inbound open innovation and less on outbound open innovation. Second, limited research has considered the possible interaction between the two and how this interaction may impact the firm's innovation performance. Third, scholars have focused mainly on the impact of open innovation strategy on product innovation and less on process innovation. Therefore, our knowledge of the relationship between firms' inbound and outbound open innovation and how these two impact process innovation is still limited. This study focuses on the firm's external technology acquisition (ETA), external technology exploitation (ETE), and process innovation performance. ETA represents inbound openness, in which firms rely on the acquisition and absorption of external technologies to complement their technology portfolios. ETE, on the other hand, refers to commercializing technology assets exclusively or in addition to their internal application.
This study hypothesized that both ETA and ETE have a positive relationship with process innovation performance and that ETE fully mediates the relationship between ETA and process innovation performance, i.e., ETA has a positive impact on ETE, and in turn, ETE has a positive impact on process innovation performance. This study empirically explored these hypotheses in software development firms in Thailand. These firms were randomly selected from a list of software firms registered with the Department of Business Development, Ministry of Commerce of Thailand. Questionnaires were sent to 1689 firms. After follow-ups and periodic reminders, we obtained 329 (19.48%) completed usable questionnaires. Structural equation modeling (SEM) was used to analyze the data. The analysis of the 329 firms provides support for our three hypotheses: First, the firm's ETA has a positive impact on its process innovation performance. Second, the firm's ETA has a positive impact on its ETE. Third, the firm's ETE fully mediates the relationship between the firm's ETA and its process innovation performance. This study fills a gap in the open innovation literature by examining the relationship between inbound (ETA) and outbound (ETE) open innovation and suggests that, to benefit from the promises of openness, firms must engage in both. The study goes one step further by explaining the mechanism through which ETA influences process innovation performance.
Keywords: process innovation performance, external technology acquisition, external technology exploitation, open innovation
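The full-mediation claim decomposes the total effect of ETA into a direct path plus an indirect path a×b through ETE. The product-of-coefficients logic can be illustrated with ordinary least squares on synthetic data; this is only a sketch of that logic, not the authors' SEM analysis, and the coefficients 0.6 and 0.8 are invented values chosen so that ETE fully mediates the effect:

```python
import random

def ols(X, y):
    """Least-squares coefficients via the normal equations (Gaussian elimination).
    X: list of rows, each starting with 1.0 for the intercept; y: list of targets."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]  # X'X
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]            # X'y
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(1)
n = 500
eta_x = [random.gauss(0, 1) for _ in range(n)]            # ETA (inbound openness)
ete_m = [0.6 * x + random.gauss(0, 0.3) for x in eta_x]   # path a: ETA -> ETE
perf_y = [0.8 * m + random.gauss(0, 0.3) for m in ete_m]  # path b: ETE -> performance

a = ols([[1.0, x] for x in eta_x], ete_m)[1]              # a: effect of ETA on ETE
c_total = ols([[1.0, x] for x in eta_x], perf_y)[1]       # c: total effect of ETA
_, bcoef, c_direct = ols([[1.0, m, x] for m, x in zip(ete_m, eta_x)], perf_y)
indirect = a * bcoef                                      # mediated (indirect) effect
```

With full mediation, the direct effect `c_direct` shrinks toward zero, and the OLS identity total = direct + indirect holds exactly.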
Procedia PDF Downloads 202
1029 Carotenoid Bioaccessibility: Effects of Food Matrix and Excipient Foods
Authors: Birgul Hizlar, Sibel Karakaya
Abstract:
Recently, increasing attention has been given to carotenoid bioaccessibility and bioavailability in the field of nutrition research. As a consequence of their lipophilic nature and their specific localization in plant-based tissues, carotenoid bioaccessibility and bioavailability are generally quite low in raw fruits and vegetables, since carotenoids need to be released from the cellular matrix and incorporated into the lipid fraction during digestion before being absorbed. Today's approach to improving bioaccessibility is to design the food matrix. Recently, the newest approach, excipient food, has been introduced to improve the bioavailability of orally administered bioactive compounds. The main idea is combining a food with another food (the excipient food) whose composition and/or structure is specifically designed to improve health benefits. In this study, the effects of food processing, food matrix, and the addition of excipient foods on the carotenoid bioaccessibility of carrots were determined. Different excipient foods (olive oil, lemon juice, and whey curd) and different food matrices (grated, boiled, and mashed) were used. Total carotenoid contents of the grated, boiled, and mashed carrots were 57.23, 51.11, and 62.10 μg/g, respectively. The absence of significant differences among these values indicates that these treatments had no effect on the release of carotenoids from the food matrix. In contrast, changes in the food matrix, especially mashing, caused a significant increase in the carotenoid bioaccessibility: although the carotenoid bioaccessibility was 10.76% in grated carrots, it was 18.19% in mashed carrots (p<0.05). Addition of olive oil and lemon juice as excipients to the grated carrots caused 1.23-fold and 1.67-fold increases in the carotenoid content and the carotenoid bioaccessibility, respectively. However, addition of the excipient foods to the boiled carrot samples did not influence the release of carotenoids from the food matrix.
Nevertheless, the addition of the excipient foods to the boiled carrots increased the carotenoid bioaccessibility by up to 1.9-fold: the bioaccessibility increased from 14.20% to 27.12% with the addition of olive oil, lemon juice, and whey curd. The highest carotenoid content among the mashed carrots was found in those incorporating olive oil and lemon juice. This combination also caused a significant increase in the carotenoid bioaccessibility, from 18.19% to 29.94% (p<0.05). Comparing the effects of the treatments on the carotenoid bioaccessibility, mashed carrots containing olive oil, lemon juice, and whey curd had the highest carotenoid bioaccessibility; the increase was approximately 81% when the grated and mashed samples containing olive oil, lemon juice, and whey curd were compared. In conclusion, these results demonstrate that the food matrix and the addition of excipient foods have a significant effect on the carotenoid content and the carotenoid bioaccessibility.
Keywords: carrot, carotenoids, excipient foods, food matrix
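Bioaccessibility here is the percentage of total carotenoids transferred to the absorbable (micellar) fraction during simulated digestion. A minimal sketch of the percentage and fold-change arithmetic, reusing the grated (10.76%) and mashed (18.19%) figures from the abstract:

```python
def bioaccessibility_pct(micellar_ug, total_ug):
    """Percent of carotenoids recovered in the micellar (absorbable) fraction."""
    return 100.0 * micellar_ug / total_ug

def fold_change(treated_pct, control_pct):
    """How many times higher the treated bioaccessibility is versus the control."""
    return treated_pct / control_pct

grated, mashed = 10.76, 18.19          # % bioaccessibility values from the abstract
ratio = fold_change(mashed, grated)    # mashing gave roughly a 1.69-fold increase
```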
Procedia PDF Downloads 456
1028 Entropy in a Field of Emergence in an Aspect of Linguo-Culture
Authors: Nurvadi Albekov
Abstract:
The communicative situation is a basis that designates potential models of 'constructed forms', a motivated basis of a text, for a text can be assumed to be a product of the communicative situation. It is within the field of emergence that the models of a text that can potentially be prognosticated in a certain communicative situation are designated. Every text can be assumed to be a conceptual system structured on the basis of a certain communicative situation. However, in the process of 'structuring' a certain model of a 'conceptual system', the consciousness of a recipient is able to act only within the border of the field of emergence, for going beyond this border indicates misunderstanding of the communicative situation. On the basis of the communicative situation, we can witness the increment of meaning, where the synergizing of the informative model of communication, formed by using the invariant units of a language system, is a result of the verbalization of the communicative situation. The potential of the models of a text prognosticated within the field of emergence also depends on the communicative situation. The conception 'field of emergence' is interpreted as a unit of the language system having a poly-directed universal structure, implying the presence of the core, the center, and the periphery, and including different levels of means of the functioning system of language, both in terms of linguistic resources and in terms of extra-linguistic factors, whose interaction results in the increment of a text. The conception 'field of emergence' is considered the most promising in the analysis of texts: oral, written, printed, and electronic. As a unit of the language system, the field of emergence has several properties that predict its use in the study of a text at different levels. This work attempts an analysis of entropy in a text in the aspect of the linguo-cultural code, prognosticated within the model of the field of emergence.
The article describes the problem of entropy in the field of emergence caused by the influence of extra-linguistic factors. The increase in entropy is caused not only by the intrusion of foreign language resources but by the influence of the alien culture as a whole, and by the appearance of symbols non-typical for this very culture in the field of emergence. The borrowing of alien linguo-cultural symbols into the linguo-culture of the author is a reason for increasing entropy when constructing a text, both at the level of meaning and at the level of structure. It is nothing but the artificial formatting of lexical units that violates the stylistic unity of a phrase. It is noted that one of the important characteristics decreasing the entropy in the field of emergence is a typical similarity of the lexical and semantic resources of the different linguo-cultures in aspects of extra-linguistic factors.
Keywords: communicative situation, field of emergence, linguo-culture, entropy
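The abstract uses entropy in a qualitative, linguo-cultural sense; the standard quantitative counterpart is Shannon entropy over token frequencies, where more predictable (stylistically uniform) text scores lower. A minimal illustration, not the authors' method:

```python
from collections import Counter
from math import log2

def shannon_entropy(tokens):
    """Shannon entropy in bits per token: H = -sum(p_i * log2(p_i))."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

uniform = shannon_entropy(list("abcd"))   # 4 equiprobable symbols -> 2.0 bits
skewed = shannon_entropy(list("aaab"))    # more predictable -> lower entropy
```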
Procedia PDF Downloads 362
1027 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets
Authors: Ahmed Abdel Bary, Omneya Khowessah, Mojahed al-jamrah
Abstract:
Introduction: The major challenge in the design of oral dosage forms lies in their poor bioavailability. The most frequent causes of low oral bioavailability are poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of glimepiride (GMP) were prepared using two different ratios of two carriers, namely PEG 6000 and Pluronic F127, and adopting two different techniques, namely the solvent evaporation technique and the fusion technique. A 2³ full factorial design was adopted to investigate the influence of formulation variables on the properties of the prepared nanodispersions. The best formula of nanodispersed powder was formulated into tablets by direct compression. Differential scanning calorimetry (DSC) analysis and Fourier-transform infrared (FTIR) analysis were conducted for thermal behavior and surface structure characterization, respectively. The zeta potential and particle size of the prepared glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed that there was no interaction between GMP and any of the excipients used. Based on the resulting values of the different pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The resulting values of the other evaluated pre-compression parameters of the prepared solid nanodispersions were within pharmacopoeial limits.
The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, with particle sizes ranging from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1–F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in-vitro dissolution studies of the prepared solid nanodispersed tablets of GMP showed that the GMP–Pluronic F127 combination (F8) exhibited the best extent of drug release, compared to the other formulations and to the marketed product. One-way ANOVA of the percent of drug released from the prepared GMP nanodispersion formulae (F1–F8) after 20 and 60 minutes showed significant differences between the percent of drug released from the different GMP nanodispersed tablet formulae (F1–F8) (p<0.05). Conclusion: Preparation of glimepiride as nanodispersed particles proved to be a promising tool for enhancing the poor solubility of glimepiride.
Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water-soluble drugs
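The reported drug content and percent-released figures come from routine assay arithmetic: the assayed amount relative to the theoretical load, and the cumulative release relative to the dose. A hedged sketch with invented masses, not the study's data:

```python
def drug_content_pct(assayed_mg, theoretical_mg):
    """Drug content as a percentage of the theoretical (labeled) drug load."""
    return 100.0 * assayed_mg / theoretical_mg

def percent_released(released_mg, dose_mg):
    """Cumulative in-vitro drug release at a given time point."""
    return 100.0 * released_mg / dose_mg

# Invented example: 1.85 mg assayed against a 2.0 mg theoretical load.
content = drug_content_pct(assayed_mg=1.85, theoretical_mg=2.0)   # 92.5 %
```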
Procedia PDF Downloads 488
1026 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-speed Multiple-valued Logic System
Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue
Abstract:
The explosively increasing needs of data processing and information storage strongly drive the advancement from the binary logic system to the multiple-valued logic system. Their inherent negative differential resistance characteristic, ultra-high-speed switching time, and robust anti-irradiation capability make III-nitride resonant tunneling diodes one of the most promising candidates for multiple-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of an input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates, comprising double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. Microwave oscillation at room temperature was observed with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. Switching behavior measurements were successfully carried out, featuring rise and fall times on the order of picoseconds, which can be used in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be further improved.
In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new solution for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio
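The PVCR figure of merit is simply the peak current divided by the valley current that follows it in the I-V sweep. A sketch of that extraction on a toy sweep whose peak and valley values are invented to reproduce the reported PVCR of ~2.07 (not measured data):

```python
def peak_to_valley_ratio(currents):
    """PVCR of a negative-differential-resistance device: the peak current
    divided by the lowest (valley) current that follows the peak."""
    peak = max(range(len(currents)), key=lambda i: currents[i])
    return currents[peak] / min(currents[peak:])

# Toy I-V sweep with one NDR region (arbitrary units, evenly spaced bias steps).
current = [0.0, 40.0, 183.0, 120.0, 88.4, 95.0, 130.0]
pvcr = peak_to_valley_ratio(current)   # 183 / 88.4, i.e. about 2.07
```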
Procedia PDF Downloads 100
1025 Synthesis and Characterization of pH-Responsive Nanocarriers Based on POEOMA-b-PDPA Block Copolymers for RNA Delivery
Authors: Bruno Baptista, Andreia S. R. Oliveira, Patricia V. Mendonca, Jorge F. J. Coelho, Fani Sousa
Abstract:
Drug delivery systems are designed to allow adequate protection and controlled delivery of drugs to specific locations. These systems aim to reduce side effects and control the biodistribution profile of drugs, thus improving therapeutic efficacy. This study involved the synthesis of polymeric nanoparticles based on amphiphilic diblock copolymers comprising a biocompatible poly(oligo(ethylene oxide) methyl ether methacrylate) (POEOMA) as the hydrophilic segment and a pH-sensitive block, poly(2-(diisopropylamino)ethyl methacrylate) (PDPA). The objective of this work was the development of polymeric pH-responsive nanoparticles to encapsulate and carry small RNAs as a model for the further development of non-coding RNA delivery systems with therapeutic value. The pH-responsiveness of PDPA allows the electrostatic interaction of these copolymers with nucleic acids at acidic pH, as a result of the protonation of the tertiary amine groups of this polymer at pH values below its pKa (around 6.2). Initially, the molecular weight parameters and chemical structure of the block copolymers were determined by size exclusion chromatography (SEC) and nuclear magnetic resonance (1H-NMR) spectroscopy, respectively. Then, complexation with small RNAs was verified, generating polyplexes with sizes ranging from 300 to 600 nm and encapsulation efficiencies around 80%, depending on the molecular weight of the polymers, their composition, and the concentration used. The effect of pH on the morphology of the nanoparticles was evaluated by scanning electron microscopy (SEM), and it was verified that at higher pH values, the particles tend to lose their spherical shape. Since this work aims to develop systems for the delivery of non-coding RNAs, studies on RNA protection (contact with RNase, FBS, and trypsin) and cell viability were also carried out. It was found that the nanoparticles confer some protection against constituents of the cellular environment and have no cellular toxicity.
In summary, this research work contributes to the development of pH-sensitive polymers capable of protecting and encapsulating RNA in a relatively simple and efficient manner, to be further applied in drug delivery to specific sites where pH may play a critical role, as can occur in several cancer environments.
Keywords: drug delivery systems, pH-responsive polymers, POEOMA-b-PDPA, small RNAs
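The pH switch that drives RNA complexation follows from the acid-base equilibrium of PDPA's tertiary amines: below the pKa they protonate and become cationic. A sketch of the Henderson-Hasselbalch fraction protonated, using the reported pKa of about 6.2 (illustrative only, not the authors' characterization method):

```python
def fraction_protonated(ph, pka=6.2):
    """Fraction of PDPA tertiary amines protonated at a given pH.

    Henderson-Hasselbalch: [BH+] / ([B] + [BH+]) = 1 / (1 + 10**(pH - pKa)).
    """
    return 1.0 / (1.0 + 10 ** (ph - pka))

acidic = fraction_protonated(5.0)    # well below pKa: mostly cationic, binds RNA
neutral = fraction_protonated(7.4)   # physiological pH: mostly uncharged
```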
Procedia PDF Downloads 259
1024 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices
Authors: Alena Kulikova, Tatjana Kanonire
Abstract:
Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. Thus, it has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical, and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality intelligence measurement tools. Therefore, it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-and-pencil testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the obtained data [7] and increases time costs. Another problem with measuring IQ is that classical linear-structured tests do not fully allow measuring a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics allow these limitations of existing tools to be avoided.
However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made, straightforward solutions and requires additional research. In our presentation, we discuss the strengths and weaknesses of current approaches to intelligence measurement and highlight 'points of growth' for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses all the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design for creating and validating an original Russian comprehensive computer test measuring intellectual development in school-age children will be presented.
Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing
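The modern alternative to the criticized linear fixed-form structure is adaptive item selection: administer, at each step, the item most informative at the current ability estimate. A minimal sketch of that core step of computerized adaptive testing under a Rasch model (illustrative only, not the instrument under development):

```python
from math import exp

def p_correct(theta, b):
    """Rasch model: probability of a correct response at ability theta
    for an item of difficulty b."""
    return 1.0 / (1.0 + exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: I = p * (1 - p),
    maximal when difficulty matches ability."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta, difficulties):
    """Adaptive selection: pick the most informative item from the bank."""
    return max(range(len(difficulties)),
               key=lambda i: item_information(theta, difficulties[i]))

bank = [-2.0, -1.0, 0.0, 1.0, 2.0]                 # item difficulties
chosen = next_item(theta=0.9, difficulties=bank)   # picks the b = 1.0 item
```

Multistage testing applies the same idea at the level of pre-assembled item modules rather than single items.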
Procedia PDF Downloads 80
1023 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams
Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin
Abstract:
Fire incidents have steadily increased over recent years according to the National Emergency Management Agency of South Korea. Even though most fire incidents with property damage have occurred in buildings, rehabilitation has often been done without proper consideration of structural safety. Therefore, this study aims at evaluating rehabilitation effects on fire-damaged normal strength concrete beams through experiments and finite element analyses. For the experiments, reinforced concrete beams were fabricated with a design concrete strength of 21 MPa. Two different cover thicknesses were used, 40 mm and 50 mm. After curing, the fabricated beams were heated for 1 hour or 2 hours according to the ISO 834 standard time-temperature curve. Rehabilitation was done by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. Both fire-damaged beams and rehabilitated beams were tested with a four-point loading system to observe structural behaviors and the rehabilitation effect. To verify the experiments, finite element (FE) models for structural analysis were generated using the commercial software ABAQUS 6.10-3. For the rehabilitated beam models, integrated temperature-structural analyses were performed in advance to obtain the geometries of the fire-damaged beams. In addition to the fire-damaged beam models, the rehabilitated part was added with the material properties of polymeric mortar. Three-dimensional continuum brick elements were used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and non-linear geometrical analyses were performed. Test results showed that the maximum loads of the rehabilitated beams were 8~10% higher than those of the non-rehabilitated beams and even 1~6% higher than those of the non-fire-damaged beams.
The stiffness of the rehabilitated beams was also larger than that of the non-rehabilitated beams but smaller than that of the non-fire-damaged beams. In addition, the structural behaviors predicted from the analyses also showed a good rehabilitation effect, and the predicted load-deflection curves were similar to the experimental results. In this study, both the experiments and the analytical results demonstrated a good rehabilitation effect on the fire-damaged normal strength concrete beams. Furthermore, the proposed analytical method can be used to predict the structural behaviors of rehabilitated and fire-damaged concrete beams accurately without the time- and cost-consuming experimental process.
Keywords: fire, normal strength concrete, rehabilitation, reinforced concrete beam
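The reported capacity and stiffness comparisons reduce to simple ratios over a load-deflection curve: a percentage gain in peak load versus a reference beam, and a secant stiffness over the linear range. A toy sketch with invented numbers (consistent with, but not taken from, the reported 8~10% range):

```python
def capacity_gain_pct(load_kn, reference_kn):
    """Percent change in peak load relative to a reference beam."""
    return 100.0 * (load_kn - reference_kn) / reference_kn

def secant_stiffness(load_kn, deflection_mm):
    """Secant stiffness (kN/mm) over the linear part of a load-deflection curve."""
    return load_kn / deflection_mm

# Invented peak loads: fire-damaged beam 100 kN, rehabilitated beam 109 kN.
gain = capacity_gain_pct(109.0, 100.0)   # 9 % gain from rehabilitation
k_sec = secant_stiffness(50.0, 2.5)      # 20 kN/mm over the elastic range
```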
Procedia PDF Downloads 508
1022 Research Networks and Knowledge Sharing: An Exploratory Study of Aquaculture in Europe
Authors: Zeta Dooly, Aidan Duane
Abstract:
The collaborative European-funded research and development landscape provides prime environmental conditions for multi-disciplinary teams to learn and enhance their knowledge beyond the capability of training and learning within their own organisational cocoons. Whilst the emergence of the academic entrepreneur has changed the focus of educational institutions to that of quasi-businesses, the training and professional development of lecturers and academic staff are often not formalised to the same level as in industry. This research focuses on industry-academic collaborative research funded by the European Commission. The impact of research is scalable if an optimum research network is created and managed effectively. This paper investigates network embeddedness, the nature of relationships, links, and nodes within a research network, and the enhancement of the network's knowledge. The contribution of this paper extends our understanding of establishing and maintaining effective collaborative research networks. The effects of network embeddedness are recognized in the literature as pertinent to innovation and the economy. The network theory literature claims that networks are essential to innovative clusters such as Silicon Valley and to innovation in high-tech industries. This research provides evidence of the impact collaborative research has on disparate individuals' innovative contributions to their organisations and their own professional development. The study adopts a qualitative approach and uncovers some of the challenges of multi-disciplinary research through case study insights. The contribution of this paper recommends the establishment of scaffolding to accommodate cooperation in research networks, role appointment, and addressing contextual complexities early to avoid problem cultivation.
Furthermore, it offers recommendations in relation to network formation, intra-network challenges relating to open data, competition, friendships, and competency enhancement. Network capability is enhanced by the adoption of the relevant theories (network theory, open innovation, and social exchange) with the understanding that the network structure has an impact on innovation and social exchange in research networks. The research concludes that there is an opportunity to deepen our understanding of the impact of network reuse and network hopping, which provides scaffolding for network members to enhance and build upon their knowledge using a progressive approach.
Keywords: research networks, competency building, network theory, case study
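The links and nodes discussed above can be quantified with elementary network measures such as degree centrality (a node's share of possible direct ties). A toy sketch of a small consortium graph, invented for illustration and not data from this study:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality of each node in an undirected graph:
    number of direct ties divided by (n - 1) possible ties."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical consortium: a coordinator tied to four partners,
# two of which also collaborate directly with each other.
edges = [("coord", "p1"), ("coord", "p2"), ("coord", "p3"),
         ("coord", "p4"), ("p1", "p2")]
cent = degree_centrality(edges)   # the coordinator is maximally central
```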
Procedia PDF Downloads 126
1021 Changes in Skin Microbiome Diversity According to the Age of Xian Women
Authors: Hanbyul Kim, Hye-Jin Kin, Taehun Park, Woo Jun Sul, Susun An
Abstract:
The skin is the largest organ of the human body and provides diverse habitats for various microorganisms. The ecology of the skin surface selects distinctive sets of microorganisms and is influenced by both endogenous intrinsic factors and exogenous environmental factors. The diversity of the bacterial community on the skin also depends on multiple host factors: gender, age, health status, and location. Among them, age-related changes in skin structure and function are attributable to combinations of endogenous intrinsic factors and exogenous environmental factors. Skin aging is characterized by a decrease in sweat, sebum, and immune functions, resulting in significant alterations in skin surface physiology, including pH, lipid composition, and sebum secretion. The present study provides comprehensive insight into the variation of the skin microbiota and its correlations with age by analyzing and comparing skin microbiome metagenomes using a next-generation sequencing method. Skin bacterial diversity and composition were characterized and compared between two different age groups of Xian, Chinese women: younger (20-30 y) and older (60-70 y). A total of 73 healthy women met two conditions: (I) living in Xian, China; (II) maintaining healthy skin status during the period of this study. Based on the Ribosomal Database Project (RDP) database, the skin samples of the 73 participants were dominated by ten most abundant genera, including Chryseobacterium, Propionibacterium, Enhydrobacter, and Staphylococcus. Although these genera were the most predominant overall, each genus showed a different proportion in each group. The most dominant genus, Chryseobacterium, was relatively more abundant in the young group than in the old group. Similarly, Propionibacterium and Enhydrobacter occupied a higher proportion of the skin bacterial composition of the young group. Staphylococcus, in contrast, was more abundant in the old group.
Beta diversity, which represents the ratio between regional and local species diversity, differed significantly between the two age groups. Likewise, the Principal Coordinate Analysis (PCoA) values, which represent the phylogenetic distances between samples in a two-dimensional framework derived from the samples' OTU (operational taxonomic unit) values, also showed differences between the two groups. Thus, our data suggest that the composition and diversification of skin microbiomes in adult women are largely affected by chronological and physiological skin aging.
Keywords: next generation sequencing, age, Xian, skin microbiome
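The PCoA ordination mentioned above can be sketched in a few lines: classical PCoA double-centers a squared dissimilarity matrix and takes the leading eigenvectors. The four-sample OTU count matrix below is entirely hypothetical (the study's real data and pipeline are not reproduced), and Bray-Curtis is assumed as the dissimilarity measure.

```python
import numpy as np

def bray_curtis(counts):
    """Pairwise Bray-Curtis dissimilarity for a samples x OTUs count matrix."""
    n = counts.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            num = np.abs(counts[i] - counts[j]).sum()
            den = (counts[i] + counts[j]).sum()
            d[i, j] = num / den if den else 0.0
    return d

def pcoa(d, k=2):
    """Classical PCoA: double-center the squared distance matrix, eigendecompose."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # Gower's centered matrix (symmetric)
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1]           # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    # scale eigenvectors by sqrt of (non-negative) eigenvalues to get coordinates
    return vecs[:, :k] * np.sqrt(np.maximum(vals[:k], 0))

# toy OTU table: two "young-like" and two "old-like" samples (hypothetical counts)
otus = np.array([[40, 30, 20, 5],
                 [38, 28, 22, 6],
                 [10, 8, 15, 50],
                 [12, 9, 14, 48]], dtype=float)
coords = pcoa(bray_curtis(otus))
print(coords.shape)  # (4, 2); the first axis separates the two groups
```

In this toy example the first coordinate axis separates the two pairs of samples, which is the kind of group-level difference the abstract reports between age groups.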
Procedia PDF Downloads 155
1020 Identification of Igneous Intrusions in South Zallah Trough-Sirt Basin
Authors: Mohamed A. Saleem
Abstract:
Using mostly seismic data, this study aims to show examples of igneous intrusions found in some areas of the Sirt Basin and to explore the period of their emplacement as well as the interrelationships between these sills. The study area is located in the south of the Zallah Trough, south-west Sirt Basin, Libya, between longitudes 18.35ᵒ E and 19.35ᵒ E and latitudes 27.8ᵒ N and 28.0ᵒ N. Based on a variety of criteria that are usually used as marks of igneous intrusions, twelve igneous intrusions (sills) have been detected and analysed using 3D seismic data. One or more of the following were used as identification criteria: high-amplitude reflectors paired with abrupt reflector terminations, vertical offsets (described as dike-like connections), violation, saucer shape, and roughness. Because they lie between the hosting layers, the majority of these intrusions are classified as sills. Another distinguishing feature is the intersection geometry linking some of these sills. Each sill has been given a name, S-1, S-2, … S-12, simply to distinguish the sills from one another. To avoid repetition, the common characteristics and some statistics of these sills are shown in summary tables, while the specific characteristics that are not common are described individually for each sill. Sills S-1, S-2, and S-3 are approximately parallel to one another, and their shape is governed by the syncline structure of their host layers. The faults that dominate the pre-Upper Cretaceous strata have a significant impact on the sills, causing their discontinuity, while the upper layers form anticlines. S-1 and S-10 are the group's deepest and shallowest sills, respectively, with S-1 seated near the top of the basement and S-10 extending into the Upper Cretaceous sequence.
The dramatic escalation of sill S-4 can be seen in N-S profiles. The majority of the interpreted sills are influenced by a large number of normal faults that strike in various directions and propagate vertically from the surface to the top of the basement. This indicates that the sediment sequences had already been deposited before the sills' intrusion and that the faults are younger. The pre-Upper Cretaceous unit is the current host for sills S-1, S-2, … S-9, while sills S-10, S-11, and S-12 are hosted by the Cretaceous unit. Over sills S-1, S-2, and S-3, the deepest sills, the pre-Upper Cretaceous surface shows slight forced folding; this forced folding is also noticed above the right and left tips of sills S-8 and S-6, respectively, while the absence of these marks in the overlying sequences supports the idea that the aforementioned sills were emplaced during the early Upper Cretaceous period.
Keywords: Sirt Basin, Zallah Trough, igneous intrusions, seismic data
Procedia PDF Downloads 113
1019 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring
Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield
Abstract:
Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds that are almost exclusively produced by bacteria from diverse genera, including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation, and the fitness of their producers. However, in spite of their ecological importance, their degradation, as part of the fate of phenazines, has so far received only limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L⁻¹ PCA was supplied as the sole carbon, nitrogen, and energy source in minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L⁻¹ (836 µM) PCA at a rate of 17.4 µM h⁻¹, accompanied by a significant cell yield increase from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h⁻¹), pyocyanin (2.72 µM h⁻¹), neutral red (1.30 µM h⁻¹), and 1-hydroxyphenazine (0.55 µM h⁻¹). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL⁻¹, 2.77 times faster than the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL⁻¹), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation.
As analyzed by LC-MS/MS, decarboxylation from the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which catalyzed the conversion of phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after incubation for two days and was then transformed into aniline and catechol. Additionally, genomic and proteomic analyses were carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene is activated and plays an important role.
Keywords: decarboxylation, MFORT 16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2
Procedia PDF Downloads 223
1018 Production of Bio-Composites from Cocoa Pod Husk for Use in Packaging Materials
Authors: L. Kanoksak, N. Sukanya, L. Napatsorn, T. Siriporn
Abstract:
A growing population and demand for packaging are driving up the usage of natural resources as raw materials in the pulp and paper industry. The long-term effects of environmental degradation are disrupting people's way of life across the planet. It is therefore necessary to find pulp sources that can replace wood pulp. Various other plants or plant parts can be employed as substitute raw materials; for example, pulp and paper made from agricultural residues can be used in place of wood pulp. In this study, cocoa pod husks, an agricultural residue of the cocoa and chocolate industries, were used to develop composite materials to replace wood pulp in packaging materials, with the paper coated with polybutylene adipate-co-terephthalate (PBAT). Fresh cocoa pod husks were selected, cleaned, reduced in size, and dried, and their morphology and elemental composition were studied. To evaluate the mechanical and physical properties, the dried cocoa husks were pulped using the soda-pulping process. After selecting the best formulations, paper with a PBAT bioplastic coating was produced on a paper-forming machine, and its physical and mechanical properties were studied. Field Emission Scanning Electron Microscopy with Energy Dispersive X-Ray Spectrometry (FESEM/EDS) revealed the main components of the dried cocoa pod husks; no porous structure was found, and the fibers were firmly bound, making them suitable as a raw material for pulp manufacturing. Dry cocoa pod husks contain the major elements carbon (C) and oxygen (O); magnesium (Mg), potassium (K), and calcium (Ca) were minor elements found at very low levels. Among the soda-pulping formulations, the SAQ5 formula gave suitable pulp yield, moisture content, and water drainage.
To achieve the basis weight specified by the TAPPI T205 sp-02 standard, cocoa pod husk pulp and modified starch were mixed. The paper was then coated with PBAT bioplastic produced from bioplastic resin via the blown film extrusion technique. The coated paper's contact angle, dispersion component, and polar component show that it is an effective hydrophobic material for rigid packaging applications.
Keywords: cocoa pod husks, agricultural residue, composite material, rigid packaging
Procedia PDF Downloads 76
1017 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study
Authors: D. M. Samartsev, A. G. Copping
Abstract:
As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, locations, and projects. The methodology is informed by the design of this framework, taking on the aspects of a systematic review but compressed in time to allow an initial set of data to verify the validity of the framework. Such a framework of quantification enables practical uses such as predicting which tasks in the architectural industry will be automated, as well as making more informed decisions on the subject of automation at multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can be recursively split further as required. Each task is then assigned a series of milestones that allow objective analysis of its automation progress.
By combining these two approaches it is possible to create a data structure that describes how much of the architectural design process is automated. The data gathered in the paper serve the dual purposes of validating the framework and giving insight into the current state of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process had been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years and the majority of tasks in the design process reaching a new automation milestone in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined, as well as further areas of study.
Keywords: analysis, architecture, automation, design process, technology
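The recursive work-breakdown idea above can be sketched as a small aggregation over a task tree: leaves carry a milestone score, branches average their children. The task names, weights, and milestone scores below are invented for illustration; the paper's actual breakdown and milestone definitions are not reproduced here.

```python
def automation_score(task):
    """Recursively aggregate automation milestone scores over a work-breakdown tree.

    Leaves carry a 'milestone' score in [0, 1]; branches return the
    weighted average of their subtasks (default weight 1).
    """
    if "subtasks" not in task:
        return task["milestone"]
    total = sum(t.get("weight", 1) for t in task["subtasks"])
    return sum(t.get("weight", 1) * automation_score(t)
               for t in task["subtasks"]) / total

# Hypothetical breakdown of "design a building" into four leaf tasks
design = {
    "name": "design building",
    "subtasks": [
        {"name": "concept massing", "milestone": 0.6},   # generative tools exist
        {"name": "structural sizing", "milestone": 0.5},
        {"name": "client briefing", "milestone": 0.1},   # largely manual
        {"name": "drawing production", "milestone": 0.7, "weight": 2},
    ],
}
print(round(automation_score(design), 2))  # 0.52
```

Because branches can themselves contain subtasks, the same function handles arbitrarily deep recursive splits of the design process.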
Procedia PDF Downloads 104
1016 Effect of a Synthetic Platinum-Based Complex on Autophagy Induction in Leydig TM3 Cells
Authors: Ezzati Givi M., Hoveizi E., Nezhad Marani N.
Abstract:
Platinum-based anticancer therapeutics are the most widely used drugs in clinical chemotherapy but have major limitations and various side effects in clinical applications. Gonadotoxicity and sterility are among the most common complications for cancer survivors and appear to be drug-specific and dose-related. Therefore, many efforts have been dedicated to discovering new structures of platinum-based anticancer agents with an improved therapeutic index and fewer side effects. In this regard, new Pt(II)-phosphane complexes containing heterocyclic thionate ligands (PCTL) have been synthesized, which show more potent antitumor activities than cisplatin. Cisplatin, the leading metal-based antitumor drug in the field, induces testicular toxicity in Leydig and Sertoli cells, leading to serious side effects such as azoospermia and infertility. Therefore, in the present study, we aimed to investigate the cytotoxic effect of PCTL on mouse TM3 Leydig cells, with particular emphasis on the role of autophagy, in comparison to cisplatin. An MTT assay was performed to evaluate the IC50 of PCTL and to analyze the TM3 Leydig cells' viability. Cell morphology was evaluated via inverted microscope, and morphological changes such as nuclear swelling or autophagic vacuole formation were assessed by DAPI and MDC staining. Testosterone production in the culture medium was measured using an ELISA kit. Finally, the expression of the autophagy-related genes Atg5, Beclin1, and p62 was analyzed by qPCR. Based on the MTT results, the IC50 value of PCTL was 50 μM in TM3 cells, and the cytotoxic effects were dose- and time-dependent. Morphological changes investigated by inverted microscopy, DAPI, and MDC staining showed that the cytotoxic concentration of PCTL was significantly higher than that of cisplatin in the treated TM3 Leydig cells.
The PCR results showed a lack of expression of the p62, Atg5, and Beclin1 genes in TM3 cells treated with PCTL compared to the cisplatin and control groups. It should be noted that a 25 μM PCTL concentration was associated with increased testosterone production and secretion in TM3 cells, which requires further study to explain the possible causes and the molecular mechanisms involved. The results of the study showed that PCTL had less lethal effects on TM3 cells than cisplatin and probably did not induce autophagy in TM3 cells.
Keywords: platinum-based anticancer agents, cisplatin, Leydig TM3 cells, autophagy
Procedia PDF Downloads 128
1015 A Corpus-Based Contrastive Analysis of Directive Speech Act Verbs in English and Chinese Legal Texts
Authors: Wujian Han
Abstract:
In the process of human interaction and communication, speech act verbs are considered to be the most active component and the main means of information transmission, and they are also taken as an indication of the structure of linguistic behavior. The theoretical value and practical significance of such everyday built-in metalanguage have long been recognized. This paper, part of a bigger study, aims to provide useful insights for a more precise and systematic application of speech act verb translation between English and Chinese, especially with regard to the degree to which generic integrity is maintained in the practice of translating legal documents. In this study, the corpus, i.e., Chinese legal texts and their English translations, English legal texts, ordinary Chinese texts, and ordinary English texts, serves as a testing ground for contrastively examining the usage of English and Chinese directive speech act verbs in the legal genre. The scope of this paper is relatively wide and essentially covers all directive speech act verbs used in ordinary English and Chinese, such as order, command, request, prohibit, threaten, advise, warn, and permit. The researcher, combining the corpus methodology with a contrastive perspective, explored a range of characteristics of English and Chinese directive speech act verbs, including their semantic, syntactic, and pragmatic features, and then contrasted them in a structured way. It has been found that there are similarities between English and Chinese directive speech act verbs in the legal genre, such as similar semantic components between English speech act verbs and their translation equivalents in Chinese, and formal and accurate usage of English and Chinese directive speech act verbs in legal contexts. But notable differences have been identified in their usage in the original Chinese and English legal texts, such as valency patterns and frequency of occurrence.
For example, the subjects of some directive speech act verbs are very frequently omitted in Chinese legal texts, but this is not the case in English legal texts. One practicable method to achieve adequacy and conciseness in speech act verb translation from Chinese into English in the legal genre is to supply the omitted subjects, or to repeat the message where there is a discrepancy, and vice versa. In addition, translation effects such as overuse and underuse of certain directive speech act verbs are also found in the translated English texts compared to the original English texts. Legal texts constitute particularly valuable material for the study of speech act verbs. Building up such a contrastive picture of Chinese and English speech act verbs in legal language would yield results of value and interest to legal translators and students of language for legal purposes, and would have practical application to legal translation between English and Chinese.
Keywords: contrastive analysis, corpus-based, directive speech act verbs, legal texts, translation between English and Chinese
Procedia PDF Downloads 499
1014 Wind Energy Harvester Based on Triboelectricity: Large-Scale Energy Nanogenerator
Authors: Aravind Ravichandran, Marc Ramuz, Sylvain Blayac
Abstract:
With the rapid development of wearable electronics and sensor networks, batteries cannot meet sustainable energy requirements due to their limited lifetime, size, and degradation. Ambient energy sources such as wind have been considered attractive due to their abundance, ubiquity, and feasibility in nature. With miniaturization leading to high power and robustness, the triboelectric nanogenerator (TENG) has been conceived as a promising technology for powering small electronics by harvesting mechanical energy. TENG integration in large-scale applications, however, is still unexplored despite its attractive properties. In this work, a state-of-the-art TENG design based on a wind venturi system is demonstrated for use in any complex environment. When wind enters the air gap of the homemade TENG venturi system, a thin flexible polymer repeatedly contacts with and separates from the electrodes. This device structure makes the TENG suitable for large-scale harvesting without massive volume. Multiple stacking not only amplifies the output power but also enables multi-directional wind utilization. The system converts ambient mechanical energy to electricity with a 400 V peak voltage, charging a 1000 mF supercapacitor very rapidly. Its future implementation in an array of applications would aid environmentally friendly clean energy production at large scale, and the proposed design is supported by exhaustive material testing. The relation between the interfacial micro- and nanostructures and the electrical performance enhancement is comparatively studied. Nanostructures are more beneficial for the effective contact area, but they are not suitable for the anti-adhesion property due to their smaller restoring force. Considering these issues, nano-patterning is proposed for further enhancement of the effective contact area.
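For a rough sense of scale, the energy implied by the figures above can be computed from the standard capacitor energy formula E = ½CV². Treating the 400 V peak as the capacitor voltage is a deliberate upper-bound simplification for illustration only; in a real harvester the supercapacitor is charged through rectification to a much lower voltage.

```python
def capacitor_energy(capacitance_f, voltage_v):
    """Energy stored in a capacitor, E = 1/2 * C * V^2, in joules."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Figures from the abstract: 1000 mF (= 1.0 F) supercapacitor, 400 V peak output.
# Upper-bound estimate only; the actual stored energy is far lower in practice.
print(capacitor_energy(1.0, 400.0))  # 80000.0 J
```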
Considering the merits of simple fabrication, outstanding performance, robust characteristics, and low-cost technology, we believe that TENGs can open up great opportunities not only for powering small electronics but also for large-scale energy harvesting through engineering design, complementing solar energy in remote areas.
Keywords: triboelectric nanogenerator, wind energy, vortex design, large scale energy
Procedia PDF Downloads 213
1013 The Importance of Clinical Pharmacy and Computer Aided Drug Design
Authors: Peter Edwar Mortada Nasif
Abstract:
The use of CAD (Computer Aided Design) technology is ubiquitous in the architecture, engineering, and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture schools in Nigeria as an important part of the training module. This article examines the ethical issues involved in implementing CAD content in the architectural education curriculum. Drawing on existing literature, the study begins with the benefits of integrating CAD into architectural education and the responsibilities of different stakeholders in the implementation process. It also examines issues related to the negative use of information technology and the perceived negative impact of CAD use on design creativity. Using a survey method, data from the architecture department of Chukwuemeka Odumegwu Ojukwu University, Uli, were collected to serve as a case study of how the issues raised are being addressed. The article draws conclusions on what ensures successful ethical implementation. Millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is one of the treatment options for patients with hepatitis C, but such treatments have side effects. Our research focused on developing an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific to HCV. Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational techniques to propose a specific antiviral drug for the protein domains found in the hepatitis C virus. The study used homology modeling and ab initio modeling to generate the 3D structures of the proteins and then identified pockets in the proteins.
Acceptable ligands for the pockets were developed using the de novo drug design method, with pocket geometry taken into account when designing the ligands. Among the various ligands generated, a new ligand specific to each of the HCV protein domains has been proposed.
Keywords: drug design, anti-viral drug, in-silico drug design, hepatitis C virus, computer aided design, CAD education, education improvement, small-size contractor, automatic pharmacy, PLC, control system, management system, communication
Procedia PDF Downloads 22
1012 Walking the Talk? Thinking and Acting – Teachers' and Practitioners' Perceptions about Physical Activity, Health and Well-Being, Do They 'Walk the Talk'?
Authors: Kristy Howells, Catherine Meehan
Abstract:
This position paper presents current research findings on the proposed gap between teachers' and practitioners' thinking and acting about physical activity, health, and well-being in childhood. Within the new Primary curriculum, there is a focus on sustained physical activity within Physical Education and on healthy lifestyles in Personal, Health, Social and Emotional lessons, but there is no curriculum guidance about what sustained physical activity is or how it is defined. The current health guidance for birth to five suggests that children should not be inactive for long periods and specifies light and energetic activities; there is a suggested period of time per day for young children to achieve, but the guidance does not specify how this should be measured. The challenge for teachers and practitioners is therefore their own confidence in and understanding of what 'good/moderate intensity' physical activity and healthy living look like for children, and the children's understanding of what they are doing. There is limited research on children from birth to eight years and on the perceptions and attitudes of those who work with this age group; however, it has been found that children can at times identify different levels of activity and can identify healthy foods and good choices for healthy living at a basic level. Authors have also explored teachers' beliefs about teaching and learning and found that teachers could act in accordance with their beliefs about a subject area only when their subject knowledge, understanding, and confidence in that area were high. The confidence and competence of practitioners and teachers to integrate 'well-being' within learning settings have been reported as being low, possibly because their subject knowledge is not high. It has been suggested that children's life chances are improved by focusing on well-being in their earliest years.
This includes working with parents and families and being aware of the environmental contexts that may impact children's well-being. The key is for practitioners and teachers to know how to implement these ideas effectively, as these key workers have a profound effect on young children as role models and because of the proportion of children's waking hours spent with them. This position paper is part of a longitudinal study at Canterbury Christ Church University, and here we share the research findings from the initial questionnaire (online, postal, and in person) that explored and evaluated the knowledge, competence, and confidence levels of practitioners and teachers regarding the structure and planning of sustained physical activity and healthy lifestyles, and how this progresses with the children's age.
Keywords: health, perceptions, physical activity, well-being
Procedia PDF Downloads 403
1011 Enhancing Thai In-Service Science Teachers' Technological Pedagogical Content Knowledge Integrating Local Context and Sufficiency Economy into Science Teaching
Authors: Siriwan Chatmaneerungcharoen
Abstract:
An emerging body of '21st century skills', such as adaptability, complex communication skills, technology skills, and the ability to solve non-routine problems, is valuable across a wide range of jobs in the national economy. Within the Thai context, a focus on the Philosophy of Sufficiency Economy is integrated into science education. Thai science education has advocated infusing 21st century skills and the Philosophy of Sufficiency Economy into the school curriculum, and several educational levels have launched such efforts. Developing science teachers' knowledge is therefore the most important factor in achieving these goals. The purposes of this study were to develop 40 cooperative science teachers' Technological Pedagogical Content Knowledge (TPACK) and to develop a professional development model integrating a co-teaching model and coaching system (Co-TPACK). TPACK is essential to teachers' career development. Forty volunteer in-service teachers who were cooperative science teachers participated in this study for two years. Data sources throughout the research project consisted of teacher reflections, classroom observations, semi-structured interviews, situational interviews, questionnaires, and document analysis. An interpretivist framework was used to analyze the data. Findings indicate that at the beginning, the teachers understood only the meaning of the Philosophy of Sufficiency Economy but did not know how to integrate it into their science classrooms. Mostly, they preferred lecture-based and experimental teaching styles. As the Co-TPACK progressed, the teachers blended their teaching styles and learning evaluation methods.
Co-TPACK consists of three cycles: a student teachers' preparation cycle, a cooperative science teachers cycle, and a collaboration cycle (co-teaching, co-planning, and co-evaluating, with a coaching system). Co-TPACK enables the 40 cooperative science teachers, the student teachers, and the university supervisor to exchange their knowledge and experience of teaching science through many communication channels, including online. They have used more Phuket context-integrated lessons and technology-integrated teaching and learning that make the Philosophy of Sufficiency Economy explicit. Their sustained development is shown in their lesson plans and teaching practices.
Keywords: technological pedagogical content knowledge, philosophy of sufficiency economy, professional development, coaching system
Procedia PDF Downloads 464
1010 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches
Authors: Mariam Matiashvili
Abstract:
Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalizing opinions requires the use of particular syntactic-pragmatic structures: arguments that add credibility to a statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative. Knowing what elements make up an argumentative text in a particular language helps users of that language improve their skills. Natural language processing (NLP) has also become especially relevant recently; in this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. This research deals with the linguistic analysis of the argumentative structures of Georgian political speeches, particularly the linguistic structure, characteristics, and functions of the parts of an argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates. Consequently, the texts are of a dialogical nature, representing a discussion between two or more people (most often between a journalist and a politician).
The research uses the following approaches to identify and analyze the argumentative structures: (1) lexical classification and analysis: identifying lexical items that are relevant in the process of creating argumentative texts and building a lexicon of argumentation (groups of words gathered from a semantic point of view); (2) grammatical analysis and classification: grammatical analysis of the words and phrases identified on the basis of the arguing lexicon; (3) argumentation schemas: describing and identifying the argumentation schemes most likely used in Georgian political speeches. As a final step, the relations between the above components were analyzed. For example, if an identified argument scheme is 'Argument from Analogy', the identified lexical items semantically express analogy too, and they are most likely adverbs in Georgian. As a result, a lexicon was created of the words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.
Keywords: Georgian, argumentation schemas, argumentation structures, argumentation lexicon
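A lexicon-driven analysis of the kind described can be sketched as simple cue matching: each argumentative role (claim, support, attack) has a set of cue words, and a sentence is labelled with every role whose cues it contains. The cue words below are invented English stand-ins for illustration, not the Georgian argumentation lexicon compiled in the study.

```python
# Illustrative cue lexicon; roles follow the claim/support/attack split above.
ARGUING_LEXICON = {
    "claim": {"must", "should", "clearly", "undoubtedly"},
    "support": {"because", "since", "therefore"},
    "attack": {"however", "but", "although"},
}

def label_sentence(sentence):
    """Return the sorted argumentative roles whose cue words occur in the sentence."""
    tokens = set(sentence.lower().split())
    return sorted(role for role, cues in ARGUING_LEXICON.items() if tokens & cues)

print(label_sentence("We must act now because the budget clearly fails"))
# ['claim', 'support']
```

A real pipeline would add morphological analysis before matching, which matters for Georgian, a highly inflected language; bare token matching is only the first step.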
Procedia PDF Downloads 70
1009 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. Because 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression during manufacture changes the preform's thickness and architecture, which can often lead to under-performance of the resulting 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. The focus of this study is therefore to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp (fibre waviness) and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms with varying binder style, pick density, thickness and tow size were modelled and their compression simulated in WiseTex. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared with the WiseTex models to determine the accuracy of the predictions and to identify architecture parameters that affect preform compressibility and stability.
Results indicate that binder style/pick density, tow size and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to the preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably well with the experimental results; however, deviation is evident due to assumptions present within the models.
Keywords: 3D woven composites, compression, preforms, textile composites
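Dry-fabric compression behaviour of the kind measured above is often summarized with an empirical power-law fit of thickness against pressure, t = a * P^(-b). The sketch below fits such a model by least squares in log-log space; the pressure and thickness values are synthetic placeholders for illustration, not measurements from this study:

```python
# Illustrative sketch: power-law fit t = a * P**(-b) to dry compression data.
# Data points are synthetic placeholders, not the study's measurements.
import math

pressure_kpa = [10.0, 25.0, 50.0, 100.0, 200.0]   # applied pressure
thickness_mm = [5.2, 4.6, 4.1, 3.7, 3.3]          # measured thickness

# Linearize: log t = log a - b * log P, then ordinary least squares.
xs = [math.log(p) for p in pressure_kpa]
ys = [math.log(t) for t in thickness_mm]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
a, b = math.exp(intercept), -slope   # b > 0 means thickness drops with pressure

def predicted_thickness(p_kpa: float) -> float:
    return a * p_kpa ** (-b)
```

The fitted exponent b characterizes how readily a given architecture compresses and could be compared across binder styles and pick densities.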
Procedia PDF Downloads 135
1008 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory
Authors: Liqin Zhang, Liang Yan
Abstract:
This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered a single agent connected through a fixed, undirected network. The paper improves the control protocol in three respects. First, to improve both tracking and synchronization performance, a distributed leader-following method is presented. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into groups according to speed weights; through optimization of the control parameters, the synchronization error and tracking error can be regulated and decoupled to some extent. Second, simplified models such as single-integrator and double-integrator dynamics are unrealistic in practical engineering, and previous algorithms require the leader's acceleration to be available to all followers when the leader has a varying velocity, which is also difficult to realize. The method therefore adopts an observer-based variable structure algorithm for consensus tracking that dispenses with the leader's acceleration; the presented scheme optimizes synchronization performance and provides satisfactory robustness. Third, although existing algorithms can obtain a stable synchronous system, the obtained system may encounter disturbances that destroy synchronization. To address this challenging problem, a state-dependent switching approach is introduced. In the presence of unmeasured angular speeds and unknown failures, the paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors.
The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the multi-motor system are proved. Simulation results show that all followers asymptotically converge to a consistent state even when one follower fails to track the virtual leader during a sufficiently large disturbance, illustrating the synchronization accuracy of the proposed control.
Keywords: consensus control, distributed following, fault-tolerant control, multi-motor system, speed synchronization
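A minimal sketch of the leader-following consensus idea underlying such protocols, assuming simple single-integrator speed dynamics (which, as noted above, the paper itself moves beyond), a hypothetical three-motor network, and illustrative gains; this is not the paper's controller:

```python
# Illustrative sketch: discrete-time leader-following speed consensus over a
# fixed undirected network. Topology, gains and speeds are hypothetical.

# Adjacency matrix of a fully connected 3-motor undirected network.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
b = [1, 0, 0]          # only motor 0 measures the leader directly (pinning)
leader_speed = 100.0   # rad/s, constant reference
speeds = [80.0, 120.0, 95.0]
gain, dt = 0.5, 0.1    # control gain and integration step

for _ in range(200):
    updates = []
    for i in range(3):
        # Disagreement with neighbours plus (pinned) error to the leader.
        u = sum(A[i][j] * (speeds[j] - speeds[i]) for j in range(3))
        u += b[i] * (leader_speed - speeds[i])
        updates.append(gain * u)
    speeds = [s + dt * du for s, du in zip(speeds, updates)]
```

Under this protocol the speed errors decay at rates set by the eigenvalues of the pinned graph Laplacian, so all motors converge to the leader's speed.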
Procedia PDF Downloads 125
1007 Altering Surface Properties of Magnetic Nanoparticles with Single-Step Surface Modification with Various Surface Active Agents
Authors: Krupali Mehta, Sandip Bhatt, Umesh Trivedi, Bhavesh Bharatiya, Mukesh Ranjan, Atindra D. Shukla
Abstract:
Owing to the dominating surface forces and large-scale surface interactions, nano-scale particles face difficulties in staying suspended in various media. Magnetic nanoparticles of iron oxide offer a great deal of promise due to their ease of preparation, reasonable magnetic properties, low cost and environmental compatibility. We intend to modify the surface of magnetic Fe₂O₃ nanoparticles with selected surface-modifying agents using simple and effective single-step chemical reactions in order to enhance the dispersibility of the magnetic nanoparticles in non-polar media. The magnetic particles were prepared by hydrolysis of Fe²⁺/Fe³⁺ chlorides and their subsequent oxidation in aqueous medium. The dried particles were then treated separately in ethanol with octadecyl quaternary ammonium silane (Terrasil™), stearic acid and the gallic acid ester of stearyl alcohol to yield S-2 to S-4, respectively. The untreated Fe₂O₃ was designated S-1. The surface-modified nanoparticles were then analysed with Dynamic Light Scattering (DLS), Fourier Transform Infrared spectroscopy (FTIR), X-Ray Diffraction (XRD), Thermogravimetric Analysis (TGA) and Scanning Electron Microscopy with Energy-Dispersive X-ray Analysis (SEM-EDAX). Characterization reveals particle sizes averaging 20-50 nm both with and without modification. However, the crystallite size in all cases remained ~7.0 nm, with the diffractogram matching the Fe₂O₃ crystal structure. FTIR suggested the presence of surfactants on the nanoparticles' surface, which was also confirmed by SEM-EDAX elemental mapping. TGA indicated weight losses in S-2 to S-4 from 300°C onwards, suggesting the presence of organic moieties. The hydrophobic character of the modified surfaces was confirmed by contact angle analysis: all modified nanoparticles were strongly hydrophobic, with average contact angles of ~129° for S-2 and ~139.5° for S-3, while S-4 reached superhydrophobicity at ~151°.
This indicates that the surface-modified particles are strongly water-repellent and easily dispersible in non-polar media. These modified particles could be ideal candidates for suspension in oil-based fluids, polymer matrices, etc. We are pursuing detailed suspension/sedimentation studies of these particles in various oils to establish this conjecture.
Keywords: iron nanoparticles, modification, hydrophobic, dispersion
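As an aside on the XRD characterization, a crystallite size of the order reported above (~7 nm) follows from the Scherrer equation, D = Kλ / (β cos θ). The peak position and width below are hypothetical values chosen for illustration, not the study's measured diffraction data:

```python
# Illustrative sketch: Scherrer estimate of crystallite size from an XRD peak.
# The peak parameters are hypothetical, chosen so the estimate lands near the
# ~7 nm reported for these Fe2O3 particles.
import math

K = 0.9                  # shape factor (dimensionless)
wavelength_nm = 0.15406  # Cu K-alpha wavelength
fwhm_deg = 1.2           # peak full width at half maximum (2-theta, degrees)
two_theta_deg = 35.6     # peak position (2-theta, degrees)

beta = math.radians(fwhm_deg)            # FWHM in radians
theta = math.radians(two_theta_deg / 2)  # Bragg angle
crystallite_nm = K * wavelength_nm / (beta * math.cos(theta))
```

Because peak broadening scales inversely with crystallite size, the ~7 nm value being unchanged across S-1 to S-4 is consistent with the surface treatments leaving the Fe₂O₃ core intact.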
Procedia PDF Downloads 141
1006 Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships. It has been slower than other industries to adopt the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising management tool for construction operations, improving the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling consists of conceptual or process models, which discuss general management frameworks and do not relate to acknowledged soft OR methods. We focus particularly on model-based quantitative research and categorize CSCM models by scope, mathematical formulation, structure, objectives, solution approach, software used and decision level. Although research papers on quantitative CSC models have clearly increased over the last few years, the relevant literature remains very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system.
Thus, some complex interdependencies within construction are neglected, and the implementation of integrated supply chain management is hindered. We conclude by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM in other industries have to be taken into account when modeling CSCs, but not without reforming generic concepts to match the unique characteristics of the construction industry.
Keywords: construction supply chain management, modeling, operations research, optimization, simulation
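To make the kind of quantitative CSC formulation discussed above concrete, the sketch below states a toy material-allocation model with supply, demand and transport-cost data (all numbers hypothetical) and solves it by exhaustive search; a real model of this type would be formulated as a linear or mixed-integer program and handed to a solver:

```python
# Illustrative sketch: a toy supplier-to-site allocation model of the kind
# quantitative CSC formulations use. Two suppliers deliver material (units)
# to two construction sites; total transport cost is minimized subject to
# supply and demand constraints. All numbers are hypothetical, and the
# brute-force search stands in for an LP/MIP solver.
from itertools import product

supply = [30, 25]       # supplier capacities (units)
demand = [20, 35]       # site requirements (units)
cost = [[4, 6],         # cost[s][d]: supplier s -> site d, per unit
        [5, 3]]

best_plan, best_cost = None, float("inf")
# x[s][d] = units shipped from supplier s to site d
for x00, x01, x10, x11 in product(range(31), range(31), range(26), range(26)):
    x = [[x00, x01], [x10, x11]]
    feasible = (x00 + x01 <= supply[0] and x10 + x11 <= supply[1]
                and x00 + x10 == demand[0] and x01 + x11 == demand[1])
    if feasible:
        c = sum(cost[s][d] * x[s][d] for s in range(2) for d in range(2))
        if c < best_cost:
            best_plan, best_cost = x, c
```

Even this toy instance shows the structure the review calls for: explicit decision variables, constraints and an objective, which system-wide CSC models extend with time periods, multiple materials and uncertainty.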
Procedia PDF Downloads 503
1005 Using Nature-Based Solutions to Decarbonize Buildings in Canadian Cities
Authors: Zahra Jandaghian, Mehdi Ghobadi, Michal Bartko, Alex Hayes, Marianne Armstrong, Alexandra Thompson, Michael Lacasse
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) report states the urgent need to cut greenhouse gas emissions to avoid the adverse impacts of climatic changes. The United Nations has forecast that nearly 70 percent of people will live in urban areas by 2050, resulting in a doubling of the global building stock. Given that buildings are currently recognised as emitting 40 percent of global carbon emissions, there is an urgent incentive to decarbonize existing buildings and to build net-zero carbon buildings. Attaining net-zero carbon emissions in future communities requires action in two directions: (i) reduction of emissions, and (ii) removal of on-going emissions from the atmosphere once decarbonization measures have been implemented. Nature-based solutions (NBS) have a significant role to play in achieving net-zero carbon communities, spanning both emission reductions and removal of on-going emissions. NBS for the decarbonization of buildings can be achieved by using green roofs and green walls, increasing vertical and horizontal vegetation on building envelopes, and by using nature-based materials that either emit less heat to the atmosphere, thus decreasing photochemical reaction rates, or store substantial amounts of carbon within their structure over the whole building service life. The NBS approach can also mitigate urban flooding and overheating, improve urban climate and air quality, and provide better living conditions for the urban population. For existing buildings, decarbonization mostly requires retrofitting existing envelopes efficiently to use NBS techniques, whereas for future construction, it involves designing new buildings with low-carbon materials as well as with the integrity and system capacity to effectively employ NBS. This paper presents the opportunities and challenges with respect to the decarbonization of buildings using NBS for both building retrofits and new construction.
This review documents the effectiveness of NBS in decarbonizing Canadian buildings, identifies the missing links for implementing these techniques in cold climatic conditions, and determines a road map and immediate approaches to mitigate the adverse impacts of climate change, such as the urban heat island effect. Recommendations are drafted for possible inclusion in the Canadian building and energy codes.
Keywords: decarbonization, nature-based solutions, GHG emissions, greenery enhancement, buildings
Procedia PDF Downloads 93
1004 The Extension of the Kano Model by the Concept of Over-Service
Authors: Lou-Hon Sun, Yu-Ming Chiu, Chen-Wei Tao, Chia-Yun Tsai
Abstract:
It is common practice for many companies to ask employees to provide heart-touching service for customers and to emphasize an attitude of 'customer first'. However, such services may not necessarily gain praise and may actually be considered excessive if customers do not appreciate these behaviors. In reality, many restaurant businesses try to provide as much service as possible without considering whether over-provision may lead to negative customer reception. A survey of 894 people in Britain revealed that 49 percent of respondents consider over-attentive waiters the most annoying aspect of dining out. Merely aiming to exceed customers' expectations without actually addressing their needs only further distances and dissociates the standard of service from the goal of customer satisfaction itself. Over-service is defined as 'service provided that exceeds customer expectations, or that customers simply deem redundant, resulting in negative perception'. Customers' reactions and complaints concerning over-service have been found to be less intense than those against service failures caused by the inability to meet expectations; consequently, it is more difficult for managers to become aware of the existence of over-service. The ability to manage over-service behaviors is thus a significant topic for consideration. The Kano model classifies customer preferences into five categories: attractive, one-dimensional, must-be, indifferent and reverse quality attributes. The model remains very popular among researchers exploring quality attributes and customer satisfaction. Nevertheless, several studies have indicated that Kano's model cannot fully capture the nature of service quality. The concept of over-service can be used to restructure the model and provide a better understanding of the service quality construct.
In this research, the structure of Kano's two-dimensional questionnaire will be used to classify the factors into different dimensions. The same questions will be used in a second questionnaire to identify the over-service experiences of the respondents. The findings of these two questionnaires will be used to analyze the relationship between service quality classification and over-service behaviors. The subjects of this research are customers of fine-dining chain restaurants. Three hundred questionnaires will be issued based on the stratified random sampling method. Items for measurement will be derived from the DINESERV scale; the tangible dimension of the questionnaire will be eliminated because this research focuses on employee behaviors. Quality attributes of the Kano model are often regarded as an instrument for improving customer satisfaction, and the extension of the model will not only develop a better understanding of customer needs and expectations but also enhance the management of service quality.
Keywords: consumer satisfaction, DINESERV, Kano model, over-service
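The classification step of Kano's two-dimensional questionnaire can be sketched with the standard Kano evaluation table, which maps a respondent's paired functional/dysfunctional answers to one of the quality categories named above (the example response is hypothetical):

```python
# Illustrative sketch: classifying an attribute with the standard Kano
# evaluation table from paired functional/dysfunctional answers.
# Answer codes: L=like, M=must-be, N=neutral, W=live-with, D=dislike.

KANO_TABLE = {
    ("L", "D"): "one-dimensional",
    ("L", "M"): "attractive", ("L", "N"): "attractive", ("L", "W"): "attractive",
    ("M", "D"): "must-be", ("N", "D"): "must-be", ("W", "D"): "must-be",
    ("L", "L"): "questionable", ("D", "D"): "questionable",
    ("D", "L"): "reverse", ("D", "M"): "reverse", ("D", "N"): "reverse",
    ("D", "W"): "reverse", ("M", "L"): "reverse", ("N", "L"): "reverse",
    ("W", "L"): "reverse",
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano quality category."""
    return KANO_TABLE.get((functional, dysfunctional), "indifferent")

# A respondent who likes the attribute when present and dislikes its absence:
category = classify("L", "D")
```

Pairing this table with a parallel over-service questionnaire, as proposed here, would let each attribute carry both a Kano category and an over-service flag.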
Procedia PDF Downloads 161
1003 Microscopic Insights into Water Transport Through a Biomimetic Artificial Water Nano-Channels-Polyamide Membrane
Authors: Aziz Ghoufi, Ayman Kanaan
Abstract:
Clean water is essential to everything from drinking to agriculture and from energy supply to industrial manufacturing. Since conventional water sources are becoming increasingly scarce, the development of new technologies for water supply is crucial to address the world's clean water needs in the 21st century. Desalination is in many regards the most promising approach to long-term water supply, since it potentially delivers an unlimited source of fresh water. Seawater desalination using reverse osmosis (RO) membranes has become over the past decade a standard approach to producing fresh water. While this technology has proven efficient, it remains relatively costly in terms of energy input due to the use of high-pressure pumps, a consequence of the low water permeation through polymeric RO membranes. Recently, water channels incorporated in lipidic and polymeric membranes were demonstrated to provide selective water translocation that breaks the permeability-selectivity trade-off. Biomimetic Artificial Water Channels (AWCs) are becoming highly attractive systems for achieving selective transport of water. The first AWCs, formed from imidazole quartets (I-quartets) embedded in lipidic membranes, exhibited an ion selectivity higher than that of aquaporins (AQPs), albeit with lower water flow performance. Recently, pioneering work in this field led to the fabrication of the first AWC@Polyamide (PA) composite membrane with outstanding desalination performance. However, the microscopic desalination mechanism at play is still unknown, and understanding it is the shortest route to the long-term conception and design of AWC@PA composite membranes with better performance. In this work, we use advanced molecular simulations to gain an unprecedented fundamental understanding and rationalization of the nanostructuration of AWC@PA membranes and of the microscopic mechanism at the origin of their water transport performance.
Using osmotic molecular dynamics simulations and a non-equilibrium method with water-slab control, we demonstrate an increase in porosity near the AWC@PA interfaces, enhancing water transport without compromising the rejection rate. The water transport pathways exhibit a single-file structure connected by hydrogen bonds. Finally, by comparing AWC@PA and PA membranes, we show that the difference in water flux aligns well with experimental results, validating the model used.
Keywords: water desalination, biomimetic membranes, molecular simulation, nanochannels
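For background, the macroscopic RO water flux that such simulations aim to explain at the molecular scale is commonly estimated with the solution-diffusion relation Jw = A * (dP - dPi), using the van 't Hoff expression for the osmotic pressure. The membrane permeability and operating pressure below are assumed illustrative values, not results from this work:

```python
# Illustrative sketch: solution-diffusion estimate of RO water flux,
# Jw = A * (dP - dPi), with van 't Hoff osmotic pressure for a
# seawater-like feed. Permeability and pressure are assumed values.

R = 8.314        # gas constant, J/(mol K)
T = 298.15       # temperature, K
c_ions = 600.0   # total dissolved ion concentration, mol/m^3 (~0.3 M NaCl)

osmotic_pa = c_ions * R * T          # van 't Hoff: pi = c R T
applied_pa = 5.5e6                   # 55 bar applied transmembrane pressure
A = 1.0e-12                          # membrane permeability, m/(s Pa), assumed

flux_m_per_s = A * (applied_pa - osmotic_pa)   # net driving force times A
```

Raising the effective permeability A without degrading salt rejection, as the AWC@PA interfaces appear to do, directly reduces the pressure (and hence energy) needed for a given flux.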
Procedia PDF Downloads 17