Search results for: non-equilibrium Green’s function
130 Anti-tuberculosis, Resistance Modulatory, Anti-pulmonary Fibrosis and Anti-silicosis Effects of Crinum Asiaticum Bulbs and Its Active Metabolite, Betulin
Authors: Theophilus Asante, Comfort Nyarko, Daniel Antwi
Abstract:
Drug-resistant tuberculosis, together with associated comorbidities such as pulmonary fibrosis and silicosis, is one of the most serious global public health threats and requires immediate action to curb or mitigate it. It prolongs hospital stays, increases the cost of medication, and adds to the death toll recorded annually. Crinum asiaticum bulb extract (CAE) and betulin (BET) are known for their biological and pharmacological effects. Pharmacological effects reported for CAE include antimicrobial, anti-inflammatory, antipyretic, analgesic, and anticancer activities. Betulin has exhibited a multitude of powerful pharmacological properties, including antitumor, anti-inflammatory, anti-parasitic, anti-microbial, and anti-viral activities. This work sought to investigate the anti-tuberculosis and resistance-modulatory effects of CAE and BET and to assess their ability to mitigate pulmonary fibrosis and silicosis. In the anti-tuberculosis and resistance-modulation assays, both CAE and BET showed strong antimicrobial activity (31.25 ≤ MIC ≤ 500 µg/ml) against the studied microorganisms, produced significant efflux-pump and biofilm inhibitory effects (p < 0.0001), and exhibited resistance-modulatory and synergistic effects when combined with standard antibiotics. Crinum asiaticum bulb extract and betulin were also shown to possess anti-pulmonary fibrosis effects. There was an increased survival rate in the CAE and BET treatment groups compared to the bleomycin (BLM)-induced group. There was a marked decrease in the levels of hydroxyproline and collagen I and III in the CAE and BET treatment groups compared to the BLM-treated group. CAE and BET treatment significantly downregulated pro-fibrotic and pro-inflammatory cytokines such as TGF-β1, MMP9, IL-6, IL-1β and TNF-α, which were elevated in the BLM-treated groups. The histological findings of the lungs suggested the curative effects of CAE and BET following BLM-induced pulmonary fibrosis in mice. The study showed improved lung function with wide focal areas of viable alveolar spaces and little collagen fiber deposition in the lungs of the treatment groups. Regarding the anti-silicosis and pulmonoprotective effects of CAE and BET, the levels of NF-κB, TNF-α, IL-1β, IL-6, hydroxyproline and collagen types I and III were significantly reduced by CAE and BET (p < 0.0001). Both CAE and BET significantly (p < 0.0001) reduced the levels of hydroxyproline and collagen I and III when compared with the negative control group. CAE and BET also significantly reduced BALF biomarkers such as macrophages, lymphocytes, monocytes, and neutrophils (p < 0.0001). CAE and BET were examined for antioxidant activity and shown to raise the levels of catalase (CAT) and superoxide dismutase (SOD) while lowering the level of malondialdehyde (MDA). There was an improvement in lung function when lung tissues were examined histologically. Crinum asiaticum bulb extract and betulin were found to exhibit anti-tubercular and resistance-modulatory properties, as well as the capacity to minimize TB comorbidities such as pulmonary fibrosis and silicosis. In addition, CAE and BET may act protectively, facilitating the preservation of the lung's physiological integrity. 
The outcomes of this study might pave the way for the development of leads for producing single medications for the management of drug-resistant tuberculosis and its accompanying comorbidities.
Keywords: fibrosis, crinum, tuberculosis, anti-inflammation, drug resistance
Procedia PDF Downloads 83
129 Impact of Urban Migration on Caste: Rohinton Mistry’s A Fine Balance and Rural-to-Urban Caste Migration in India
Authors: Mohua Dutta
Abstract:
The primary aim of this research paper is to investigate the forced urban migration of Dalits in India who are fleeing caste persecution in rural areas. This paper examines the relationship between caste and rural-to-urban internal migration in India using a literary text, Rohinton Mistry’s A Fine Balance, highlighting the challenges faced by Dalits in rural areas that force them to migrate to urban areas. Despite the prevalence of such discussions in Dalit autobiographies written in vernacular languages, there is a lack of discussion regarding caste migration in Indian English Literature, including this present text, as evidenced by the existing critical interpretations of the novel, which this paper seeks to rectify. The primary research question is how urban migration affects caste system in India and why rural-to-urban caste migration occurs. The purpose of this paper is to better understand the reasons for Dalit migration, the challenges they face in rural and urban areas, and the lingering influence of caste in both rural and urban areas. The study reveals that the promise of mobility and emancipation provided by class operations drives rural-to-urban caste migration in India, but it also reveals that caste marginalization in rural areas is closely linked to class marginalization and other forms of subalternity in urban areas. Moreover, the caste system persists in urban areas as well, making Dalit migrants more vulnerable to social, political, and economic discrimination. The reason for this is that, despite changes in profession and urban migration, the trapped structure of caste capital and family networks exposes migrants to caste and class oppressions. To reach its conclusion, this study employs a variety of methodologies. Discourse analysis is used to investigate the current debates and narratives surrounding caste migration. Critical race theory, specifically intersectional theory and social constructivism, aids in comprehending the complexities of caste, class, and migration. Mistry's novel is subjected to textual analysis in order to identify and interpret references to caste migration. Secondary data, such as theoretical understanding of the caste system in operation and scholarly works on caste migration, are also used to support and strengthen the findings and arguments presented in the paper. The study concludes that rural-to-urban caste migration in India is primarily motivated by the promise of socioeconomic mobility and emancipation offered by urban spaces. However, the caste system persists in urban areas, resulting in the continued marginalisation and discrimination of Dalit migrants. The study also highlights the limitations of urban migration in providing true emancipation for Dalit migrants, as they remain trapped within caste and family network structures. Overall, the study raises awareness of the complexities surrounding caste migration and its impact on the lives of India's marginalised communities. This study contributes to the field of Migration Studies by shedding light on an often-overlooked issue: Dalit migration. It challenges existing literary critical interpretations by emphasising the significance of caste migration in Indian English Literature. 
The study also emphasises the interconnectedness of caste and class, broadening understanding of how these systems function in both rural and urban areas.Keywords: rural-to-urban caste migration in india, internal migration in india, caste system in india, dalit movement in india, rooster coop of caste and class, urban poor as subalterns
Procedia PDF Downloads 72
128 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass
Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski
Abstract:
The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of substrate aimed at disintegration of the solid phase and destruction of substrate tissues and cells. The most frequently applied criterion of disintegration outcomes evaluation is the increase in biogas recovery owing to the possibility of its use for energetic purposes and, simultaneously, recovery of input energy consumed for the pre-treatment of substrate before fermentation. Hydrodynamic cavitation is one of the methods for organic substrate disintegration that has a high implementation potential. Cavitation is explained as the phenomenon of the formation of discontinuity cavities filled with vapor or gas in a liquid induced by pressure drop to the critical value. It is induced by a varying field of pressures. A void needs to occur in the flow in which the pressure first drops to the value close to the pressure of saturated vapor and then increases. The process of cavitation conducted under controlled conditions was found to significantly improve the effectiveness of anaerobic conversion of organic substrates having various characteristics. This phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and release of organic compounds to the dissolved phase has a direct effect on the intensification of biogas production in the process of anaerobic fermentation, on reduced dry matter content in the post-fermentation sludge as well as a high degree of its hygienization and its increased susceptibility to dehydration. A device the efficiency of which was confirmed both in laboratory conditions and in systems operating in the technical scale is a hydrodynamic generator of cavitation. Cavitators, agitators and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand. Many of them proved effective under laboratory conditions but failed under industrial ones. The only task successfully realized by these appliances and utilized on a wider scale is the heating of liquids. For this reason, their usability was limited to the function of heating installations. Design of the presented cavitation generator allows achieving satisfactory energy efficiency and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations conducted on the laboratory and industrial scale confirmed the effectiveness of applying cavitation in the process of biomass destruction. The use of the cavitation generator in laboratory studies for disintegration of sewage sludge allowed increasing biogas production by ca. 30% and shortening the treatment process by ca. 20 - 25%. The shortening of the technological process and increase of wastewater treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and application of repeated cavitation process (4-6 times) enables significant acceleration of the biogassing process. In addition, mechanical cavitation accelerates increases in COD and VFA levels.Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals
Procedia PDF Downloads 435
127 Effects on Inflammatory Biomarkers and Respiratory Mechanics in Laparoscopic Bariatric Surgery: Desflurane vs. Total Intravenous Anaesthesia with Propofol
Authors: L. Kashyap, S. Jha, D. Shende, V. K. Mohan, P. Khanna, A. Aravindan, S. Kashyap, L. Singh, S. Aggarwal
Abstract:
Obesity is associated with a chronic inflammatory state. During surgery, there is an interplay between anaesthetic and surgical stress and the already present complex immune state. Moreover, the postoperative period is dictated by inflammation, which is crucial for wound healing and regeneration. An excessive inflammatory response might hamper recovery besides increasing the risk of infection and complications. There is definite evidence of the immunosuppressive role of inhaled anaesthetic agents. This immune modulation may be brought into effect directly by influencing the cells of innate and adaptive immunity. The effects of propofol on immune mechanisms have been widely elucidated because of its popularity: it reduces superoxide generation, elastase release, and chemotaxis. However, there is no unequivocal proof of one agent's superiority over the other. Hence, an anaesthetic regimen with lower inflammatory potential and specific to the obese patient is needed. The ongoing OBESITA trial protocol (2019) by Sousa and co-workers aims to test the hypothesis that anaesthesia with sevoflurane results in a weaker proinflammatory response than propofol, as evidenced by lower IL-6 and other biomarkers and increased macrophage differentiation into the M2 phenotype in adipose tissue. IL-6 was used as the objective parameter to evaluate inflammation, as it is regulated by both surgery and anaesthesia. It is the most sensitive marker of the inflammatory response to tissue damage, since it is released within minutes by blood leukocytes. We hypothesized that maintenance of anaesthesia with propofol would lead to less inflammation than that with desflurane. Aims: The effect of two anaesthetic techniques, total intravenous anaesthesia (TIVA) with propofol and desflurane, on the surgical stress response was evaluated. The primary objective was to compare serum interleukin-6 (IL-6) levels before and after surgery. Methods: In this prospective, single-blinded, randomized controlled trial, 30 obese patients (BMI > 30 kg/m²) undergoing laparoscopic bariatric surgery under general anaesthesia were recruited. Patients were randomized to receive desflurane or TIVA using a target-controlled infusion for maintenance of anaesthesia. As a marker of inflammation, pre- and post-surgery IL-6 levels were compared. Results: After surgery, IL-6 levels increased significantly in both groups. The rise in IL-6 was smaller with TIVA than with desflurane; however, the difference did not reach significance. The post-surgery rise in IL-6 correlated positively with the complexity of the procedure and the duration of surgery and anaesthesia rather than with the anaesthetic technique. The groups did not differ in terms of intra-operative hemodynamic and respiratory variables, time to awakening, postoperative pulmonary complications, or duration of hospital stay. The incidence of nausea was significantly higher with desflurane than with TIVA. Conclusion: The inflammatory response did not differ as a function of anaesthetic technique when propofol and desflurane were compared. Moreover, patient and surgical variables dictated postoperative inflammation more than anaesthetic factors. A larger sample size is needed to confirm or refute these findings.
Keywords: bariatric, biomarkers, inflammation, laparoscopy
Procedia PDF Downloads 123
126 Relationship Between Brain Entropy Patterns Estimated by Resting State fMRI and Child Behaviour
Authors: Sonia Boscenco, Zihan Wang, Euclides José de Mendoça Filho, João Paulo Hoppe, Irina Pokhvisneva, Geoffrey B.C. Hall, Michael J. Meaney, Patricia Pelufo Silveira
Abstract:
Entropy can be described as a measure of the number of states of a system, and when used in the context of physiological time-based signals, it serves as a measure of complexity. In functional connectivity data, entropy can account for the moment-to-moment variability that is neglected in traditional functional magnetic resonance imaging (fMRI) analyses. While brain fMRI resting state entropy has been associated with some pathological conditions like schizophrenia, no investigations have explored the association between brain entropy measures and individual differences in child behavior in healthy children. We describe a novel exploratory approach to evaluate resting state brain fMRI data in two child cohorts, MAVAN (N = 54, 4.5 years, 48% males) and GUSTO (N = 206, 4.5 years, 48% males), and its associations with child behavior, which can be used in future research in the context of child exposures and long-term health. Following rs-fMRI data pre-processing, Shannon entropy was calculated across 32 network regions of interest to obtain 496 unique functional connections, and partial correlation analysis adjusted for sex was performed to identify associations between entropy data and Strengths and Difficulties Questionnaire scores in MAVAN and Child Behavior Checklist domains in GUSTO. Significance was set at p < 0.01, and we found eight significant associations in GUSTO. Negative associations were found between oppositional defiant problems and two frontoparietal-cerebellar posterior connections (r = -0.212, p = 0.006 and r = -0.200, p = 0.009). Positive associations were identified between somatic complaints and four default mode connections: salience insula (r = 0.202, p < 0.01), dorsal attention intraparietal sulcus (r = 0.231, p = 0.003), language inferior frontal gyrus (r = 0.207, p = 0.008) and language posterior superior temporal gyrus (r = 0.210, p = 0.008). Positive associations were also found between an insula-frontoparietal connection and attention deficit/hyperactivity problems (r = 0.200, p < 0.01), and between an insula-default mode connection and pervasive developmental problems (r = 0.210, p = 0.007). In MAVAN, ten significant associations were identified. Two positive associations were found with prosocial scores: the salience prefrontal cortex-dorsal attention connection (r = 0.474, p = 0.005) and the salience supramarginal gyrus-dorsal attention intraparietal sulcus connection (r = 0.447, p = 0.008). The insula-prefrontal connection was negatively associated with peer problems (r = -0.437, p < 0.01). Conduct problems were negatively associated with six separate connections, including the left salience insula and right salience insula (r = -0.449, p = 0.008), the left salience insula and right salience supramarginal gyrus (r = -0.512, p = 0.002), the default mode and visual network (r = -0.444, p = 0.009), the dorsal attention and language network (r = -0.490, p = 0.003), and the default mode and posterior parietal cortex (r = -0.546, p = 0.001). Entropy measures of resting state functional connectivity can thus be used to identify individual differences in brain function that are correlated with variation in behavioral problems in healthy children. Further studies applying this marker in the context of environmental exposures are warranted.
Keywords: child behaviour, functional connectivity, imaging, Shannon entropy
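To make the two computational steps named above concrete, the Python sketch below computes the Shannon entropy of a functional connection and a sex-adjusted partial correlation with a behavior score. It is a hedged illustration only: the abstract does not specify how each of the 496 connection signals is derived, so each connection is represented here by a sliding-window correlation series between two ROI time courses, and the binning, window length and toy data are likewise assumptions.

```python
import numpy as np
from scipy import stats
from itertools import combinations

def shannon_entropy(signal, bins=16):
    """Shannon entropy (bits) of a 1-D signal after histogram binning."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def connection_entropies(roi_timeseries, window=30):
    """Entropy of sliding-window correlations for every ROI pair (32 ROIs -> 496 connections)."""
    n_time, n_roi = roi_timeseries.shape
    entropies = {}
    for i, j in combinations(range(n_roi), 2):
        dyn = [np.corrcoef(roi_timeseries[t:t + window, i],
                           roi_timeseries[t:t + window, j])[0, 1]
               for t in range(n_time - window)]
        entropies[(i, j)] = shannon_entropy(np.asarray(dyn))
    return entropies

def partial_corr_adjusted_for_sex(x, y, sex):
    """Partial correlation of x and y controlling for sex (residual-on-residual Pearson r)."""
    sex = np.asarray(sex, dtype=float)
    rx = x - np.polyval(np.polyfit(sex, x, 1), sex)
    ry = y - np.polyval(np.polyfit(sex, y, 1), sex)
    return stats.pearsonr(rx, ry)

# Toy data: one child's 200-timepoint, 32-ROI scan, then a cohort-level association test
rng = np.random.default_rng(0)
ent = connection_entropies(rng.standard_normal((200, 32)))
entropy_across_children = rng.standard_normal(50)   # one connection's entropy, 50 children
behaviour_scores = rng.standard_normal(50)          # e.g. a behavior checklist domain score
sex = rng.integers(0, 2, size=50)
r, p = partial_corr_adjusted_for_sex(entropy_across_children, behaviour_scores, sex)
print(f"partial r = {r:.3f}, p = {p:.3f}")
```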
Procedia PDF Downloads 202
125 Molecular Migration in Polyvinyl Acetate Matrix: Impact of Compatibility, Number of Migrants and Stress on Surface and Internal Microstructure
Authors: O. Squillace, R. L. Thompson
Abstract:
Migration of small molecules to, and across the surface of polymer matrices is a little-studied problem with important industrial applications. Tackifiers in adhesives, flavors in foods and binding agents in paints all present situations where the function of a product depends on the ability of small molecules to migrate through a polymer matrix to achieve the desired properties such as softness, dispersion of fillers, and to deliver an effect that is felt (or tasted) on a surface. It’s been shown that the chemical and molecular structure, surface free energies, phase behavior, close environment and compatibility of the system, influence the migrants’ motion. When differences in behavior, such as occurrence of segregation to the surface or not, are observed it is then of crucial importance to identify and get a better understanding of the driving forces involved in the process of molecular migration. In this aim, experience is meant to be allied with theory in order to deliver a validated theoretical and computational toolkit to describe and predict these phenomena. The systems that have been chosen for this study aim to address the effect of polarity mismatch between the migrants and the polymer matrix and that of a second migrant over the first one. As a non-polar resin polymer, polyvinyl acetate is used as the material to which more or less polar migrants (sorbitol, carvone, octanoic acid (OA), triacetin) are to be added. Through contact angle measurement a surface excess is seen for sorbitol (polar) mixed with PVAc as the surface energy is lowered compare to the one of pure PVAc. This effect is increased upon the addition of carvon or triacetin (non-polars). Surface micro-structures are also evidenced by atomic force microscopy (AFM). Ion beam analysis (Nuclear Reaction Analysis), supplemented by neutron reflectometry can accurately characterize the self-organization of surfactants, oligomers, aromatic molecules in polymer films in order to relate the macroscopic behavior to the length scales that are amenable to simulation. The nuclear reaction analysis (NRA) data for deuterated OA 20% shows the evidence of a surface excess which is enhanced after annealing. The addition of 10% triacetin, as a second migrant, results in the formation of an underlying layer enriched in triacetin below the surface excess of OA. The results show that molecules in polarity mismatch with the matrix tend to segregate to the surface, and this is favored by the addition of a second migrant of the same polarity than the matrix. As studies have been restricted to materials that are model supported films under static conditions in a first step, it is also wished to address the more challenging conditions of materials under controlled stress or strain. To achieve this, a simple rig and PDMS cell have been designed to stretch the material to a defined strain and to probe these mechanical effects by ion beam analysis and atomic force microscopy. This will make a significant step towards exploring the influence of extensional strain on surface segregation, flavor release in cross-linked rubbers.Keywords: polymers, surface segregation, thin films, molecular migration
Procedia PDF Downloads 132
124 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines
Authors: Kamyar Tolouei, Ehsan Moosavi
Abstract:
In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally restricts the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue; even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets under grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to capture grade uncertainty in the strategic mine schedule and to produce a more profitable, risk-based production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method with the Harris Hawks optimization (HHO) metaheuristic to solve the LTPSOP under grade uncertainty. In this study, HHO is employed to update the Lagrange multipliers. In addition, a machine learning method called Random Forest is applied to estimate the gold grade in a mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results indicate that the proposed approach performs considerably better than traditional methods. The outcomes were also compared with the ALR-genetic algorithm and the ALR-subgradient method. To demonstrate the applicability of the model, a case study on an open-pit gold mining operation is implemented. The framework displays the capability to minimize risk and to improve the expected net present value and financial profitability for the LTPSOP. The framework can also control geological risk more effectively than the traditional procedure by considering grade uncertainty within the hybrid model framework.
Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization
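As a rough illustration of the Random Forest grade-modelling and Monte Carlo steps mentioned above, the Python sketch below fits a forest to hypothetical drillhole samples, predicts block grades, and perturbs the predictions to obtain 20 equally probable realizations. The feature set, hyperparameters, the per-tree spread used as an uncertainty proxy, and the toy data are assumptions for demonstration and are not taken from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical drillhole samples: (x, y, z) coordinates and assayed gold grade (g/t)
coords = rng.uniform(0, 1000, size=(500, 3))
grade = (2.0 + 0.002 * coords[:, 0] - 0.001 * coords[:, 2]
         + rng.normal(0, 0.3, size=500)).clip(min=0)

# Random Forest grade estimator (hyperparameters are illustrative only)
rf = RandomForestRegressor(n_estimators=300, min_samples_leaf=5, random_state=0)
rf.fit(coords, grade)

# Estimate grades on a coarse block-model grid
blocks = np.array([[x, y, z] for x in range(0, 1000, 100)
                   for y in range(0, 1000, 100)
                   for z in range(0, 300, 50)], dtype=float)
estimate = rf.predict(blocks)

# 20 equally probable realizations: per-tree predictions give a simple spread to perturb
per_tree = np.stack([tree.predict(blocks) for tree in rf.estimators_])
realizations = [rng.normal(estimate, per_tree.std(axis=0)) for _ in range(20)]

# Each realization would be fed to the stochastic scheduler; here we just report the spread
mean_grades = [r.clip(min=0).mean() for r in realizations]
print(f"mean block grade over 20 realizations: "
      f"{min(mean_grades):.2f}-{max(mean_grades):.2f} g/t")
```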
Procedia PDF Downloads 105
123 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience
Authors: Amanda Kavner, Richard Lamb
Abstract:
Faster growth in science and technology of other nations may make staying globally competitive more difficult without shifting focus on how science is taught in US classes. An integral part of learning science involves visual and spatial thinking since complex, and real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship is known between the foundational visual literacy and the domain-specific science literacy, science literacy as a function of science learning is still not well understood. Moreover, the need for a more reliable measure is necessary to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, first visualization skills necessary to process these science representations needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. This data was previously collected by Project LENS standing for Leveraging Expertise in Neurotechnologies, a Science of Learning Collaborative Network (SL-CN) of scholars of STEM Education from three US universities (NSF award 1540888), utilizing mental rotation tasks, to assess student visual literacy. Hemodynamic response data from fNIRsoft was exported as an Excel file, with 80 of both 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by the participant (ID), containing information for both types of tasks. After changing strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN predictions are accurate. The ANN determined the biggest predictors to a successful mental rotation are the individual problem number, the response time and fNIR optode #16, located along the right prefrontal cortex important in processing visuospatial working memory and episodic memory retrieval; both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements with an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, therefore improving science literacy.Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience
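A minimal scikit-learn sketch of the classifier stage described above is given below. It is a hedged approximation rather than the authors' RapidMiner pipeline: the gradient boosted trees model uses the reported 140 trees and maximum depth of 7, but the feature table (16 optode signals plus response time and problem number), the synthetic labels, and the train/test split are invented placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Hypothetical feature table: 16 fNIR optode signals + response time + problem number
n_trials = 2000
optodes = rng.standard_normal((n_trials, 16))
response_time = rng.gamma(shape=2.0, scale=1.5, size=(n_trials, 1))
problem_number = rng.integers(1, 161, size=(n_trials, 1)).astype(float)
X = np.hstack([optodes, response_time, problem_number])

# Synthetic label: success on the mental rotation task, loosely tied to optode #16 and speed
logit = 0.8 * optodes[:, 15] - 0.4 * response_time[:, 0] + 0.5
y = (rng.random(n_trials) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# 140 boosted trees, maximum depth 7, as reported in the abstract
model = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
model.fit(X_tr, y_tr)

print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
# Feature importances indicate which inputs drive the prediction (cf. optode #16, response time)
feature_names = [f"optode_{i + 1}" for i in range(16)] + ["response_time", "problem_number"]
for name, imp in zip(feature_names, model.feature_importances_):
    if imp > 0.05:
        print(f"{name}: {imp:.2f}")
```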
Procedia PDF Downloads 119
122 Tales of Two Cities: 'Motor City' Detroit and 'King Cotton' Manchester: Transatlantic Transmissions and Transformations, Flows of Communications, Commercial and Cultural Connections
Authors: Dominic Sagar
Abstract:
Manchester ‘King Cotton’, the first truly industrial city of the nineteenth century, passing on the baton to Detroit ‘Motor City’, is the first truly modern city. We are exploring the tales of the two cities, their rise and fall and subsequent post-industrial decline, their transitions and transformations, whilst alongside paralleling their corresponding, commercial, cultural, industrial and even agricultural, artistic and musical transactions and connections. The paper will briefly contextualize how technologies of the industrial age and modern age have been instrumental in the development of these cities and other similar cities including New York. However, the main focus of the study will be the present and more importantly the future, how globalisation and the advancements of digital technologies and industries have shaped the cities developments from AlanTuring and the making of the first programmable computer to the effect of digitalisation and digital initiatives. Manchester now has a thriving creative digital infrastructure of Digilabs, FabLabs, MadLabs and hubs, the study will reference the Smart Project and the Manchester Digital Development Association whilst paralleling similar digital and creative industrial initiatives now starting to happen in Detroit. The paper will explore other topics including the need to allow for zones of experimentation, areas to play, think and create in order develop and instigate new initiatives and ideas of production, carrying on the tradition of influential inventions throughout the history of these key cities. Other topics will be briefly touched on, such as urban farming, citing the Biospheric foundation in Manchester and other similar projects in Detroit. However, the main thread will focus on the music industries and how they are contributing to the regeneration of cities. Musically and artistically, Manchester and Detroit have been closely connected by the flow and transmission of information and transfer of ideas via ‘cars and trains and boats and planes’ through to the new ‘super highway’. From Detroit to Manchester often via New York and Liverpool and back again, these musical and artistic connections and flows have greatly affected and influenced both cities and the advancement of technology are still connecting the cities. In summary two hugely important industrial cities, subsequently both experienced massive decline in fortunes, having had their large industrial hearts ripped out, ravaged leaving dying industrial carcasses and car crashes of despair, dereliction, desolation and post-industrial wastelands vacated by a massive exodus of the cities’ inhabitants. To examine the affinity, similarity and differences between Manchester & Detroit, from their industrial importance to their post-industrial decline and their current transmutations, transformations, transient transgressions, cities in transition; contrasting how they have dealt with these problems and how they can learn from each other. With a view to framing these topics with regard to how various communities have shaped these cities and the creative industries and design [the new cotton/car manufacturing industries] are reinventing post-industrial cities, to speculate on future development of these themes in relation to Globalisation, digitalisation and how cities can function to develop solutions to communal living in cities of the future.Keywords: cultural capital, digital developments, musical initiatives, zones of experimentation
Procedia PDF Downloads 194
121 Structural Molecular Dynamics Modelling of FH2 Domain of Formin DAAM
Authors: Rauan Sakenov, Peter Bukovics, Peter Gaszler, Veronika Tokacs-Kollar, Beata Bugyi
Abstract:
FH2 (formin homology-2) domains of several proteins collectively known as formins, including DAAM, DAAM1 and mDia1, promote G-actin nucleation and elongation. FH2 domains of these formins exist as oligomers. Chain dimerization through ring structure formation serves as the structural basis for the actin polymerization function of the FH2 domain. Proper single-chain configuration and specific interactions between its various regions are necessary for individual chains to form a dimer functional in G-actin nucleation and elongation. FH1 and WH2 domain-containing formins were shown to behave as intrinsically disordered proteins. Thus, the aim of this research was to study the structural dynamics of the FH2 domain of DAAM. To investigate the structural features of the FH2 domain of DAAM, molecular dynamics simulations of chain A of the FH2 domain, solvated in a water box with 50 mM NaCl, were conducted at temperatures from 293.15 to 353.15 K with VMD 1.9.2, NAMD 2.14 and Amber Tools 21, using the 2z6e and 1v9d PDB structures of DAAM; the starting model was obtained from the I-TASSER webserver. The calcium- and ATP-bound G-actin structure (PDB 3hbt) was used as a reference protein with well-described structural dynamics of denaturation. Topology and parameter information from the CHARMM 2012 additive all-atom force fields for proteins, carbohydrate derivatives, water and ions was used in NAMD 2.14, and the ff19SB protein force field was used in Amber Tools 21. The systems were energy-minimized for the first 1000 steps, then equilibrated and run in production in the NPT ensemble for 1 ns using stochastic Langevin dynamics and the particle mesh Ewald method. Our root-mean-square deviation (RMSD) analysis of chain A of the FH2 domain of DAAM revealed similarly insignificant changes in the total molecular average RMSD values at temperatures from 293.15 to 353.15 K. In contrast, the total molecular average RMSD values of G-actin showed a considerable increase at 328 K, which corresponds to the denaturation of the G-actin molecule at this temperature and its transition from a native, ordered state to a denatured, disordered state, which is well described in the literature. RMSD values of the lasso and tail regions of chain A of the FH2 domain of DAAM were higher than the total molecular average RMSD at temperatures from 293.15 to 353.15 K. These regions are functional in intra- and interchain interactions and contain the highly conserved tryptophan residues of the lasso region, the highly conserved GNYMN sequence of the post region, and the amino acids of the shell of the hydrophobic pocket of the salt bridge between Arg171 and Asp321, which are important for the structural stability and ordered state of the FH2 domain of DAAM and for its function in FH2 domain dimerization. In conclusion, the higher-than-average RMSD values of the lasso and post regions of chain A may explain the disordered state of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K. Finally, the absence of a marked transition, in terms of significant changes in average molecular RMSD values between native and denatured states of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K, makes it possible to attribute these formins to the group of intrinsically disordered proteins rather than to the group of intrinsically ordered proteins such as G-actin.
Keywords: FH2 domain, DAAM, formins, molecular modelling, computational biophysics
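The RMSD analysis above can be reproduced with a short script once trajectory coordinates are extracted. The Python sketch below computes RMSD between a reference structure and each frame after optimal superposition (the Kabsch algorithm); the coordinate arrays are random placeholders, since reading NAMD output and selecting the lasso/post atoms is outside the scope of this illustration.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal rotation/translation."""
    P = P - P.mean(axis=0)                    # centre both structures
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                               # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # correct for a possible reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation of P onto Q
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Placeholder trajectory: 100 frames of 350 CA atoms drifting away from the reference
rng = np.random.default_rng(0)
reference = rng.uniform(0, 60, size=(350, 3))
frames = [reference + 0.02 * f * rng.standard_normal(reference.shape) for f in range(100)]

rmsd_per_frame = [kabsch_rmsd(frame, reference) for frame in frames]
print(f"mean RMSD: {np.mean(rmsd_per_frame):.2f} Å, max: {np.max(rmsd_per_frame):.2f} Å")
```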
Procedia PDF Downloads 136
120 Benefits of High Power Impulse Magnetron Sputtering (HiPIMS) Method for Preparation of Transparent Indium Gallium Zinc Oxide (IGZO) Thin Films
Authors: Pavel Baroch, Jiri Rezek, Michal Prochazka, Tomas Kozak, Jiri Houska
Abstract:
Transparent semiconducting amorphous IGZO films have attracted great attention due to their excellent electrical properties and possible utilization in thin film transistors or in photovoltaic applications, as they show 20-50 times higher mobility than amorphous silicon. It is also known that the properties of IGZO films are highly sensitive to process parameters, especially to oxygen partial pressure. In this study, we have focused on comparing the properties of transparent semiconducting amorphous indium gallium zinc oxide (IGZO) thin films prepared by conventional sputtering methods with those prepared by the high power impulse magnetron sputtering (HiPIMS) method. Furthermore, we tried to optimize the electrical and optical properties of the IGZO thin films and to investigate the possibility of applying these coatings to thermally sensitive flexible substrates. We employed DC, pulsed DC, mid-frequency sine wave and HiPIMS power supplies for magnetron deposition. The magnetrons were equipped with sintered ceramic InGaZnO targets. As oxygen vacancies are considered to be the main source of carriers in IGZO films, it is expected that the number of oxygen vacancies decreases with increasing oxygen partial pressure, which results in increased film resistivity. Therefore, in all experiments we focused on the effect of oxygen partial pressure, discharge power and pulsed power mode on the electrical, optical and mechanical properties of IGZO thin films and also on the thermal load deposited to the substrate. As expected, we observed a very fast transition between low- and high-resistivity films depending on oxygen partial pressure when conventional sputtering methods/power supplies were used for deposition. Therefore, we established and utilized a HiPIMS sputtering system to enlarge the operating window and allow better control of IGZO thin film properties. It is shown that with this system we are able to effectively eliminate the steep transition between low- and high-resistivity films exhibited by the DC mode of sputtering, and the electrical resistivity can be effectively controlled over the wide range of 10⁻² to 10⁵ Ω·cm. The highest charge carrier mobility (up to 50 cm²/V·s) was obtained at very low oxygen partial pressures. Utilization of HiPIMS also led to a significant decrease in the thermal load deposited to the substrate, which is beneficial for deposition on thermally sensitive, flexible polymer substrates. The deposition rate as a function of discharge power and oxygen partial pressure was also systematically investigated, and the results of optical, electrical and structural analyses will be discussed in detail. The most important result we obtained demonstrates almost linear control of IGZO thin film resistivity with increasing oxygen partial pressure in the HiPIMS mode of sputtering; highly transparent films with low resistivity were prepared already at low pO₂. It was also found that utilization of the HiPIMS technique resulted in a significant improvement of surface smoothness in the reactive mode of sputtering (with increasing oxygen partial pressure).
Keywords: charge carrier mobility, HiPIMS, IGZO, resistivity
Procedia PDF Downloads 297
119 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan
Authors: Hsien-Te Lin
Abstract:
The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), which is a LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating carbon footprint throughout all stages in the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles in the implementation of carbon reduction policies. One of the greatest obstacle is the misapplication of the carbon footprint inventory standards of PAS2050 or ISO14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period based on purchasing lists and construction logs. This verification method is very cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to provide an engineer-oriented BCFE with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials is meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is not designed based on raw materials but rather on building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database has been composited from existing carbon footprint databases for raw materials and architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much more simple and efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in building carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification and the assessment has become mandatory in some cities.Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy
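Because the dynamic EUI method is described as simple multiplications and additions, it can be sketched in a few lines. The Python sketch below is a schematic reconstruction, not the official BCFES-LCBA formula: the baseline EUIs, efficiency factors, service life and emission factor are assumed values used only to show the shape of the calculation.

```python
# Schematic "dynamic EUI" estimate; baseline EUIs, efficiency factors and the
# emission factor are placeholders, not values from the BCFES-LCBA standard.
BASELINE_EUI = {            # kWh per m2 per year, assumed baseline intensities
    "air_conditioning": 60.0,
    "lighting": 25.0,
    "equipment": 30.0,
}

def annual_energy(floor_area_m2, efficiency_levels):
    """Sum of end-use energies: baseline EUI * area * designed efficiency factor."""
    return sum(BASELINE_EUI[use] * floor_area_m2 * efficiency_levels.get(use, 1.0)
               for use in BASELINE_EUI)

def operational_carbon(floor_area_m2, efficiency_levels,
                       service_life_years=60, kg_co2e_per_kwh=0.5):
    """Life-cycle operational carbon footprint in tonnes CO2e (assumed factors)."""
    kwh_per_year = annual_energy(floor_area_m2, efficiency_levels)
    return kwh_per_year * service_life_years * kg_co2e_per_kwh / 1000.0

# Example: a 10,000 m2 office where envelope + AC design cuts cooling energy by 30%
# and efficient lighting cuts lighting energy by 20%
levels = {"air_conditioning": 0.7, "lighting": 0.8, "equipment": 1.0}
print(f"annual energy: {annual_energy(10_000, levels):,.0f} kWh/yr")
print(f"operational carbon: {operational_carbon(10_000, levels):,.0f} t CO2e")
```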
Procedia PDF Downloads 139
118 Creation of a Test Machine for the Scientific Investigation of Chain Shot
Authors: Mark McGuire, Eric Shannon, John Parmigiani
Abstract:
Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass, and it is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The machine can test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and it accurately measures the corresponding key technical parameters. The test machine was constructed inside a standard shipping container, which provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated per ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and, consequently, improved products and greater harvester operator safety.
Keywords: chain shot, safety, testing, timber harvesters
Procedia PDF Downloads 152
117 PARP1 Links Transcription of a Subset of RBL2-Dependent Genes with Cell Cycle Progression
Authors: Ewelina Wisnik, Zsolt Regdon, Kinga Chmielewska, Laszlo Virag, Agnieszka Robaszkiewicz
Abstract:
Apart from protecting the genome, PARP1 has been documented to regulate many intracellular processes, inter alia gene transcription, by physically interacting with chromatin-bound proteins and by their ADP-ribosylation. Our recent findings indicate that expression of PARP1 decreases during the differentiation of human CD34+ hematopoietic stem cells to monocytes as a consequence of differentiation-associated cell growth arrest and formation of the E2F4-RBL2-HDAC1-SWI/SNF repressive complex at the promoter of this gene. Since RBL2 complexes repress genes in an E2F-dependent manner and are widespread in the genome of G0-arrested cells, we asked (a) whether RBL2 directly contributes to defining monocyte phenotype and function by targeting gene promoters and (b) whether RBL2 controls gene transcription indirectly by repressing PARP1. For identification of genes controlled by RBL2 and/or PARP1, we used primer libraries for surface receptors and TLR signaling mediators; genes were silenced by siRNA or shRNA; occupation of gene promoters by selected proteins was analysed by ChIP-qPCR; statistical analysis was carried out in GraphPad Prism 5 and STATISTICA; and ChIP-Seq data were analysed in Galaxy 2.5.0.0. On the list of 28 genes regulated by RBL2, we identified only four solely repressed by the RBL2-E2F4-HDAC1-BRM complex. Surprisingly, 24 of the 28 genes controlled by RBL2 were co-regulated by PARP1 in six different manners. In one mode of RBL2/PARP1 co-operation, represented by MAP2K6 and MAPK3, PARP1 was found to associate with gene promoters upon RBL2 silencing, which was previously shown to restore PARP1 expression in monocytes. The effect of PARP1 on gene transcription was observed only in the presence of active EP300, which acetylated gene promoters and activated transcription. Further analysis revealed that PARP1 binding to the MAP2K6 and MAPK3 promoters enabled recruitment of EP300 in monocytes, while in proliferating cancer cell lines, which actively transcribe PARP1, this protein maintained EP300 at the promoters of MAP2K6 and MAPK3. Genome-wide analysis revealed a similar distribution of PARP1 and EP300 around transcription start sites and the co-occupancy of some gene promoters by PARP1 and EP300 in cancer cells. Here, we describe a new RBL2/PARP1/EP300 axis which controls gene transcription regardless of the cell type. In this model, cell cycle-dependent transcription of PARP1 regulates the expression of some genes repressed by RBL2 upon cell cycle limitation. Thus, RBL2 may indirectly regulate the transcription of some genes by controlling the expression of EP300-recruiting PARP1. Acknowledgement: This work was financed by Polish National Science Centre grants nr DEC-2013/11/D/NZ2/00033 and DEC-2015/19/N/NZ2/01735. L.V. is funded by the National Research, Development and Innovation Office grants GINOP-2.3.2-15-2016-00020 TUMORDNS, GINOP-2.3.2-15-2016-00048-STAYALIVE and OTKA K112336. A.R. is supported by Polish Ministry of Science and Higher Education grant 776/STYP/11/2016.
Keywords: retinoblastoma transcriptional co-repressor like 2 (RBL2), poly(ADP-ribose) polymerase 1 (PARP1), E1A binding protein p300 (EP300), monocytes
Procedia PDF Downloads 209
116 Michel Foucault’s Docile Bodies and The Matrix Trilogy: A Close Reading Applied to the Human Pods and Growing Fields in the Films
Authors: Julian Iliev
Abstract:
The recent release of The Matrix Resurrections persuaded many film scholars that The Matrix trilogy had lost its appeal and its concepts were largely outdated. This study examines the human pods and growing fields in the trilogy. Their functionality is compared to Michel Foucault’s concept of docile bodies: linking fictional and contemporary worlds. This paradigm is scrutinized through surveillance literature. The analogy brings to light common elements of hidden surveillance practices in technologies. The comparison illustrates the effects of body manipulation portrayed in the movies and their relevance with contemporary surveillance practices. Many scholars have utilized a close reading methodology in film studies (J.Bizzocchi, J.Tanenbaum, P.Larsen, S. Herbrechter, and Deacon et al.). The use of a particular lens through which media text is examined is an indispensable factor that needs to be incorporated into the methodology. The study spotlights both scenes from the trilogy depicting the human pods and growing fields. The functionality of the pods and the fields compare directly with Foucault’s concept of docile bodies. By utilizing Foucault’s study as a lens, the research will unearth hidden components and insights into the films. Foucault recognizes three disciplines that produce docile bodies: 1) manipulation and the interchangeability of individual bodies, 2) elimination of unnecessary movements and management of time, and 3) command system guaranteeing constant supervision and continuity protection. These disciplines can be found in the pods and growing fields. Each body occupies a single pod aiding easier manipulation and fast interchangeability. The movement of the bodies in the pods is reduced to the absolute minimum. Thus, the body is transformed into the ultimate object of control – minimum movement correlates to maximum energy generation. Supervision is exercised by wiring the body with numerous types of cables. This ultimate supervision of body activity reduces the body’s purpose to mere functioning. If a body does not function as an energy source, then it’s unplugged, ejected, and liquefied. The command system secures the constant supervision and continuity of the process. To Foucault, the disciplines are distinctly different from slavery because they stop short of a total takeover of the bodies. This is a clear difference from the slave system implemented in the films. Even though their system might lack sophistication, it makes up for it in the elevation of functionality. Further, surveillance literature illustrates the connection between the generation of body energy in The Matrix trilogy to the generation of individual data in contemporary society. This study found that the three disciplines producing docile bodies were present in the portrayal of the pods and fields in The Matrix trilogy. The above comparison combined with surveillance literature yields insights into analogous processes and contemporary surveillance practices. Thus, the constant generation of energy in The Matrix trilogy can be equated to the consistent data generation in contemporary society. This essay shows the relevance of the body manipulation concept in the Matrix films with contemporary surveillance practices.Keywords: docile bodies, film trilogies, matrix movies, michel foucault, privacy loss, surveillance
Procedia PDF Downloads 93
115 Characterizing the Spatially Distributed Differences in the Operational Performance of Solar Power Plants Considering Input Volatility: Evidence from China
Authors: Bai-Chen Xie, Xian-Peng Chen
Abstract:
China has become the world's largest energy producer and consumer, and its development of renewable energy is of great significance to global energy governance and the fight against climate change. The rapid growth of solar power in China could help achieve its ambitious carbon peak and carbon neutrality targets early. However, the non-technical costs of solar power in China are much higher than at international levels, meaning that inefficiencies are rooted in poor management and improper policy design and that efficiency distortions have become a serious challenge to the sustainable development of the renewable energy industry. Unlike fossil energy generation technologies, the output of solar power is closely related to the volatile solar resource, and the spatial unevenness of solar resource distribution leads to potential efficiency spatial distribution differences. It is necessary to develop an efficiency evaluation method that considers the volatility of solar resources and explores the mechanism of the influence of natural geography and social environment on the spatially varying characteristics of efficiency distribution to uncover the root causes of managing inefficiencies. The study sets solar resources as stochastic inputs, introduces a chance-constrained data envelopment analysis model combined with the directional distance function, and measures the solar resource utilization efficiency of 222 solar power plants in representative photovoltaic bases in northwestern China. By the meta-frontier analysis, we measured the characteristics of different power plant clusters and compared the differences among groups, discussed the mechanism of environmental factors influencing inefficiencies, and performed statistical tests through the system generalized method of moments. Rational localization of power plants is a systematic project that requires careful consideration of the full utilization of solar resources, low transmission costs, and power consumption guarantee. Suitable temperature, precipitation, and wind speed can improve the working performance of photovoltaic modules, reasonable terrain inclination can reduce land cost, and the proximity to cities strongly guarantees the consumption of electricity. The density of electricity demand and high-tech industries is more important than resource abundance because they trigger the clustering of power plants to result in a good demonstration and competitive effect. To ensure renewable energy consumption, increased support for rural grids and encouraging direct trading between generators and neighboring users will provide solutions. The study will provide proposals for improving the full life-cycle operational activities of solar power plants in China to reduce high non-technical costs and improve competitiveness against fossil energy sources.Keywords: solar power plants, environmental factors, data envelopment analysis, efficiency evaluation
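The efficiency measure described above is based on a directional distance function estimated by linear programming. The Python sketch below is a simplified, deterministic illustration of that calculation: it replaces the chance-constrained treatment of the stochastic solar input with expected values, assumes constant returns to scale, and uses invented plant data, so it only indicates the structure of the model used in the study.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, o, gx=None, gy=None):
    """
    Directional distance function DEA score for unit o.
    X: (m, n) inputs, Y: (s, n) outputs for n plants; larger beta = more inefficient.
    """
    m, n = X.shape
    gx = X[:, o] if gx is None else gx           # default direction: contract own inputs
    gy = Y[:, o] if gy is None else gy           # ...and expand own outputs
    c = np.zeros(1 + n)
    c[0] = -1.0                                  # maximize beta -> minimize -beta
    # input rows:  beta*gx_i + sum_j lam_j * x_ij <= x_io
    A_in = np.hstack([gx.reshape(-1, 1), X])
    b_in = X[:, o]
    # output rows: beta*gy_r - sum_j lam_j * y_rj <= -y_ro
    A_out = np.hstack([gy.reshape(-1, 1), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Hypothetical data: 8 plants, inputs = [capacity (MW), expected irradiation (kWh/m2)],
# output = [annual generation (GWh)]; numbers are illustrative only
X = np.array([[50, 60, 80, 45, 70, 55, 90, 65],
              [1800, 1750, 1900, 1600, 1850, 1700, 1950, 1780]], dtype=float)
Y = np.array([[95, 105, 160, 70, 130, 90, 185, 110]], dtype=float)

for o in range(X.shape[1]):
    print(f"plant {o}: beta = {directional_distance(X, Y, o):.3f}")  # 0 = on the frontier
```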
Procedia PDF Downloads 90
114 The 10,000-Fold Effect of Retrograde Neurotransmission: A New Concept for Cerebral Palsy Revival by the Use of Nitric Oxide Donors
Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta
Abstract:
Background: Nitric oxide donors (NODs), namely intrathecal sodium nitroprusside (ITSNP) and oral tadalafil 20 mg post-ITSNP, have been studied in this context in cerebral palsy patients for fast recovery. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, for physiological recovery. a) Retrograde neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthetase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of a presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC. Heme exhibits >10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and the process is completed in 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO from NOS in the postsynaptic region. NO travels backward across a chemical synapse to bind to the heme of a NO receptor in the axon terminal of a presynaptic neuron, generating an impulse, as under normal conditions. b) Vasospasm (acute cases): Perforators show vasospastic activity. NO vasodilates the perforators via the NO-cAMP pathway. c) Long-term potentiation (LTP) (chronic cases): The NO-cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory/learning. Aims/Study Design: The principles of “generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)” and “vasodilation of arteriolar perforators” are the basis of the authors’ hypothesis to treat cerebral palsy cases. Case-control prospective study. Materials and Methods: The experimental population included 82 cerebral palsy patients (10 patients were given control treatments without NOD or with 5% dextrose superfusion, and 72 patients comprised the NOD group). The mean time of superfusion was 5 months post-cerebral palsy. Pre- and post-NOD status was monitored by the Gross Motor Function Classification System for Cerebral Palsy (GMFCS), MRI, and TCD studies. Results: After 7 days in the NOD group, the mean change in the GMFCS score was an increase of 1.2 points; after 3 months, there was a mean increase of 3.4 points, compared to the control-group increase of 0.1 points at 3 months. MRI and TCD documented the improvements. Conclusions: NOD treatment (ITSNP boosts the recovery and oral tadalafil maintains it at the desired level) acts swiftly in the treatment of CP, acting within 7 days even 5 months post-cerebral palsy, through the three mechanisms described. Keywords: cerebral palsy, intrathecal sodium nitroprusside, oral tadalafil, perforators, vasodilations, retrograde transmission, the 10,000-fold effect, long-term potentiation
Procedia PDF Downloads 362
113 A Novel Upregulated circ_0032746 on Sponging with MIR4270 Promotes the Proliferation and Migration of Esophageal Squamous Cell Carcinoma
Authors: Sachin Mulmi Shrestha, Xin Fang, Hui Ye, Lihua Ren, Qinghua Ji, Ruihua Shi
Abstract:
Background: Esophageal squamous cell carcinoma (ESCC) is a tumor arising from esophageal epithelial cells and is one of the major disease subtypes in Asian countries, including China. Esophageal cancer ranks 7th in incidence based on the 2020 GLOBOCAN data. The pathogenesis of this cancer is still not well understood, as much of the molecular and genetic basis of esophageal carcinogenesis has yet to be clearly elucidated. Circular RNAs are RNA molecules formed by back-splicing of covalently joined 3′- and 5′-ends rather than canonical splicing, and recent data suggest circular RNAs could sponge miRNAs and are enriched with functional miRNA binding sites. Hence, we studied the mechanism of a circular RNA, its biological function, and its relationship with a microRNA in the carcinogenesis of ESCC. Methods: 4 pairs of normal and esophageal cancer tissues were collected at Zhongda Hospital, affiliated with Southeast University, and high-throughput RNA sequencing was done. The results revealed that circ_0032746 was upregulated, and thus we selected circ_0032746 for further study. The backsplice junction of the circRNA was validated by Sanger sequencing, and stability was determined by RNase R assay. The binding sites of the circRNA and microRNA were predicted with the CircInteractome, miRanda, and RNAhybrid databases. Furthermore, the circRNA was silenced by siRNA and then by lentivirus. The regulatory axis of circ_0032746/miR4270 was validated by shRNA, mimic, and inhibitor transfection. Then, in vitro experiments were performed to assess the role of circ_0032746 in proliferation (CCK-8 assay and colony formation assay), migration and invasion (Transwell assay), and apoptosis of ESCC. Results: The upregulation of circ_0032746 was validated in 9 pairs of tissues and 5 types of cell lines by qPCR, which showed high expression that was statistically significant (P<0.005). Upregulated circ_0032746 was silenced by shRNA, which showed significant knockdown of expression in the KYSE-30 and TE-1 cell lines compared to control. A nuclear and cytoplasmic mRNA fractionation experiment showed the cytoplasmic location of circ_0032746. The sponging of miR4270 was validated by co-transfection of sh-circ_0032746 and mimic or inhibitor. Transfection with the mimic decreased the expression of circ_0032746, whereas the inhibitor produced the opposite effect. In vitro experiments showed that silencing of circ_0032746 inhibited proliferation, migration, and invasion compared to the negative control group. Apoptosis was higher in the knockdown group than in the control group. Furthermore, 11 common microRNA target mRNAs were predicted by the TargetScan, miRTarBase, and miRanda databases, which may further play a role in the pathogenesis. Conclusion: Our results showed that the novel circ_0032746 is upregulated in ESCC and plays a role in its oncogenicity. Silencing of circ_0032746 inhibits the proliferation and migration of ESCC while increasing the apoptosis of cancer cells. Hence, circ_0032746 acts as an oncogene in ESCC by sponging miR4270 and could be a potential biomarker in the diagnosis of ESCC in the future. Keywords: circRNA, esophageal squamous cell carcinoma, microRNA, upregulated
Procedia PDF Downloads 113
112 Applications of Polyvagal Theory for Trauma in Clinical Practice: Auricular Acupuncture and Herbology
Authors: Aurora Sheehy, Caitlin Prince
Abstract:
Within current orthodox medical protocols, trauma and mental health issues are deemed to reside within the realm of cognitive or psychological therapists and are marginalised there, in part due to the limited drug options available, which mostly manipulate neurotransmitters or sedate patients to reduce symptoms. By contrast, this research presents examples from clinical practice of how trauma can be assessed and treated physiologically. Adverse Childhood Experiences (ACEs) are a tally of different types of abuse and neglect. The ACE score has been used as a measurable and reliable predictor of the likelihood of developing autoimmune disease. It is a direct way to reliably demonstrate the health impact of traumatic life experiences. A second assessment tool is Allostatic Load, which refers to the cumulative effects that chronic stress has on mental and physical health. It records the decline of an individual’s physiological capacity to cope with their experience. It uses a specific grouping of serum tests and physical measures, including an assessment of the neuroendocrine, cardiovascular, immune and metabolic systems. Allostatic Load demonstrates the health impact that trauma has throughout the body. It forms part of an initial intake assessment in clinical practice and could also be used in research to evaluate treatment. Examining medicinal plants for their physiological, neurological and somatic effects through the lens of Polyvagal theory offers new opportunities for trauma treatments. In situations where Polyvagal theory recommends activities and exercises to enable parasympathetic activation, many herbs that affect Effector Memory T (TEM) cells also enact these responses. Traditional or Indigenous European herbs show the potential to support polyvagal tone through multiple mechanisms. As the ventral vagal nerve reaches almost every major organ, plants that act on these tissues can be understood via their polyvagal actions: monoterpenes can improve respiratory vagal tone; cyanogenic glycosides can reset polyvagal tone; volatile oils rich in phenyl methyl esters improve both sympathetic and parasympathetic tone; and bitters activate gut function and can strongly promote parasympathetic regulation. Auricular Acupuncture uses a system of somatotopic mapping of the auricular surface, overlaid with an image of an inverted foetus in which each body organ and system is featured. Given that the concha of the auricle is the only place on the body where Vagus Nerve neurons reach the surface of the skin, several investigators have evaluated non-invasive, transcutaneous electrical nerve stimulation (TENS) at auricular points. Drawn from an interdisciplinary evidence base and developed through clinical practice, these assessment and treatment tools are examples of practitioners in the field innovating out of necessity for the best outcomes for patients. This paper draws on case studies to direct future research. Keywords: polyvagal, auricular acupuncture, trauma, herbs
Procedia PDF Downloads 91
111 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing
Authors: Ahmed Elaksher, Islam Omar
Abstract:
Accurate geometric sensor models are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom NACs that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy packages with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to a reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter (t) is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns differently in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process demands little cost and effort. Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition
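To make the projection step concrete, the sketch below evaluates a time-interpolated exterior orientation and the collinearity condition for one ground point. The rotation convention, the sign of the focal-length term, and the polynomial parameterization are assumptions chosen for illustration rather than the exact formulation used in the study.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation R = R_kappa @ R_phi @ R_omega (one common convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def eo_at_epoch(coeffs, t):
    """Exterior orientation (Xs, Ys, Zs, omega, phi, kappa) from six per-parameter
    polynomial coefficient arrays evaluated at epoch t."""
    return np.array([np.polyval(c, t) for c in coeffs])

def collinearity(ground_pt, coeffs, t, f):
    """Project a ground point into the image line acquired at epoch t (focal length f).
    Returns (x, y) in the image reference system; x is ~0 for points on the imaged line."""
    Xs, Ys, Zs, om, ph, ka = eo_at_epoch(coeffs, t)
    d = rotation_matrix(om, ph, ka).T @ (np.asarray(ground_pt, float) - np.array([Xs, Ys, Zs]))
    return -f * d[0] / d[2], -f * d[1] / d[2]

# Hypothetical linear trajectory coefficients (highest power first per np.polyval)
coeffs = [np.array([10.0, 0.0]),    # Xs(t)
          np.array([0.0, 500.0]),   # Ys(t)
          np.array([0.0, 700000.0]),# Zs(t)
          np.array([0.0, 0.001]),   # omega(t)
          np.array([0.0, 0.0]),     # phi(t)
          np.array([0.0, 0.0])]     # kappa(t)
print(collinearity([25.0, 480.0, 0.0], coeffs, t=2.0, f=0.7))
```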
Procedia PDF Downloads 63
110 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many traditional facial expression and emotion recognition methods based on LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform that addresses this problem by automatically extracting features for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In fact, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work further by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which takes the model into over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static input tensor shape in the SoftMax layer and by specifying a desired soft margin. In effect, the margin acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain. We penalize those predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors; that is, we assign more weight to classes that are easily confused with one another (namely, “hard labels to learn”). By doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for a non-convex problem. Our optimizer works with an alternative gradient-updating procedure using an exponentially weighted moving average for faster convergence, and it exploits a weight decay method to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of our proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013; a 16% improvement compared to the first rank after 10 years, reaching 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset. The network is shown to provide performance on par with or better than other networks, which require much larger training datasets. Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
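As context for the loss described above, the snippet below is a minimal NumPy sketch of a soft-margin softmax cross-entropy in which an additive margin is subtracted from the target logit. The margin value, and the idea of making it dynamic (for example, growing it with training progress or per-class difficulty), are illustrative assumptions; the paper's exact Dynamic Soft-Margin SoftMax formulation is not reproduced here.

```python
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.35):
    """Cross-entropy with an additive margin subtracted from the target logit,
    forcing the model to keep improving even after the target class already 'wins'."""
    z = logits.astype(float).copy()
    z[np.arange(len(labels)), labels] -= margin     # penalise the correct-class logit
    z -= z.max(axis=1, keepdims=True)               # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# A dynamic variant could grow `margin` over epochs or set it per class ("hard labels").
logits = np.array([[4.0, 1.0, 0.5], [0.2, 2.5, 0.1]])
labels = np.array([0, 1])
print(soft_margin_softmax_loss(logits, labels))
```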
Procedia PDF Downloads 74
109 Financial Policies in the Process of Global Crisis: Case Study Kosovo
Authors: Shpetim Rezniqi
Abstract:
Financial policies in the process of global crisis: the current crisis has swept the world, affecting above all the most developed countries, those with the largest share of world gross product and the highest standards of living. Even those who are not experts can describe the consequences of the crisis from the reality that can be seen, but how far this crisis will go is impossible to predict. Even the greatest experts offer only conjecture, with large divergence among them, but they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free market laws, with the belief that the market is the regulator of all economic problems; like river water, the market will flow to find the best course and the necessary solution. Hence fewer state barriers to the market, less state intervention, and a market that regulates itself. The free market economy became the model of global economic development and progress; it transcended national barriers and became the law of development of the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. International organizations like the World Bank and the economically powerful states based their development and cooperation principles on the free market economy and the elimination of state intervention. The less state intervention, the more freedom of action for the market: this was the leading international principle. We live in an era of financial tragedy. Financial markets, and banking in particular, are in a dire state: US stock markets fell about 40%, which makes this one of the five darkest moments since 1920. Ranking ahead of it are only the Wall Street crash of 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur War, when the price of oil quadrupled, and the famous collapse of 1937/38, when Europe was entering World War II. In 2000, even though it seemed the end of the world was around the corner, the world economy survived almost intact; of course, there were small recessions in the United States, Europe, and Japan. The situation was much more difficult in the crises of the 1930s and 1970s, yet the world pulled through. The recent financial crisis, however, shows every sign of being much sharper and having greater consequences. The decline in stock prices is more a byproduct of what is really happening: financial markets began a dance of death with the credit crisis, which came as a result of the large increase in real estate prices and household debt. These last two phenomena can be matched very well with the excesses of the 1920s, a period during which people spent as if there were no tomorrow. The word recession is on everyone's lips, and it no longer comes as sudden or abrupt. But the more the financial markets melt down, the greater the risk of a problematic economy for years to come. Thus, for example, the banking crisis in Japan proved to be much more severe than initially expected, partly because the assets on which most loans were based, especially land, kept falling in value; the price of land in Japan has continued to fall for about 15 years (Adri Nurellari, published in the newspaper "Classifieds"). At this moment, it is still difficult to assess to what extent the crisis has affected the economy and what the consequences of the crisis will be.
What we know is that many banks will need to reduce the granting of credit for some time, but lending is banks' primary function, and this means huge losses. Keywords: globalisation, finance, crisis, recommendation, bank, credits
Procedia PDF Downloads 389
108 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling. However, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models. Our model uses queuing theory parameters to relate the transition between models. It associates MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep response time constrained. Business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the less loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog, where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems: the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario. All methodologies will discard requests if the rapidity of the burst is high enough. This scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load with less business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics. Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
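As a toy illustration of the Plan step such a reactive controller might run each sampling period, the sketch below turns an observed arrival rate into a target instance count while respecting a cooldown and a business-imposed cap on load-growth acceleration. The function name, the headroom factor, and the numbers are assumptions made for illustration, not the paper's actual model.

```python
import math

def desired_instances(arrival_rate, per_instance_rate, current, max_accel,
                      headroom=0.2, cooldown_ok=True):
    """One MAPE-K 'Plan' step of a reactive horizontal auto-scaler (sketch).
    arrival_rate: requests/s observed in the last sampling window
    per_instance_rate: requests/s one instance can serve within the response-time budget
    max_accel: cap on how fast the accepted load may grow (business-driven limit)"""
    if not cooldown_ok:                      # still inside the cooldown period: hold steady
        return current
    admissible = min(arrival_rate, current * per_instance_rate * (1.0 + max_accel))
    target = math.ceil(admissible * (1.0 + headroom) / per_instance_rate)
    return max(target, 1)

# Example: 230 req/s arriving, each instance handles 40 req/s, 5 instances running
print(desired_instances(230, 40, current=5, max_accel=0.5))   # -> 7
```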
Procedia PDF Downloads 93
107 Rheological Properties of Thermoresponsive Poly(N-Vinylcaprolactam)-g-Collagen Hydrogel
Authors: Serap Durkut, A. Eser Elcin, Y. Murat Elcin
Abstract:
Stimuli-sensitive polymeric hydrogels have received extensive attention in the biomedical field due to their sensitivity to physical and chemical stimuli (temperature, pH, ionic strength, light, etc.). This study describes the rheological properties of a novel thermoresponsive poly(N-vinylcaprolactam)-g-collagen hydrogel. In the study, we first synthesized a novel carboxyl-group-terminated thermoresponsive poly(N-vinylcaprolactam), PNVCL-COOH, via facile free radical polymerization. This compound was then effectively grafted with native collagen by forming covalent bonds between the carboxylic acid groups at the chain ends and the amine groups of collagen using a cross-linking agent (EDC/NHS), yielding PNVCL-g-Col. The newly formed hybrid hydrogel displayed novel properties, such as increased mechanical strength and thermoresponsive characteristics. PNVCL-g-Col showed a lower critical solution temperature (LCST) of 38ºC, which is very close to body temperature. Rheological studies determine the structural-mechanical properties of materials and serve as a valuable tool for characterizing them. The rheological properties of hydrogels are described in terms of two dynamic mechanical properties: the elastic modulus G′ (also known as dynamic rigidity), representing the reversible stored energy of the system, and the viscous modulus G″, representing the irreversible energy loss. To characterize PNVCL-g-Col, the rheological properties were measured as a function of temperature and time during the phase transition. Below the LCST, favorable interactions allowed the dissolution of the polymer in water via hydrogen bonding. At temperatures above the LCST, PNVCL molecules within PNVCL-g-Col aggregated due to dehydration, causing the hydrogel structure to become dense. When the temperature reached ~36ºC, the G′ and G″ values crossed over, indicating that PNVCL-g-Col underwent a sol-gel transition, forming an elastic network. Following a temperature plateau at 38ºC, near human body temperature, the sample displayed stable elastic network characteristics. The G′ and G″ values of the PNVCL-g-Col solutions sharply increased in the 6-9 minute interval, due to rapid transformation into a gel-like state and the formation of elastic networks. Copolymerization with collagen leads to an increase in G′, as the collagen structure contains a flexible polymer chain, which bestows its elastic properties. The elasticity of the proposed structure correlates with the number of intermolecular cross-links in the hydrogel network, increasing viscosity. However, at 8 minutes, the G′ and G″ values sharply decreased for pure collagen solutions due to the decomposition of the elastic and viscous network. Complex viscosity is related to the mechanical performance and the resistance opposing deformation of the hydrogel. The complex viscosity of the PNVCL-g-Col hydrogel changed drastically with temperature, and the mechanical performance of the PNVCL-g-Col hydrogel network increased, exhibiting less deformation. The rheological assessment of the novel thermoresponsive PNVCL-g-Col hydrogel showed that the network has stronger mechanical properties due to both permanent stable covalent bonds and physical interactions, such as hydrogen and hydrophobic bonds, depending on temperature. Keywords: poly(N-vinylcaprolactam)-g-collagen, thermoresponsive polymer, rheology, elastic modulus, stimuli-sensitive
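For reference, the quantities discussed above follow the standard oscillatory-rheology definitions |G*| = sqrt(G′² + G″²) and |η*| = |G*|/ω, and the sol-gel transition is commonly read off as the G′/G″ crossover. The short sketch below applies these definitions; the numerical values are hypothetical placeholders, not the study's data.

```python
import numpy as np

def complex_viscosity(G_prime, G_double_prime, omega):
    """|eta*| = |G*| / omega with |G*| = sqrt(G'^2 + G''^2)."""
    return np.sqrt(np.asarray(G_prime) ** 2 + np.asarray(G_double_prime) ** 2) / omega

def gel_point(time, G_prime, G_double_prime):
    """First time at which G' >= G'' (the crossover used to read the sol-gel transition)."""
    cross = np.nonzero(np.asarray(G_prime) >= np.asarray(G_double_prime))[0]
    return time[cross[0]] if cross.size else None

# Illustrative moduli (Pa), sampled each minute at omega = 1 rad/s
t   = np.arange(0, 12)
Gp  = np.array([2, 3, 4, 5, 8, 15, 40, 120, 300, 420, 450, 460])
Gpp = np.array([5, 6, 7, 8, 10, 14, 30,  80, 150, 180, 185, 190])
print("gel point (min):", gel_point(t, Gp, Gpp))
print("final |eta*| (Pa.s):", round(complex_viscosity(Gp, Gpp, omega=1.0)[-1], 1))
```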
Procedia PDF Downloads 243
106 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface
Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis
Abstract:
In the context of the postgraduate course Productive Design, Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism chosen from daily experience and then analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, the students designed it through a parametric model in the Grasshopper3D algorithmic processor for Rhino and planned its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role that the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as the snake's ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement in a polymorphic space divider, the use of a coil-type joint was studied. In the Grasshopper program, the simulation of the desired motion for the polymorphic surface was tested by applying a coil to a sinusoidal curve and a spiral curve. It was important throughout the process that the points corresponding to the nodes of the real object remain constant in number, as do the distances between them, and that the elasticity of the construction be achieved through a modular movement of the coil rather than through an elastic element (material) at the nodes. Using a mesh (repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element contributes to its coherence and stability. Depending on the positions of the elements in terms of the level of support, different perspectives are created in the visual perception of the adjacent space. For the implementation of the model at 1:3 scale (0.50 m x 2.00 m), the load-bearing structure uses aluminum rods of Φ6 mm for the basic pillars and Φ2.50 mm for the secondary columns. Filling elements and nodes are of similar material and were made of MDF surfaces. During the design process, four trapezoidal patterns were chosen to function as filling elements, and to support their assembly a different engraving facet was made for each. The nodes have holes that can be pierced by the rods, while their connection point with the patterns has a half-carved recess; the patterns have a corresponding recess. The nodes are of two different types, depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved using a laser cutter, and the patterns were attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object. Keywords: polymorphic, locomotion, sinusoidal curves, parametric
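As a minimal stand-in for the Grasshopper definition described above, the sketch below places a fixed number of nodes on a sinusoidal (serpentine) path and shifts only the phase between frames, mimicking the modular movement of the coil. It is a simplified illustration: amplitude, wavelength and node count are arbitrary assumptions, and it does not enforce constant arc-length spacing between nodes the way the actual parametric model does.

```python
import numpy as np

def serpentine_nodes(n_nodes=12, amplitude=0.25, wavelength=1.0, length=2.0, phase=0.0):
    """Place n_nodes on the serpentine path y = A * sin(2*pi*x / wavelength + phase).
    Keeping n_nodes constant while varying `phase` stands in for the coil's modular motion."""
    x = np.linspace(0.0, length, n_nodes)
    y = amplitude * np.sin(2.0 * np.pi * x / wavelength + phase)
    return np.column_stack([x, y])

# Two 'animation frames' of the divider: node count stays constant, only the phase shifts
frame_a = serpentine_nodes(phase=0.0)
frame_b = serpentine_nodes(phase=np.pi / 4)
print(np.round(frame_a[:3], 3))
print(np.round(frame_b[:3], 3))
```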
Procedia PDF Downloads 105
105 Providing Leadership in Nigerian University Education Research Enterprise: The Imperative of Research Ethics
Authors: O. O. Oku, K. S. Jerry-Alagbaoso
Abstract:
It is universally acknowledged that the primary function of universities is the generation and dissemination of knowledge. This mission is pursued through the research component of the university programme, especially at the post-graduate level. Senior academic staff teach, supervise and provide general academic leadership to post-graduate students, who are expected to carry out research leading to the presentation of a dissertation as a requirement for the award of a doctoral degree in their various disciplines. Carrying out the research enterprise involves a great deal of collaboration among individuals and communities. The need to safeguard the interests of everyone involved in the enterprise makes the development of ethical standards in research imperative. Ensuring the development and effective application of such ethical standards falls within the leadership role of vice-chancellors, deans of post-graduate schools/faculties, heads of departments and supervisors. It is the relevance and application of such ethical standards in Nigerian university research efforts that this study discusses. The study adopted the descriptive research design. A researcher-made 4-point rating scale was used to elicit information from post-graduate dissertation supervisors sampled from one university in each of the six geo-political zones in Nigeria, using the purposive sampling technique. The data collected were analysed using the mean score and standard deviation. The findings of the study include, among others, that there are several cases of unethical practices by Ph.D dissertation students in Nigerian universities. Prominent among these are duplicating research topics, making unauthorized copies of data, papers or computer programmes, failing to acknowledge the contributions of relevant people and authors, and rigging an experiment to pre-empt the result. Some of the causes of the unethical practices, according to the respondents, include inadequate funding of universities resulting in inadequate remuneration for university teachers, inadequacy of equipment and infrastructure, poor supervision of Ph.D students, poverty on the side of the student researchers and non-application of sanctions on violators. Improved funding of the Nigerian university system with emphasis on both staff and student research efforts, admitting academically oriented students into the Ph.D programme and ensuring the application of appropriate sanctions in cases of unethical conduct in research featured prominently among the needed leadership imperatives. Based on the findings of the study, the researchers recommend the development of university research policies that are closely tied to each university’s strategic plan. Such a plan should explain the research focus that will attract more funding and direct students’ interest towards it without violating the principle of academic freedom. The plan should also incorporate the establishment of a research administration office to provide the necessary link between students and funding agencies and also organise training for supervisors on the leadership activities expected of them, while educating students on the processes involved in carrying out a qualitative and acceptable research study. Such an exercise should include the ethical principles and guidelines that comprise all parts of research, from the research topic through the literature review to the design and the truthful reporting of results. Keywords: academic leadership, ethical standards, research stakeholders, research enterprise
Procedia PDF Downloads 242
104 Delivering User Context-Sensitive Service in M-Commerce: An Empirical Assessment of the Impact of Urgency on Mobile Service Design for Transactional Apps
Authors: Daniela Stephanie Kuenstle
Abstract:
Complex industries such as banking or insurance experience slow growth in mobile sales. While today’s mobile applications are sophisticated and enable location-based and personalized services, consumers prefer online or even face-to-face services to complete complex transactions. A possible reason for this reluctance is that the service provided within transactional mobile applications (apps) does not adequately correspond to users’ needs. Therefore, this paper examines the impact of the user context on mobile service (m-service) in m-commerce. Motivated by the potential which context-sensitive m-services hold for the future, the impact of temporal variations, as a dimension of user context, on m-service design is examined. In particular, the research question asks: Does consumer urgency function as a determinant of m-service composition in transactional apps by moderating the relation between m-service type and m-service success? Thus, the aim is to explore the moderating influence of urgency on m-service types, which include Technology Mediated Service and Technology Generated Service. While mobile applications generally comprise features of both service types, this thesis discusses whether unexpected urgency changes customer preferences for m-service types and how this consequently impacts overall m-service success, represented by purchase intention, loyalty intention and service quality. An online experiment with a random sample of N=1311 participants was conducted. Participants were divided into four treatment groups varying in m-service type and urgency level. They were exposed to two different urgency scenarios (high/low) and two different app versions conveying either technology-mediated or technology-generated service. Subsequently, participants completed a questionnaire to measure the effectiveness of the manipulation as well as the dependent variables. The research model was tested for direct and moderating effects of m-service type and urgency on m-service success. Three two-way analyses of variance confirmed the significance of the main effects but demonstrated no significant moderation of urgency on m-service types. The analysis of the gathered data did not confirm a moderating effect of urgency between m-service type and service success. Yet, the findings suggest an additive effects model with the highest purchase and loyalty intention for Technology Generated Service under high urgency, while Technology Mediated Service under low urgency demonstrates the strongest effect on service quality. The results also indicate an antagonistic relation between service quality and purchase intention depending on the level of urgency. Although confirmation of the significance of this finding is required, it suggests that only service convenience, as one dimension of mobile service quality, delivers conditional value under high urgency. This suggests a curvilinear pattern of service quality in e-commerce. Overall, the paper illustrates the complex interplay of technology, user variables, and service design. With this, it contributes to a finer-grained understanding of the relation between m-service design and situation dependency. Moreover, the importance of delivering situational value with apps depending on user context is emphasized. Finally, the present study raises the demand to continue researching the impact of situational variables on m-service design in order to develop more sophisticated m-services. Keywords: mobile consumer behavior, mobile service design, mobile service success, self-service technology, situation dependency, user-context sensitivity
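The statistical test reported above (a two-way analysis of variance per outcome) can be reproduced in outline with statsmodels. The sketch below runs one such two-way ANOVA for a purchase-intention outcome; the dataframe uses randomly generated stand-in data, not the study's N=1311 sample, and the variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data standing in for the experiment's 2 x 2 (service type x urgency) design
rng = np.random.default_rng(0)
n = 80
df = pd.DataFrame({
    "service_type": rng.choice(["mediated", "generated"], n),
    "urgency": rng.choice(["low", "high"], n),
    "purchase_intention": rng.normal(4.0, 1.0, n),
})

# Two-way ANOVA: main effects plus the service_type x urgency interaction (the moderation term)
model = ols("purchase_intention ~ C(service_type) * C(urgency)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```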
Procedia PDF Downloads 268
103 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-stokes Raman Spectroscopy
Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai
Abstract:
Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential under circumstances of insufficient structure-function knowledge. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, the capabilities of current screening methods have limitations. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the fluorescent label’s binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turnaround time is 30 minutes per sample by HPLC, which limits the enzyme improvement achievable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly in demand. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify label-free chemical components specifically from their inherent molecular vibrations. These characteristic vibrational signals are generated from different vibrational modes of chemical bonds. With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, it can magnify the signal levels to several orders of magnitude greater than spontaneous Raman systems, and therefore it has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and nicotinamide adenine dinucleotide oxidized form (NAD+) to acetaldehyde and nicotinamide adenine dinucleotide reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which is generated from the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH. This CARS measurement result was consistent with the result of a conventional method, UV-Vis. CARS is expected to have application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time. Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy
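To illustrate how an intensity reading at 1660 cm⁻¹ can be turned into an NADH concentration and a reaction rate, the sketch below fits a linear calibration curve to standards and then fits the slope of a time course. All calibration and time-course values are hypothetical placeholders, not the study's measurements, and a real CARS analysis would also need to handle the non-resonant background of the spectra.

```python
import numpy as np

# Hypothetical calibration: signal at 1660 cm^-1 for known NADH standards (mM)
std_conc   = np.array([0.0, 2.5, 5.0, 10.0, 20.0])
std_signal = np.array([0.02, 0.31, 0.60, 1.18, 2.35])
slope, intercept = np.polyfit(std_conc, std_signal, 1)   # linear calibration curve

def nadh_concentration(signal):
    """Convert a measured 1660 cm^-1 intensity to NADH concentration (mM)."""
    return (signal - intercept) / slope

# Hypothetical time course of an alcohol dehydrogenase reaction
t = np.array([0, 30, 60, 90, 120])                        # seconds
signal = np.array([0.05, 0.40, 0.74, 1.05, 1.30])
conc = nadh_concentration(signal)
rate_mM_per_s, _ = np.polyfit(t, conc, 1)                 # initial-rate estimate
print(round(rate_mM_per_s * 60, 3), "mM NADH per minute")
```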
Procedia PDF Downloads 141
102 Regulatory Governance as a De-Parliamentarization Process: A Contextual Approach to Global Constitutionalism and Its Effects on New Arab Legislatures
Authors: Abderrahim El Maslouhi
Abstract:
The paper aims to analyze an often-overlooked dimension of global constitutionalism, which is the rise of the regulatory state and its impact on parliamentary dynamics in transition regimes. In contrast to Majone’s technocratic vision of convergence towards a single regulatory system based on competence and efficiency, national transpositions of regulatory governance and, in general, the relationship to global standards depend primarily upon a number of distinctive parameters. These include the policy formation process, the speed of change, the depth of parliamentary tradition and greater or lesser vulnerability to the normative conditionality of donors, interstate groupings and transnational regulatory bodies. Based on a comparison between three post-Arab Spring countries (Morocco, Tunisia, and Egypt, whose constitutions underwent substantive review in the period 2011-2014) and some European Union member states, the paper intends, first, to assess the degree of permeability to global constitutionalism in different contexts. A noteworthy divide emerges from this comparison. Whereas European constitutions still seem impervious to the lexicon of global constitutionalism, the influence of the latter is obvious in the recently drafted constitutions of Morocco, Tunisia, and Egypt. This is evidenced by their reference to notions such as ‘governance’, ‘regulators’, ‘accountability’, ‘transparency’, ‘civil society’, and ‘participatory democracy’. Second, the study will provide a contextual account of the internal and external rationales underlying the constitutionalization of regulatory governance in the cases examined. Unlike European constitutionalism, where parliamentarism and the tradition of representative government function as a structural mechanism that moderates the de-parliamentarization effect induced by global constitutionalism, Arab constitutional transitions have led to a paradoxical situation: contrary to public demands for further parliamentarization, the 2011 constitution-makers opted for a de-parliamentarization pattern. This is particularly reflected in the procedures established by constitutions and ordinary legislation to handle the interaction between lawmakers and regulatory bodies. Once the ‘constitutional’ and ‘independent’ nature of these agencies is formally endorsed, the birth of these ‘fourth power’ entities, which are neither elected nor directly responsible to elected officials, will raise the question of their accountability. Third, the paper shows that, even in the three selected countries, the intensity of de-parliamentarization varies significantly. In contrast to the radical stance of the Moroccan and Egyptian constituents, who showed greater concern to shield regulatory bodies from legislatures’ scrutiny, the Tunisian case indicates a certain tendency to provide lawmakers with some essential control instruments (e.g., exclusive appointment power, adversarial discussion of regulators’ annual reports, and a dismissal power later held unconstitutional). In sum, the comparison reveals that the transposition of the regulatory state model and, more generally, sensitivity to the legal implications of global conditionality essentially rely on the evolution of real-world power relations at both the national and international levels. Keywords: Arab legislatures, de-parliamentarization, global constitutionalism, normative conditionality, regulatory state
Procedia PDF Downloads 138
101 Academia as Creator of Emerging, Innovative Communities of Practice and Learning
Authors: Francisco Julio Batle Lorente
Abstract:
The present paper aims at presenting a new category of role for academia: proactive creator/promoter of communities of practice in emerging areas of innovation. It is based on research among practitioners in three different areas: social entrepreneurship, alumni engaged in entrepreneurship and innovation, and digital nomads. The concept of a CoP refers to an intentionally created space to share experiences and collectively reflect on the cases arising from practice. Such an endeavour is not explicitly contemplated in the literature on academic roles. The goal of the paper is to provide a framework for this function and to throw some light on the perceptions and priorities of members of emerging communities (78 alumni, 154 social entrepreneurs, and 231 digital nomads) regarding community, learning, engagement, and networking, areas in which the university can help and, by doing so, contribute to signalling the emerging area and creating new opportunities for academia. The research methodology was based on survey research, a specific type of field study that involves the collection of data from a sample of elements drawn from a well-defined population through the use of a questionnaire. It was considered that survey research might be valuable to the present project and help outline the utility of various study designs and future projects with the emerging communities that are the object of the investigation. Open questions were used for different topics, as well as the critical incident technique. A standard technique was used for survey sampling and questionnaire design. Finally, a procedure was defined for pretesting questionnaires and for data collection. The questionnaire was distributed by means of Google Forms. The results indicate that the members of emerging, innovative communities of practice and learning, such as the ones selected for this investigation, lack cohesion, inspiration, networking, opportunities for the creation of social capital, and opportunities for collaboration beyond their existing close networks. The opportunities that arise for academia from proactively helping to articulate CoPs (and communities of learning) relate to key elements of any CoP/CoL: community construction approaches, technological infrastructure, benefits, participation issues and urgent challenges, trust, networking, technical ability/training/development and collaboration. Beyond training, three other areas (networking, collaboration and urgent challenges) were the ones in which the contribution of universities to the communities was considered most interesting and workable by practitioners. The analysis of the responses to the open questions related to the perception of universities reveals terra incognita for universities to explore (signalling new areas, establishing broader collaborations with research, government, media and corporations, and attracting investment). Based on the findings from this research, there is some evidence that CoPs can offer a formal and informal method of professional and interprofessional development for members of any emerging and innovative community and can decrease social and professional isolation. The opportunity they offer to academia can strengthen the entrepreneurial and engaged university identity. It also moves academia into a realm of civic confrontation of present and future challenges in a more proactive way. Keywords: social innovation, new roles of academia, community of learning, community of practice
Procedia PDF Downloads 83