Search results for: syntactic features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3818

98 Delivering User Context-Sensitive Service in M-Commerce: An Empirical Assessment of the Impact of Urgency on Mobile Service Design for Transactional Apps

Authors: Daniela Stephanie Kuenstle

Abstract:

Complex industries such as banking or insurance experience slow growth in mobile sales. While today’s mobile applications are sophisticated and enable location-based and personalized services, consumers prefer online or even face-to-face services to complete complex transactions. A possible reason for this reluctance is that the service provided within transactional mobile applications (apps) does not adequately correspond to users’ needs. Therefore, this paper examines the impact of the user context on mobile service (m-service) in m-commerce. Motivated by the potential which context-sensitive m-services hold for the future, the impact of temporal variations, as a dimension of user context, on m-service design is examined. In particular, the research question asks: Does consumer urgency function as a determinant of m-service composition in transactional apps by moderating the relation between m-service type and m-service success? Thus, the aim is to explore the moderating influence of urgency on m-service types, which include Technology Mediated Service and Technology Generated Service. While mobile applications generally comprise features of both service types, this paper discusses whether unexpected urgency changes customer preferences for m-service types and how this consequently impacts the overall m-service success, represented by purchase intention, loyalty intention and service quality. An online experiment with a random sample of N=1311 participants was conducted. Participants were divided into four treatment groups varying in m-service type and urgency level. They were exposed to two different urgency scenarios (high/low) and two different app versions conveying either technology mediated or technology generated service. Subsequently, participants completed a questionnaire to measure the effectiveness of the manipulation as well as the dependent variables. The research model was tested for direct and moderating effects of m-service type and urgency on m-service success. Three two-way analyses of variance confirmed the significance of the main effects but demonstrated no significant moderation of urgency on m-service types; the gathered data did not confirm a moderating effect of urgency between m-service type and service success. Yet, the findings propose an additive effects model, with the highest purchase and loyalty intention for Technology Generated Service under high urgency, while Technology Mediated Service under low urgency demonstrates the strongest effect for service quality. The results also indicate an antagonistic relation between service quality and purchase intention depending on the level of urgency. Although this finding requires confirmation, it suggests that only service convenience, as one dimension of mobile service quality, delivers conditional value under high urgency, pointing to a curvilinear pattern of service quality in e-commerce. Overall, the paper illustrates the complex interplay of technology, user variables, and service design. With this, it contributes to a finer-grained understanding of the relation between m-service design and situation dependency. Moreover, the importance of delivering situational value with apps depending on user context is emphasized. Finally, the present study raises the demand to continue researching the impact of situational variables on m-service design in order to develop more sophisticated m-services.
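
The 2×2 between-subjects design described above maps onto a standard two-way analysis of variance. The sketch below shows such a test in Python with statsmodels on simulated data; the column names, cell sizes and effect sizes are hypothetical illustrations, not the study's data or code.

```python
# Minimal two-way ANOVA sketch for a 2x2 between-subjects design.
# All data are simulated; names and effect sizes are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 100  # participants per cell (hypothetical; the study used N=1311 overall)
df = pd.DataFrame({
    "service_type": np.repeat(["mediated", "generated"], 2 * n),
    "urgency": np.tile(np.repeat(["low", "high"], n), 2),
})
# Simulate additive main effects with no interaction, mirroring the reported pattern
df["purchase_intention"] = (
    rng.normal(4.0, 1.0, len(df))
    + 0.5 * (df["service_type"] == "generated")
    + 0.4 * (df["urgency"] == "high")
)

model = ols("purchase_intention ~ C(service_type) * C(urgency)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects plus the interaction term
```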

Keywords: mobile consumer behavior, mobile service design, mobile service success, self-service technology, situation dependency, user-context sensitivity

Procedia PDF Downloads 248
97 Sorbitol Galactoside Synthesis Using β-Galactosidase Immobilized on Functionalized Silica Nanoparticles

Authors: Milica Carević, Katarina Banjanac, Marija Ćorović, Ana Milivojević, Nevena Prlainović, Aleksandar Marinković, Dejan Bezbradica

Abstract:

Nowadays, considering the growing awareness of the beneficial effects of functional food on human health, due attention is dedicated to research in the field of obtaining new prominent products exhibiting improved physiological and physicochemical characteristics. Therefore, different approaches to the synthesis of valuable bioactive compounds have been proposed. β-Galactosidase, for example, although mainly utilized as a hydrolytic enzyme, proved to be a promising tool for these purposes. Namely, under particular conditions, such as high lactose concentration, elevated temperatures and low water activities, the reaction of galactose moiety transfer to the free hydroxyl group of an alternative acceptor (e.g. different sugars, alcohols or aromatic compounds) can generate a wide range of potentially interesting products. Up to now, galacto-oligosaccharides and lactulose have attracted the most attention due to their inherent prebiotic properties. The goal of this study was to obtain a novel product, sorbitol galactoside, using a similar reaction mechanism, namely the transgalactosylation reaction catalyzed by β-galactosidase from Aspergillus oryzae. By using a sugar alcohol (sorbitol) as the alternative acceptor, a diverse mixture of potential prebiotics is produced, enabling more favorable functional features. Nevertheless, the introduction of an alternative acceptor into the reaction mixture contributed to the complexity of the reaction scheme, since several potential reaction pathways were introduced. Therefore, a thorough optimization using response surface methodology (RSM) was performed in order to gain insight into the influences of different parameters (lactose concentration, sorbitol to lactose molar ratio, enzyme concentration, NaCl concentration and reaction time), as well as their mutual interactions, on product yield and productivity. In view of product yield maximization, the obtained model predicted an optimal lactose concentration of 500 mM, a molar ratio of sorbitol to lactose of 9, an enzyme concentration of 0.76 mg/ml, a NaCl concentration of 0.8 M, and a reaction time of 7 h. From the aspect of productivity, the optimum substrate molar ratio was found to be 1, while the values for the other factors coincide. In order to additionally improve enzyme efficiency and enable its reuse and potential continuous application, immobilization of β-galactosidase onto tailored silica nanoparticles was performed. These non-porous fumed silica nanoparticles (FNS) were chosen on the basis of their biocompatibility and non-toxicity, as well as their advantageous mechanical and hydrodynamic properties. However, in order to achieve better compatibility between the enzyme and the carrier, modifications of the silica surface using an amino-functional organosilane (3-aminopropyltrimethoxysilane, APTMS) were made. The obtained support with amino functional groups (AFNS) enabled high enzyme loadings and, more importantly, extremely high expressed activities, approximately 230 mg proteins/g and 2100 IU/g, respectively. Moreover, this immobilized preparation showed high affinity towards sorbitol galactoside synthesis. Therefore, the findings of this study could provide a valuable contribution to the efficient production of physiologically active galactosides in immobilized enzyme reactors.
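
As a worked illustration of the RSM step, the sketch below fits a quadratic response surface for yield against two of the five factors and reads off the predicted optimum. The design points and yield values are hypothetical, invented only to show the mechanics; they are not the study's data.

```python
# Sketch of a second-order (quadratic) response-surface fit of product yield
# against two factors; data points and fitted coefficients are hypothetical.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical design points: lactose (mM) and sorbitol:lactose molar ratio
X = np.array([[300, 1], [300, 9], [500, 1], [500, 5], [500, 9], [700, 1], [700, 9]])
y = np.array([18.0, 24.0, 22.0, 27.0, 30.0, 20.0, 26.0])  # yield, % (made up)

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)

# Evaluate the fitted surface on a grid and locate the predicted optimum
lac, ratio = np.meshgrid(np.linspace(300, 700, 81), np.linspace(1, 9, 81))
grid = np.column_stack([lac.ravel(), ratio.ravel()])
pred = model.predict(quad.transform(grid))
best = grid[pred.argmax()]
print(f"predicted optimum: lactose={best[0]:.0f} mM, ratio={best[1]:.1f}")
```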

Keywords: β-galactosidase, immobilization, silica nanoparticles, transgalactosylation

Procedia PDF Downloads 269
96 The Impact of an Improved Strategic Partnership Programme on Organisational Performance and Growth of Firms in the Internet Protocol Television and Hybrid Fibre-Coaxial Broadband Industry

Authors: Collen T. Masilo, Brane Semolic, Pieter Steyn

Abstract:

The Internet Protocol Television (IPTV) and Hybrid Fibre-Coaxial (HFC) Broadband industrial sector landscape is rapidly changing, and organisations within the industry need to stay competitive by exploring new business models so that they can offer new services and products to customers. The business challenge in this industrial sector is meeting or exceeding high customer expectations across multiple content delivery modes. The increasing challenges in the IPTV and HFC broadband industrial sector encourage service providers to form strategic partnerships with key suppliers, marketing partners, advertisers, and technology partners. The need to form enterprise collaborative networks poses a challenge for any organisation in this sector in selecting the right strategic partners: partners who will ensure that the organisation’s services and products are marketed in new markets, who will ensure that customers are efficiently supported by meeting and exceeding their expectations, and who will represent the organisation in a positive manner and contribute to improving its performance. Companies in the IPTV and HFC broadband industrial sector tend to form informal partnerships with suppliers, vendors, system integrators and technology partners. Generally, partnerships are formed without thorough analysis of the real reason a company is forming collaborations, without proper evaluation of prospective partners using specific selection criteria, and with ineffective performance monitoring of partners to ensure that a firm gains real long-term benefits from its partners and gains competitive advantage. Similar tendencies are illustrated in the research case study, which is based on Skyline Communications, a global leader in end-to-end, multi-vendor network management and operational support systems (OSS) solutions. The organisation’s flagship product is the DataMiner network management platform, used by many operators across multiple industries; it can be described as a smart system that intelligently manages complex technology ecosystems for its customers in the IPTV and HFC broadband industry. The approach of the research is to develop the most efficient business model that can be deployed to improve a strategic partnership programme, in order to significantly improve the performance and growth of organisations participating in a collaborative network in the IPTV and HFC broadband industrial sector. This involves proposing and implementing a new strategic partnership model and its main features within the industry, which should bring about significant benefits for all involved companies to achieve value add and an optimal growth strategy. The proposed business model has been developed based on research of existing relationships, value chains and business requirements in this industrial sector and validated in 'Skyline Communications'. The outputs of the business model have been demonstrated and evaluated in the research case study of the IPTV and HFC broadband service provider 'Skyline Communications'.

Keywords: growth, partnership, selection criteria, value chain

Procedia PDF Downloads 99
95 Political Communication in Twitter Interactions between Government, News Media and Citizens in Mexico

Authors: Jorge Cortés, Alejandra Martínez, Carlos Pérez, Anaid Simón

Abstract:

The presence of government, news media, and the general citizenry in social media allows considering interactions between them as a form of political communication (i.e. the public exchange of contradictory discourses about politics). Twitter’s asymmetrical following model (users can follow, mention or reply to other users that do not follow them) could foster alternative democratic practices and have an impact on Mexican political culture, which has been marked by a lack of direct communication channels between these actors. The research aim is to assess Twitter’s role in political communication practices through the analysis of interaction dynamics between government, news media, and citizens by extracting and visualizing data from Twitter’s API to observe general behavior patterns. The hypothesis is that, regardless of the fact that Twitter’s features enable direct and horizontal interactions between actors, users repeat traditional dynamics of interaction without taking full advantage of the possibilities of this medium. Through an interdisciplinary team covering Communication Strategies, Information Design, and Interaction Systems, the activity on Twitter generated by the controversy over the presence of Uber in Mexico City was analysed; an issue of public interest, involving aspects such as public opinion, economic interests and a legal dimension. This research includes techniques from social network analysis (SNA), a methodological approach focused on the comprehension of the relationships between actors through the visual representation and measurement of network characteristics. The analysis of the Uber event comprised data extraction, data categorization, corpus construction, corpus visualization and analysis. In the data extraction stage, TAGS, a Google Sheets template, was used to extract tweets that included the hashtags #UberSeQueda and #UberSeVa, posts containing the string Uber, and tweets directed to @uber_mx. Using scripts written in Python, the data was filtered, discarding tweets with no interaction (replies, retweets or mentions) and locations outside of México. Considerations regarding bots and the omission of anecdotal posts were also taken into account. The utility of graphs to observe interactions of political communication in general was confirmed by the analysis of visualizations generated with programs such as Gephi and NodeXL. However, some aspects require improvements to obtain more useful visual representations for this type of research. For example, link crossings complicate following the direction of an interaction, forcing users to manipulate the graph to see it clearly. It was concluded that some practices prevalent in political communication in Mexico are replicated on Twitter. Media actors tend to group together instead of interacting with others. The political system tends to tweet as an advertising strategy rather than to generate dialogue. However, some actors were identified as bridges establishing communication between the three spheres, generating a more democratic exercise and taking advantage of Twitter’s possibilities. Although interactions on Twitter could become an alternative to political communication, this potential depends on the intentions of the participants and to what extent they are aiming for collaborative and direct communications. Further research is needed to get a deeper understanding of the political behavior of Twitter users and the possibilities of SNA for its analysis.
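
The filtering step described above can be illustrated with a short Python sketch. The authors' actual scripts are not published, so the field names below follow Twitter's classic v1.1 JSON schema, and the input file name is hypothetical.

```python
# Sketch of the kind of filtering described above: keep only tweets that are
# part of an interaction (reply, retweet or mention) and geotagged in Mexico.
# Field names follow Twitter's classic v1.1 JSON; the input file is hypothetical.
import json

def is_interaction(tweet):
    return bool(
        tweet.get("in_reply_to_status_id")
        or tweet.get("retweeted_status")
        or tweet.get("entities", {}).get("user_mentions")
    )

def in_mexico(tweet):
    place = tweet.get("place") or {}
    return place.get("country_code") == "MX"

with open("uber_tweets.json") as f:          # one JSON object per line
    tweets = [json.loads(line) for line in f]

kept = [t for t in tweets if is_interaction(t) and in_mexico(t)]
print(f"kept {len(kept)} of {len(tweets)} tweets")
```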

Keywords: interaction, political communication, social network analysis, Twitter

Procedia PDF Downloads 195
94 A Digital Environment for Developing Mathematical Abilities in Children with Autism Spectrum Disorder

Authors: M. Isabel Santos, Ana Breda, Ana Margarida Almeida

Abstract:

Research on the academic abilities of individuals with autism spectrum disorder (ASD) underlines the importance of mathematics interventions. Yet the proposal of digital applications for children and youth with ASD continues to attract little attention, namely regarding the development of mathematical reasoning, even though digital technologies are an area of great interest for individuals with this disorder and their use is certainly a facilitative strategy in the development of mathematical abilities. The use of digital technologies can be an effective way to create innovative learning opportunities for these students and to develop creative, personalized and constructive environments, where they can develop differentiated abilities. Children with ASD often respond well to learning activities involving information presented visually. In this context, we present the digital Learning Environment on Mathematics for Autistic children (LEMA), a research project conducted within a PhD in Multimedia in Education and developed by the Thematic Line Geometrix, located in the Department of Mathematics, in a collaboration effort with the DigiMedia Research Center of the Department of Communication and Art (University of Aveiro, Portugal). LEMA is a digital mathematical learning environment whose activities are dynamically adapted to the user’s profile, towards the development of the mathematical abilities of children aged 6–12 years diagnosed with ASD. LEMA has already been evaluated with end-users (both students and expert teachers) and, based on the analysis of the collected data, readjustments were made, enabling the continuous improvement of the prototype, namely considering the integration of universal design for learning (UDL) approaches, which are of great importance in ASD due to its heterogeneity. The learning strategies incorporated in LEMA are: (i) provides options for custom choice of math activities, according to the user’s profile; (ii) integrates simple interfaces with few elements, presenting only the features and content needed for the ongoing task; (iii) uses a simple visual and textual language; (iv) uses different types of feedback (auditory, visual, positive/negative reinforcement, hints with helpful instructions including math concept definitions, solved math activities using split and easier tasks and, finally, videos/animations that show a solution to the proposed activity); (v) provides information in multiple representations, such as text, video, audio and image, for better content and vocabulary understanding, in order to stimulate, motivate and engage users in mathematical learning, also helping users to focus on content; (vi) avoids using elements that distract or interfere with focus and attention; (vii) provides clear instructions and orientation about tasks to ease the user's understanding of the content and the content language, in order to stimulate, motivate and engage the user; and (viii) uses buttons, familiar icons and contrast between font and background. Since these children may experience little sensory tolerance and may have impaired motor skills, besides the possibility of interacting with LEMA through the mouse (point and click with a single button), the user can interact with LEMA through a Kinect device (using simple gesture moves).
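
Strategy (i), adapting activities to the user's profile, could be implemented with logic along the lines of the sketch below. This is purely illustrative, not LEMA's actual code; all names and thresholds are invented.

```python
# Illustrative sketch (not LEMA's implementation) of profile-driven difficulty
# adaptation: step the level up after consistent success, down after failure.
from dataclasses import dataclass, field

@dataclass
class Profile:
    level: int = 1                                 # current difficulty (1..5)
    history: list = field(default_factory=list)    # 1 = solved, 0 = not solved

def next_level(profile: Profile) -> int:
    recent = profile.history[-3:]
    if len(recent) == 3 and all(recent):           # three successes: step up
        profile.level = min(profile.level + 1, 5)
    elif recent and not any(recent):               # repeated failure: step down
        profile.level = max(profile.level - 1, 1)
    return profile.level

p = Profile()
p.history += [1, 1, 1]
print(next_level(p))  # -> 2
```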

Keywords: autism spectrum disorder, digital technologies, inclusion, mathematical abilities, mathematical learning activities

Procedia PDF Downloads 95
93 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation

Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira

Abstract:

The comprehension of the features of the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs) - mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts) - is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications are expected in their original energy spectra, anisotropy, and chemical compositions due to interactions with low energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space range for the magnetic fields in the universe is very large because the field strength and especially the orientation have big uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulations of charged particles traveling through a simulated magnetized universe are the straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: an accurate numerical modeling of charged particle diffusion in magnetic fields, and an accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to impose that the particle tracking method conserve the particle’s total energy and that energy changes result only from interactions with background photons. Hence, special attention should be paid to computational effects. Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Moreover, each cell has its field orientation randomly chosen, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities. The smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses have been treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. We have shown, for a particle with the typical energy of interest, the performance of the integration method in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy. Additionally, we plotted the distance amplification from rectilinear propagation as a function of the traveled distance, of the particle's magnetic rigidity (without energy losses), and of the particle's energy (with energy losses), to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
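
One standard way to satisfy the energy-conservation requirement stated above is a Boris-type pusher, whose magnetic rotation changes the direction of the momentum but not its magnitude, so that energy changes can be reserved for the photon-interaction terms. The sketch below is a minimal single-step illustration under simplified assumptions (ultra-relativistic proton, uniform field, SI units); it is not the authors' code.

```python
# Sketch of an energy-conserving tracking step: the Boris rotation advances a
# relativistic charged particle in a pure magnetic field without changing |p|.
# Units and field layout are simplified; this is not the study's tracking code.
import numpy as np

C = 2.998e8           # speed of light, m/s
E_CHARGE = 1.602e-19  # elementary charge, C

def boris_rotate(p, B, charge, energy_J, dt):
    """Rotate momentum p (kg m/s) around field B (T) over one step dt (s)."""
    # Boris t-vector; for ultra-relativistic nuclei, gamma*m ~ E / c^2
    t = charge * B * dt * C**2 / (2.0 * energy_J)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p + np.cross(p, t)
    return p + np.cross(p_prime, s)

# One proton at 1 EeV in a 1 nG field: |p| (hence energy) is preserved
E = 1e18 * E_CHARGE                   # 1 EeV in joules
p = np.array([E / C, 0.0, 0.0])       # ultra-relativistic: p ~ E/c
B = np.array([0.0, 0.0, 1e-13])       # 1 nG = 1e-13 T
p_new = boris_rotate(p, B, E_CHARGE, E, dt=3.156e13)   # ~1 Myr step
print(np.linalg.norm(p_new) / np.linalg.norm(p))       # -> 1.0 to machine precision
```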

Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy

Procedia PDF Downloads 104
92 The Role of a Biphasic Implant Based on a Bioactive Silk Fibroin for Osteochondral Tissue Regeneration

Authors: Lizeth Fuentes-Mera, Vanessa Perez-Silos, Nidia K. Moncada-Saucedo, Alejandro Garcia-Ruiz, Alberto Camacho, Jorge Lara-Arias, Ivan Marino-Martinez, Victor Romero-Diaz, Adolfo Soto-Dominguez, Humberto Rodriguez-Rocha, Hang Lin, Victor Pena-Martinez

Abstract:

Biphasic scaffolds in cartilage tissue engineering have been designed to influence not only the recapitulation of the osteochondral architecture but also to take advantage of the healing ability of bone to promote implant integration with the surrounding tissue and thereby bone restoration and cartilage regeneration. This study reports the development and characterization of a biphasic scaffold based on the assembly of a cartilage phase, constituted by fibroin biofunctionalized with bovine cartilage matrix, cellularized with pre-chondrocytes differentiated from adipose tissue stem cells (autologous), and well attached to a bone phase (decellularized bovine bone), to mimic the structure of the native tissue and to promote cartilage regeneration in a model of joint damage in pigs. Biphasic scaffolds were assembled by fibroin crystallization with methanol. The histological and ultrastructural architectures were evaluated by optical and scanning electron microscopy, respectively. Mechanical tests were conducted to evaluate the Young's modulus of the implant. For the biological evaluation, pre-chondrocytes were loaded onto the scaffolds, and cellular adhesion, proliferation, and gene expression analysis of cartilage extracellular matrix components were performed. The scaffolds that were cellularized and matured for 10 days were implanted into critical osteochondral defects, 3 mm in diameter and 9 mm in depth, in a porcine model (n=4). Three treatments were applied per knee: group 1: monophasic cellular scaffold (MS) (single chondral phase); group 2: biphasic scaffold cellularized only in the chondral phase (BS1); group 3: BS cellularized in both the bone and chondral phases (BS2). Simultaneously, a control without treatment was evaluated. Four weeks after surgery, integration and tissue regeneration were analyzed by X-rays, histology and immunohistochemistry. The mechanical assessment showed that the acellular biphasic composites exhibited a Young's modulus of 805.01 kPa, comparable to that of native cartilage (400-800 kPa). In vitro biological studies revealed the chondroinductive ability of the biphasic implant, evidenced by an increase in sulfated glycosaminoglycans (GAGs) and type II collagen, both secreted by the chondrocytes cultured on the scaffold during 28 days. No evidence of adverse or inflammatory reactions was observed in the in vivo trial; however, in group 1, the defects were not reconstructed. In groups 2 and 3, good integration of the implant with the surrounding tissue was observed. Defects in group 2 were filled by hyaline cartilage and normal bone. Group 3 defects showed fibrous repair tissue. In conclusion, our findings demonstrated the efficacy of a biphasic and bioactive scaffold based on silk fibroin, which combined chondroinductive features and biomechanical capability with appropriate integration with the surrounding tissue, representing a promising alternative for osteochondral tissue-engineering applications.

Keywords: biphasic scaffold, extracellular cartilage matrix, silk fibroin, osteochondral tissue engineering

Procedia PDF Downloads 125
91 Municipalities as Enablers of Citizen-Led Urban Initiatives: Possibilities and Constraints

Authors: Rosa Nadine Danenberg

Abstract:

In recent years, bottom-up urban development has started growing as an alternative to conventional top-down planning. In large proportions, citizens and communities initiate small-scale interventions, suddenly seeming to form a trend. As a result, more and more cities are witnessing not only the growth of but also an interest in these initiatives, as they bear the potential to reshape urban spaces. Such alternative city-making efforts cause new dynamics in urban governance, with inevitable consequences for controlled city planning and its administration. The emergence of enabling relationships between top-down and bottom-up actors signals an increasingly common urban practice. Various case studies show that an enabling relationship is possible; yet how it can be optimally realized stays rather underexamined. Therefore, the seemingly growing worldwide phenomenon of ‘municipal bottom-up urban development’ necessitates an adequate governance structure. As such, the aim of this research is to contribute knowledge on how municipalities can enable citizen-led urban initiatives from a governance innovation perspective. Empirical case-study research in Stockholm and Istanbul, derived from interviews with founders of four citizen-led urban initiatives and one municipal representative in each city, provided valuable insights into the possibilities and constraints for enabling practices. On the one hand, diverging outcomes emphasize the extreme oppositional features of the two cases (Stockholm and Istanbul). Firstly, the two cities’ characteristics are drastically different. Secondly, the ideologies and motives for the initiatives to emerge vary widely. Thirdly, the major constraints for citizen-led urban initiatives in relating to the municipality are considerably different. Two types of municipal organizational structure produce different underlying mechanisms which give rise to the constraints. The first municipal organizational structure is steered by bureaucracy (Stockholm). It produces an administrative division that brings up constraints such as the lack of responsibility, transparency and continuity by municipal representatives. The second structure is dominated by municipal politics and governmental hierarchy (Istanbul). It produces informality, lack of transparency and a fragmented civil society. In order to cope with the constraints produced by both types of organizational structure, the initiatives have adjusted their organization to the municipality’s underlying structures. On the other hand, this paper has in fact also come to a rather unifying conclusion. Interestingly, the suggested possibilities for an enabling relationship point to converging new urban governance arrangements. This could imply that for the two varying types of municipal organizational structure there is an appropriate governance structure: namely, the combination of a neighborhood council with a municipal guide, with allowance for the initiatives to adopt a politicizing attitude, is found to coincide in both cases. This combination in particular appears key to overcoming the varying constraints. A municipal guide steers the initiatives through bureaucratic struggles, is supported by co-production methods, and balances out municipal politics. Next, a neighborhood council that is politically neutral and run by local citizens can function as an umbrella for citizen-led urban initiatives. What is crucial is that it should cater for a more entangled relationship between municipalities and initiatives, with enhanced involvement of the initiatives in decision-making processes and limited involvement of the prevailing constraints pointed out in this research.

Keywords: bottom-up urban development, governance innovation, Istanbul, Stockholm

Procedia PDF Downloads 193
90 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has a history of growth, which means a succession of different concepts and writing techniques. The trajectory of settling this phenomenon does not necessarily follow chronology: there are highly complex primary stages and advanced stages of returning to simple forms of writing. In folklore, the plurimelodic simultaneities are free or random and originate from the (unintentional) differences/‘deviations’ from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible rhythmic and non-periodic/immeasurable framework, proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, the heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the sound vocabulary elements – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, adding to the ‘classic’ pathway of writing typologies (monody – polyphony – homophony), heterophony – applied equally to structures of modal, serial or synthesis vocabulary – necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu experimented – along with the mathematical organization of heterophony according to his own original methods – with the possibility of extrapolating this phenomenon to the macrostructural plane, reaching in this way the unique form of ‘synchrony’. Founded on the coincidentia oppositorum principle (involving the ‘one-multiple’ binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm of articulation of two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism with macrotemporal amplitude, a strategy the composer cultivated practically throughout his creation (see the works: Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case for the level of complexity achieved by this type of vertical syntax in twentieth-century music.

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 316
89 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time

Authors: Deepak Loura

Abstract:

The most contemporary method of producing horticultural crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is extremely productive, efficient in its use of land and water, and environmentally friendly. The technology entails growing horticultural crops in a controlled environment where variables such as temperature, humidity, light, soil, water, fertilizer, etc. are adjusted to achieve optimal output and enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products by global markets. By covering the crop, it is possible to control the macro- and microenvironments, enhancing plant performance and allowing for longer production times, earlier harvests, and higher yields of higher quality. These shielding features alter the environment of the plant while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops. If any of these high-value crops are cultivated in protected environments like greenhouses, net houses, tunnels, etc., this profit can be multiplied. Post-harvest losses of vegetables and cut flowers are extremely high (20–0%), but sheltered growing techniques and year-round cropping can greatly reduce post-harvest losses and enhance yield by 5–10 times. Seasonality and weather have a big impact on the production of vegetables and flowers, whose variety results in significant price and quality fluctuations. For the application of current technology in crop production, achieving a balance between year-round availability of vegetables and flowers, minimal environmental impact, and remaining competitive is a significant problem. The future of agriculture lies in protected cultivation, since population growth is reducing the amount of land that may be held. Protected agriculture is a particularly profitable endeavor for small landholdings. Small greenhouses, net houses, nurseries, and low tunnel greenhouses can all be built by farmers to increase their income. Protected agriculture is also aided by the rise in biotic and abiotic stress factors. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can be thought of as a kind of precise, forward-thinking, parallel agriculture that covers almost all aspects of farming and is subject to further scrutiny for technical applicability to local circumstances, farmer economics, and market economics.

Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture

Procedia PDF Downloads 54
88 Selective Immobilization of Fructosyltransferase onto Glutaraldehyde Modified Support and Its Application in the Production of Fructo-Oligosaccharides

Authors: Milica B. Veljković, Milica B. Simović, Marija M. Ćorović, Ana D. Milivojević, Anja I. Petrov, Katarina M. Banjanac, Dejan I. Bezbradica

Abstract:

In recent decades, the scientific community has recognized the growing importance of prebiotics, and therefore numerous studies are focused on their economic production, due to their low presence in natural resources. It has been confirmed that prebiotics are a source of energy for probiotics in the gastrointestinal tract (GIT) and enable their proliferation, consequently leading to the normal functioning of the intestinal microbiota. Also, the products of their fermentation are short-chain fatty acids (SCFA), which play a key role in maintaining and improving the health not only of the GIT but also of the whole organism. Among several confirmed prebiotics, fructooligosaccharides (FOS) are considered interesting candidates for use in a wide range of products in the food industry. They are characterized as low-calorie and non-cariogenic substances that represent an adequate sugar substitute and can be considered suitable for use in products intended for diabetics. The subject of this research is the production of FOS by transforming sucrose using a fructosyltransferase (FTase) present in the commercial preparation Pectinex® Ultra SP-L, with special emphasis on the development of an adequate FTase immobilization method that would enable selective isolation of the enzyme responsible for the synthesis of FOS from the complex enzymatic mixture. This would lead to considerable enzyme purification and allow its direct incorporation into different sucrose-based products without the fear that the action of the other hydrolytic enzymes may adversely affect the products' functional characteristics. Accordingly, the possibility of selective immobilization of the enzyme using a support with primary amino groups, Purolite® A109, previously activated and modified using glutaraldehyde (GA), was investigated. In the initial phase of the research, the effects of individual immobilization parameters such as pH, enzyme concentration, and immobilization time were investigated to optimize the process, using support chemically activated with 15% and 0.5% GA to form dimers and monomers, respectively. It was determined that highly active immobilized preparations (371.8 IU/g of support for the dimer and 213.8 IU/g of support for the monomer) were achieved under acidic conditions (pH 4), provided that the enzyme concentration was 50 mg/g of support, after 7 h and 3 h, respectively. Bearing in mind the obtained activity results, it is noticeable that the dimer form showed higher reactivity than the monomer form. Also, in the case of support modification using 15% GA, the ratio of the activity immobilization yields of FTase and pectinase (the dominant component of the enzyme mixture) was 16.45, indicating the high feasibility of selective immobilization of FTase on the modified polystyrene resin. After obtaining immobilized preparations with satisfactory features, they were tested in the FOS synthesis reaction under the determined optimal conditions. Maximum FOS yields of approximately 50% of total carbohydrates in the reaction mixture were recorded after 21 h. Finally, it can be concluded that the examined immobilization method yielded a highly active, stable and, more importantly, refined enzyme preparation that can be further utilized on a larger scale for the development of continuous processes for FOS synthesis, as well as for the modification of different sucrose-based media.
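
The selectivity figure quoted above is the ratio of the activity immobilization yields of the two enzymes. The sketch below shows the arithmetic; the immobilized and offered activity values are placeholders chosen only so that the ratio lands near 16.45, not measured data.

```python
# Back-of-envelope sketch of the selectivity figure: ratio of activity
# immobilization yields for FTase vs. pectinase. All numbers are placeholders.
def activity_yield(immobilized_IU_per_g, offered_IU_per_g):
    """Fraction of offered activity expressed on the support."""
    return immobilized_IU_per_g / offered_IU_per_g

ftase_yield = activity_yield(immobilized_IU_per_g=371.8, offered_IU_per_g=500.0)     # placeholder offer
pectinase_yield = activity_yield(immobilized_IU_per_g=22.6, offered_IU_per_g=500.0)  # placeholder

selectivity = ftase_yield / pectinase_yield
print(f"selectivity (FTase/pectinase): {selectivity:.1f}")  # ~16.5 with these numbers
```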

Keywords: chemical modification, fructooligosaccharides, glutaraldehyde, immobilization of fructosyltransferase

Procedia PDF Downloads 150
87 Accurate Energy Assessment Technique for Mine-Water District Heat Network

Authors: B. Philip, J. Littlewood, R. Radford, N. Evans, T. Whyman, D. P. Jones

Abstract:

UK buildings and energy infrastructures are heavily dependent on natural gas, a large proportion of which is used for domestic space heating. However, approximately half of the gas consumed in the UK is imported. Improving energy security and reducing carbon emissions are major government drivers for reducing gas dependency. In order to do so, there needs to be a wholesale shift in the energy provision to householders without impacting on thermal comfort levels, convenience or cost of supply to the end user. Heat pumps are seen as a potential alternative in modern, well-insulated homes; however, can the same be said of older homes? A large proportion of the housing stock in Britain was built prior to 1919. The age of the buildings bears testimony to the quality of construction; however, their thermal performance falls far below the minimum currently set by UK building standards. In recent years, significant sums of money have been invested to improve energy efficiency and combat fuel poverty in some of the most deprived areas of Wales. Increasing the energy efficiency of older properties remains a significant challenge, which cannot be achieved through insulation and air-tightness interventions alone, particularly when alterations to historically important architectural features of the building are not permitted. This paper investigates the energy demand of pre-1919 dwellings in a former Welsh mining village, the feasibility of meeting that demand using water from the disused mine workings to supply a district heat network, and potential barriers to the success of the scheme. The use of renewable solar energy generation and storage technologies, both thermal and electrical, to reduce the load and offset increased electricity demand is considered. A holistic surveying approach to provide a more accurate assessment of total household heat demand is proposed. Several surveying techniques, including condition surveys, air permeability, heat loss calculations, and thermography, were employed to provide a clear picture of energy demand. Additional insulation can bring unforeseen consequences which are detrimental to the fabric of the building, potentially leading to accelerated dilapidation of the asset being ‘protected’. Increasing ventilation should be considered in parallel, to compensate for the associated reduction in uncontrolled infiltration. The effectiveness of thermal performance improvements is demonstrated, and the detrimental effects of incorrect material choice and poor installation are highlighted. The findings show estimated heat demand to be in close correlation with household energy bills. Major areas of heat loss were identified, such that improvements to building thermal performance could be targeted. The findings demonstrate that the use of heat pumps in older buildings is viable, provided sufficient improvement to thermal performance is possible. The addition of passive solar thermal and photovoltaic generation can help reduce the load and running cost for the householder. The results were used to predict future heat demand following energy efficiency improvements, thereby informing the size of heat pumps required.
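
Heat loss calculations of the kind mentioned above typically combine fabric losses (U-value × area × temperature difference) with ventilation losses. A minimal sketch follows; all U-values, areas and air-change rates are illustrative assumptions for a pre-1919 solid-wall dwelling, not survey data from the study.

```python
# Sketch of a steady-state whole-house heat-loss estimate: fabric losses
# (U * A * dT) plus ventilation losses. All inputs are illustrative assumptions.
elements = {
    # name: (U-value W/m2K, area m2)
    "solid brick walls": (2.1, 85.0),
    "single-glazed windows": (4.8, 12.0),
    "roof (insulated)": (0.4, 45.0),
    "suspended floor": (1.2, 45.0),
}
volume_m3 = 220.0
air_changes_per_hour = 1.5   # leaky older fabric
delta_T = 21.0 - (-1.0)      # design internal/external temperatures, K

fabric_W = sum(u * a for u, a in elements.values()) * delta_T
ventilation_W = 0.33 * air_changes_per_hour * volume_m3 * delta_T  # 0.33 Wh/m3K for air

print(f"fabric loss:      {fabric_W / 1000:.1f} kW")
print(f"ventilation loss: {ventilation_W / 1000:.1f} kW")
print(f"design heat load: {(fabric_W + ventilation_W) / 1000:.1f} kW")
```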

Keywords: heat demand, heat pump, renewable energy, retrofit

Procedia PDF Downloads 74
86 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example

Authors: Alena Nesterenko, Svetlana Petrikova

Abstract:

Research evaluation is one of the most important elements of self-regulation and development of researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is, firstly, a scientifically sound way to conduct an assessment with maximum effectiveness of work at every step and, secondly, the usage of quantitative methods for evaluation, assessment of expert opinion and collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues which arise with these methods are expert selection, management of the assessment procedure, processing of the results and remuneration of the experts. To address these issues, an on-line system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: - To realize within one platform independent activities of different workgroups (e.g. expert officers, managers). - To establish different workspaces for corresponding workgroups, where custom user databases can be created according to particular needs. - To form for each workgroup the required output documents. - To configure information gathering for each workgroup (forms of assessment, tests, inventories). - To create and operate personal databases of remote users. - To set up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts. The inventory was designed so that experts may submit not only their personal data, place of work and scientific degree but also keywords reflecting their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands, in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two persons per application. Experts are selected according to the keywords; this method proved effective, unlike the OECD classifier. In the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts inviting them to assess the project. An expert supervisor monitors the experts' reports to ensure all formalities are in place (time-frame, propriety, correspondence). If the difference in assessment exceeds four points, a third evaluation is appointed. As the expert finishes work on the expert opinion, the system shows a contract marked ‘new’; the managers process the contract, and the expert receives an e-mail that the contract is formed and ready to be signed. Once all formalities are concluded, the expert receives remuneration for the work. The specifics of the interaction of the examination officer with other experts will be presented in the report.
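
The keyword-based matching and the four-point disagreement rule described above could look like the following sketch. The data structures and names are illustrative, not the system's actual schema.

```python
# Sketch of the workflow described above: rank experts by keyword overlap with
# an application, and appoint a third review when two scores differ by more
# than four points. Structures and thresholds are illustrative only.
def select_experts(application_keywords, experts, n=2):
    """Return the n experts with the largest keyword overlap."""
    ranked = sorted(
        experts,
        key=lambda e: len(set(e["keywords"]) & set(application_keywords)),
        reverse=True,
    )
    return ranked[:n]

def needs_third_review(score_a, score_b, threshold=4):
    return abs(score_a - score_b) > threshold

experts = [
    {"name": "A", "keywords": ["sociology", "survey methods"]},
    {"name": "B", "keywords": ["economics", "survey methods", "statistics"]},
    {"name": "C", "keywords": ["linguistics"]},
]
chosen = select_experts(["survey methods", "statistics"], experts)
print([e["name"] for e in chosen])   # -> ['B', 'A']
print(needs_third_review(9, 3))      # -> True: appoint a third expert
```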

Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation

Procedia PDF Downloads 187
85 Blood Thicker Than Water: A Case Report on Familial Ovarian Cancer

Authors: Joanna Marie A. Paulino-Morente, Vaneza Valentina L. Penolio, Grace Sabado

Abstract:

Ovarian cancer is extremely hard to diagnose in its early stages, and those afflicted are typically asymptomatic at the time of diagnosis and in the late stages of the disease, with metastasis to other organs. Ovarian cancers often occur sporadically, with only 5% associated with hereditary mutations. Mutations in the BRCA1 and BRCA2 tumor suppressor genes have been found to be responsible for the majority of hereditary ovarian cancers. One type of ovarian tumor is Malignant Mixed Mullerian Tumor (MMMT), a very rare and aggressive type accounting for only 1% of all ovarian cancers. Reported is a case of a 43-year-old G3P3 (3003) who came into our institution due to a 2-month history of difficulty of breathing. Family history reveals that her eldest and younger sisters both died of ovarian malignancy, her younger sister having a histopathology report of endometrioid ovarian carcinoma, left ovary, stage IIIb. She still has 2 asymptomatic sisters. Physical examination pointed to pleural effusion of the right lung and the presence of bilateral ovarian new growths, which had a Sassone score of 13. The admitting diagnosis was G3P3 (3003), ovarian new growth, bilateral, malignant; pleural effusion secondary to malignancy. BRCA testing was requested to establish a hereditary mutation; however, the patient had no funds. Once the patient was stabilized, TAHBSO with surgical staging was performed. Intraoperatively, the pelvic cavity was occupied by firm, irregularly shaped ovaries, with a colorectal metastasis. Microscopic sections from both ovaries and the colorectal metastasis showed pleomorphic tumor cells lined by cuboidal to columnar epithelium exhibiting glandular complexity, displaying nuclear atypia and an increased nuclear-cytoplasmic ratio, infiltrating the stroma, consistent with the features of Malignant Mixed Mullerian Tumor, since MMMT is composed histologically of malignant epithelial and sarcomatous elements. In conclusion, discussed are the clinicopathological features of a patient with primary ovarian Malignant Mixed Mullerian Tumor, a rare malignancy comprising only 1% of all ovarian neoplasms. Also, in understanding the hereditary ovarian cancer syndromes and their relation to this patient, it cannot be overemphasized that a comprehensive family history is fundamental for early diagnosis. The familial association of the disease, given that the patient has two sisters who were diagnosed with an advanced stage of ovarian cancer and succumbed to the disease at a much earlier age than what is reported in the general population, points to a possible hereditary syndrome, which occurs in only 5% of ovarian neoplasms. In a low-resource setting, in a third-world country, the following are recommended for monitoring and/or screening women who are at high risk of developing ovarian cancer, such as the remaining sisters of the patient: 1) Physical examination focusing on the breast, abdomen, and rectal area every 6 months. 2) Transvaginal sonography every 6 months. 3) Mammography annually. 4) CA125 for postmenopausal women. 5) Genetic testing for BRCA1 and BRCA2, reserved for those who are financially capable.

Keywords: BRCA, hereditary breast-ovarian cancer syndrome, malignant mixed mullerian tumor, ovarian cancer

Procedia PDF Downloads 264
84 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base

Authors: Joanna Wielochowska, Aneta Wielochowska

Abstract:

Whilst the relationship between scientific areas such as cognitive psychology, neurobiology and philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The following research has been conducted on two subjects, the authors, monozygotic twins (both polysynaesthetes) experiencing involuntary associations of identical nature. The authors made attempts to identify which cognitive and conceptual dependencies may guide this experience. Operating with self-introduced nomenclature, the described phenomenon - multi-dimensional processing of textual and visual information - aims to define a relationship that involuntarily and immediately couples content introduced by means of text or image with a sensation of appearing in a certain place in the mind’s eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), with the simultaneous sensation of being allocated to a given place in the mind’s eye. A place can then be defined as a cognitive representation of a certain concept. During the activity of processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, ‘observing’ a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked with an individual’s semantic knowledge base. A place can be defined as a representation of one’s unique idea of a given concept that has been established in their semantic knowledge base. The multi-level structure of the selectivity of places in the mind’s eye, as a reaction to given information (one stimulus), draws comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorl arrangement characteristic of the components of some flower species were given as an illustrative example. A composition of petals that fan out from one single point and wrap around a stem inspired the idea that, just as in nature, in philosophy of mind there are patterns driven by a logic specific to a given phenomenon. The study intertwines terms perceived through the philosophical lens, such as the definition of meaning, the subjectivity of meaning, the mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence the way the human semantic knowledge base, and the processing of given content in terms of distinguishing between information and meaning, are researched.

Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia

Procedia PDF Downloads 103
83 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also become of great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood burning aerosol, particularly under wintertime urban conditions, when the photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factors between the measured optical absorption and the corresponding mass concentration (the specific absorption cross sections) are determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method of the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064 nm)/AOC(266 nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with a simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
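
Once the two Angström exponents are fixed, a two-component apportionment of this kind reduces, at any pair of wavelengths, to a 2×2 linear system. A minimal sketch follows; the AAE values and absorption readings are illustrative, not the study's measurements.

```python
# Sketch of two-component apportionment: total absorption at two wavelengths is
# modelled as the sum of traffic (ff) and wood-burning (wb) power laws, then the
# 2x2 system is solved. AAE values and readings are illustrative, not measured.
import numpy as np

l1, l2 = 266.0, 1064.0       # wavelengths, nm (the 4λ-PAS extremes)
aae_ff, aae_wb = 1.0, 2.0    # assumed Angstrom exponents for traffic / wood burning

# b(l1) = x_ff*(l1/l2)**-aae_ff + x_wb*(l1/l2)**-aae_wb;  b(l2) = x_ff + x_wb
A = np.array([
    [(l1 / l2) ** -aae_ff, (l1 / l2) ** -aae_wb],
    [1.0,                  1.0                 ],
])
b = np.array([95.0, 12.0])   # measured absorption at l1, l2 (Mm^-1, made up)

x_ff, x_wb = np.linalg.solve(A, b)
print(f"traffic: {x_ff:.1f} Mm^-1, wood burning: {x_wb:.1f} Mm^-1 at {l2:.0f} nm")
```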

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 208
82 Participation of Titanium Influencing the Petrological Assemblage of Mafic Dyke: Salem, South India

Authors: Ayoti Banerjee, Meenakshi Banerjee

Abstract:

The study of metamorphic reaction textures is important in contributing to our understanding of the evolution of metamorphic terranes. Where preserved, they provide information on changes in the P-T conditions during the metamorphic history of the rock and thus allow us to speculate on the P-T-t evolution of the terrane. Mafic dykes have attracted the attention of petrologists because they act as windows to the mantle. The studied rock is a mafic dyke of doleritic composition. It is fine- to medium-grained, with clinopyroxene enclosed by lath-shaped plagioclase grains to form a spectacular ophitic texture. At places, a sub-ophitic texture was also observed. Grains of pyroxene and plagioclase show very little deformation, although plagioclase locally shows deformed lamellae, along with a plagioclase-clinopyroxene-phyric granoblastic fabric within a groundmass of feldspar microphenocrysts and Fe–Ti oxides. Both normal and reverse zoning were noted in the plagioclase laths. The clinopyroxene grains contain exsolved phases such as orthopyroxene, plagioclase, magnetite and ilmenite along the cleavage traces, and the orthopyroxene lamellae form granules on the periphery of the clinopyroxene grains. Garnet coronas also develop preferentially around plagioclase at its contact with clinopyroxene, ilmenite or magnetite. Tiny quartz and K-feldspar grains show symplectic intergrowths with garnet at a few places. The product quartz, formed along with garnet, rims the coronal garnet and the reacting clinopyroxene. Thin amphibole coronas formed along the periphery of deformed plagioclase and clinopyroxene, and occur as patches over the magmatic minerals. The amphibole coronas cannot be assigned to a late magmatic stage and are interpreted as reaction products, being restricted to the contact between clinopyroxene and plagioclase and thus postdating the crystallization of both. Amphibole and garnet do not share grain boundaries anywhere in the rock, pointing towards their simultaneous crystallization. Olivine is absent. A spectacular myrmekitic growth of orthoclase and quartz rimming the plagioclase is consistent with the potash metasomatic effects that are also found in other rocks of this region. These textural features are consistent with a phase of fluid-induced metamorphism (retrogression). The appearance of coronal garnet and amphibole exclusive of each other, however, reflects the participation of Ti as the prime reason: the presence of Ti as a reactant phase is a must for the amphibole-forming reactions, whereas this is not so for the garnet-forming reactions, although the reactants are the same plagioclase and clinopyroxene in both cases. These findings are well validated by petrographic and textural analysis. In order to obtain balanced chemical reactions that explain the formation of amphibole and garnet in the mafic dyke rocks, a matrix operation technique called Singular Value Decomposition (SVD) was adopted, utilizing the measured chemical compositions of the minerals. The computer program C-Space was used for this purpose, with the required compositional matrix assembled from cation proportions calculated from the oxide percentages obtained by EPMA analysis. The garnet-clinopyroxene geothermometer yielded a temperature of 650 degrees Celsius, while the garnet-clinopyroxene-plagioclase geobarometer and Al-in-amphibole yielded a pressure of roughly 7.5 kbar.
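
The SVD step can be sketched as a null-space computation on the compositional matrix: a balanced reaction is a vector of phase coefficients annihilated by the matrix of phase compositions. The example below uses a deliberately simple model assemblage (forsterite + quartz = enstatite), not the dyke's actual phases, with numpy standing in for C-Space:

```python
import numpy as np

# Compositional matrix: rows are components (Mg, Si, O),
# columns are phases (forsterite Mg2SiO4, quartz SiO2, enstatite MgSiO3)
M = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [4.0, 2.0, 3.0]])

U, s, Vt = np.linalg.svd(M)
null_vec = Vt[-1]                           # right singular vector of the ~zero singular value
coeffs = null_vec / np.abs(null_vec).min()  # rescale to small integer coefficients
print(np.round(coeffs, 3))                  # e.g. [ 1.  1. -2.]: Fo + Qz = 2 En
```

Opposite signs in the null-space vector separate reactants from products; with real EPMA-derived cation matrices, several near-zero singular values indicate a family of linearly independent balanced reactions.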

Keywords: corona, dolerite, geothermometer, metasomatism, metamorphic reaction texture, retrogression

Procedia PDF Downloads 243
81 Human Identification and Detection of Suspicious Incidents Based on Outfit Colors: Image Processing Approach in CCTV Videos

Authors: Thilini M. Yatanwala

Abstract:

CCTV (Closed-Circuit Television) surveillance systems have been used in public places for decades, and a large variety of data is produced every moment. However, most CCTV data is stored in isolation, without integration across cameras. As a result, identifying the behavior of suspicious people along with their location has become strenuous. This research was conducted to acquire more accurate, reliable and timely information from CCTV video records. The implemented system can identify human objects in public places based on outfit colors. Inter-process communication technologies were used to implement the CCTV camera network that tracks people on the premises. The research was conducted in three stages: in the first stage, human objects were filtered from the other movable objects present in public places; in the second stage, people were uniquely identified based on their outfit colors; and in the third stage, an individual was continuously tracked across the CCTV network. A face detection algorithm was implemented using a cascade classifier based on a training model to detect human objects. A Haar-feature-based two-dimensional convolution operator was introduced to identify features of the human face, such as the eye regions, the nose region and the bridge of the nose, based on the darkness and lightness of facial areas. In the second stage, the outfit colors of human objects were analyzed by dividing the body area into upper left, upper right, lower left and lower right quadrants. The mean color, mode color and standard deviation of each area were extracted as crucial factors to uniquely identify a human object using a histogram-based approach. The color-based measurements were written into XML files, and separate directories were maintained to store the XML files related to each camera, organized by time stamp. In the third stage of the approach, inter-process communication techniques were used to implement an acknowledgement-based CCTV camera network to continuously track individuals across a network of cameras. Real-time analysis of the XML files generated by each camera can determine the path of an individual and thus reconstruct the full activity sequence. Higher efficiency was achieved by sending and receiving acknowledgements only among adjacent cameras. Suspicious incidents, such as a person staying in a sensitive area for a long period or disappearing from camera coverage, can be detected by this approach. The system was tested on 150 people with an accuracy of 82%. However, the approach was unable to produce the expected results in the presence of groups of people wearing similar outfits. The approach can be applied to any existing camera network without changing the physical arrangement of the CCTV cameras. Human identification and suspicious-incident detection using outfit color analysis can achieve a higher level of accuracy, and the project will be continued by integrating motion and gait feature analysis techniques to derive more information from CCTV videos.
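
A minimal sketch of the first two stages, assuming OpenCV's stock frontal-face Haar cascade and a hypothetical torso geometry below the face box; the mode color and the XML persistence used in the paper are omitted:

```python
import cv2
import numpy as np

def outfit_descriptor(frame, face, torso_height=3.0):
    """Given a detected face box (x, y, w, h), sample an assumed torso region
    below it and return per-quadrant mean colour and standard deviation (BGR)."""
    x, y, w, h = face
    top = y + h
    bottom = min(frame.shape[0], top + int(torso_height * h))
    torso = frame[top:bottom, max(0, x - w // 2): x + w + w // 2]
    H, W = torso.shape[:2]
    quadrants = {
        "upper_left":  torso[: H // 2, : W // 2],
        "upper_right": torso[: H // 2, W // 2:],
        "lower_left":  torso[H // 2:, : W // 2],
        "lower_right": torso[H // 2:, W // 2:],
    }
    return {name: (q.reshape(-1, 3).mean(axis=0), q.reshape(-1, 3).std(axis=0))
            for name, q in quadrants.items()}

# Stock Haar cascade shipped with opencv-python
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("frame.jpg")                    # hypothetical CCTV frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for face in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    print(outfit_descriptor(frame, face))
```

Matching the resulting per-quadrant statistics between cameras, with acknowledgements passed only between adjacent cameras, gives the tracking behaviour described above.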

Keywords: CCTV surveillance, human detection and identification, image processing, inter-process communication, security, suspicious detection

Procedia PDF Downloads 153
80 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study

Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier

Abstract:

An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor subjected to day-night-cycle heat exchanges in the southwestern part of France, and to identify the heat transfer phenomena that take place in both processes: charge and discharge. The main features of this PEH relevant to this study are the following: (i) a non-active slab covering the major part of the entire floor surface of the house, which includes a 68 mm thick concrete layer as its upper layer; (ii) solar window shades located on the north and south facades, along with a large eave facing south; (iii) large double-glazed windows covering the majority of the south facade; (iv) a natural ventilation system (NVS) composed of ten automated openings of different dimensions: four located on the south facade, four on the north facade and two on the (north-oriented) shed roof. To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten 'measurement poles' (MP) were distributed over the concrete-floor surface. Each MP represented a measurement zone where air and surface temperatures and convection and radiation heat fluxes were measured. The airspeed was measured only at two points over the slab surface, near the south facade. To identify the heat transfer phenomena that take part in the charge and discharge processes, relevant dimensionless parameters were used, along with statistical analysis; the heat transfer phenomena were identified on the basis of this analysis. The processed experimental data showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values). During the charge period, radiation heat exchanges on the floor surface were significantly higher than convection. Conversely, convection heat exchanges were significantly higher than radiation during the discharge period. Spatially, both convection and radiation heat exchanges are higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations were determined using a linear regression model, relating the Nusselt number to the relevant parameters: the Peclet, Rayleigh and Richardson numbers. This led to the determination of the convective heat transfer coefficient and its comparison with the convective heat transfer coefficient resulting from the measurements. The results showed that forced and natural convection coexist during the discharge period; more accurate correlations were found with the Peclet number than with the Rayleigh number. This may suggest that forced convection is stronger than natural convection. Yet the airspeed levels encountered suggest that natural convection, rather than forced convection, should take place, although the Richardson number values encountered indicate otherwise. During the charge period, the air-velocity levels might indicate that no air motion occurs, which might lead to heat transfer by diffusion instead of convection.
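
A minimal sketch of the dimensionless-number bookkeeping and the log-space linear regression used for such Nusselt correlations; the air properties are textbook values and the Nu-Pe data points are synthetic, not the study's measurements:

```python
import numpy as np

def dimensionless_numbers(u, L, dT, T_film, nu=1.6e-5, alpha=2.2e-5, g=9.81):
    """Peclet, Rayleigh and Richardson numbers for air near a slab.
    nu, alpha: kinematic viscosity and thermal diffusivity of air (m^2/s)."""
    beta = 1.0 / T_film                      # ideal-gas thermal expansion (1/K)
    Pe = u * L / alpha
    Ra = g * beta * dT * L**3 / (nu * alpha)
    Gr = g * beta * dT * L**3 / nu**2
    Re = u * L / nu
    Ri = Gr / Re**2                          # natural vs forced convection
    return Pe, Ra, Ri

Pe0, Ra0, Ri0 = dimensionless_numbers(u=0.15, L=0.5, dT=4.0, T_film=298.0)
print(f"Pe = {Pe0:.0f}, Ra = {Ra0:.2e}, Ri = {Ri0:.2f}")

# Fit Nu = a * Pe**b by linear regression in log space (synthetic data)
Pe = np.array([2.0e3, 5.0e3, 1.1e4, 3.0e4])
Nu = np.array([18.0, 30.0, 46.0, 80.0])
b, log_a = np.polyfit(np.log(Pe), np.log(Nu), 1)
print(f"Nu ≈ {np.exp(log_a):.3f} * Pe^{b:.3f}")
```

A Richardson number near unity, as in the illustrative call above, is the mixed-convection regime in which the forced and natural contributions discussed in the abstract are hard to separate.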

Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house

Procedia PDF Downloads 393
79 Somatic Delusional Disorder Subsequent to Phantogeusia: A Case Report

Authors: Pedro Felgueiras, Ana Miguel, Nélson Almeida, Raquel Silva

Abstract:

Objective: Through the study of a clinical case of somatic delusional disorder secondary to phantogeusia, we aim to highlight the importance of considering psychosomatic conditions in the differential diagnosis, as well as to emphasize the complexity of their comprehension and treatment and their respective impact on patients' functioning. Methods: Bearing this in mind, we conducted a critical analysis of a case series based on patient observations, clinical data and complementary diagnostic methods, as well as a non-systematic review of the literature on the subject. Results: A 61-year-old female patient with no history of psychiatric conditions presented with a family psychiatric history of mood disorder (depression) with psychotic features in her mother, a medical history of many comorbidities affecting different organ systems (endocrine, gastrointestinal, genitourinary, ophthalmological), and documented neuroticism personality traits. The patient's family described a persistent concern about several physical symptoms across her life, with a continuous effort to obtain explanations for any sensation outside her normal perception. After undergoing endoscopy in 2018, she began complaining of persistent phantogeusia (an acid taste) and developed excessive thoughts, feelings and behaviors associated with this somatic symptom. The patient was evaluated by several medical specialties, and an extensive panel of medical exams was carried out, excluding any disease. Despite all the investigation, and with no evidence of disease signs, the acute anxiety, time and energy devoted to this symptom culminated in severe psychosocial impairment. The patient was admitted to a psychiatric ward for investigation and treatment of this clinical picture, leading to the diagnosis of somatic delusional disorder. In order to exclude an acute organic etiology of this psychotic disorder, an analytic panel was carried out, with no abnormal results. In the context of the psychotic clinical picture, a CT scan was performed, which revealed a right cortical vascular lesion. A neuropsychological evaluation was made, describing cognitive functioning as globally normal. During treatment with an antipsychotic (pimozide), complete remission of the somatic delusion was associated with the disappearance of the gustatory perception disturbance. In follow-up, a relapse of the gustatory sensation was documented, and her thoughts and speech were dominated by concerns about multiple somatic symptoms. Conclusion: In terms of abnormal bodily sensations, the oral cavity is one of the frequent sites of delusional disorder. Patients with these gustatory perception distortions complain about unusual sensations without corresponding abnormal findings in the oral area. The pathophysiology has not yet been fully elucidated. In terms of its comprehensive psychopathology, this case was hypothesized as a paranoid development of a somatic delusional disorder triggered by post-procedure phantogeusia (described as a possible side effect of endoscopy) in a patient with an anankastic personality. This case presents interesting psychopathology, reinforcing the complexity of psychosomatic disorders in terms of their etiopathogenesis, clinical treatment and long-term prognosis.

Keywords: psychosomatics, delusional somatic disorder, phantogeusia, paranoid development

Procedia PDF Downloads 98
78 Development of One-Pot Sequential Cyclizations and Photocatalyzed Decarboxylative Radical Cyclization: Application Towards Aspidospermatan Alkaloids

Authors: Guillaume Bélanger, Jean-Philippe Fontaine, Clémence Hauduc

Abstract:

There is an undeniable thirst, from organic chemists and from the pharmaceutical industry, to access complex alkaloids through short syntheses. While medicinal chemists are interested in the fascinatingly wide range of biological properties of alkaloids, synthetic chemists are rather interested in finding new routes to access these challenging natural products, often of low availability from nature. To synthesize the complex polycyclic cores of natural products, reaction cascades or sequences performed in one pot offer a neat advantage over classical methods through their rapid increase in molecular complexity in a single operation. In counterpart, reaction cascades need to be run on substrates bearing all the functional groups required for the key cyclizations. Chemoselectivity is thus a major issue associated with such a strategy, in addition to diastereocontrol and regiocontrol of the overall transformation. In the pursuit of synthetic efficiency, our research group developed an innovative one-pot transformation of linear substrates into bi- and tricyclic adducts, applied to the construction of Aspidospermatan-type alkaloids. The latter is a rich class of indole alkaloids bearing a unique bridged azatricyclic core. Despite many efforts toward the synthesis of members of this family, efficient and versatile synthetic routes are still coveted. Indeed, very short, non-racemic approaches are rather scarce: for example, in the cases of aspidospermidine and aspidospermine, all syntheses are fifteen steps and over. We envisaged a unified approach to access several members of the Aspidospermatan alkaloid family. The key sequence features a highly chemoselective formamide activation that triggers a Vilsmeier-Haack cyclization, followed by azomethine ylide generation and intramolecular cycloaddition. Despite the high density and variety of functional groups on the substrates (electron-rich and electron-poor alkenes, nitrile, amide, ester, enol ether), the sequence generated three new carbon-carbon bonds and three rings in a single operation, with good yield and high chemoselectivity. A detailed study of the amide, nucleophile and dipolarophile variations that finally led to the successful combination required for the key transformation will be presented. To complete the indoline fragment of the natural products, we developed an original approach. Indeed, all reported routes to Aspidospermatan alkaloids introduce the indoline or indole early in the synthesis. In our work, the indoline needs to be installed on the azatricyclic core after the key cyclization sequence. As a result, a typical Fischer indolization is not suited, since this reaction is known to fail on such substrates. We thus envisaged a unique photocatalyzed decarboxylative radical cyclization. The development of this reaction, as well as the scope and limitations of the methodology, will also be presented. The original Vilsmeier-Haack and azomethine ylide cyclization sequence, as well as the new photocatalyzed decarboxylative radical cyclization, will undoubtedly open access to new routes toward polycyclic indole alkaloids and derivatives of pharmaceutical interest in general.

Keywords: Aspidospermatan alkaloids, azomethine ylide cycloaddition, decarboxylative radical cyclization, indole and indoline synthesis, one-pot sequential cyclizations, photocatalysis, Vilsmeier-Haack Cyclization

Procedia PDF Downloads 57
77 Pre-conditioning and Hot Water Sanitization of Reverse Osmosis Membrane for Medical Water Production

Authors: Supriyo Das, Elbir Jove, Ajay Singh, Sophie Corbet, Noel Carr, Martin Deetz

Abstract:

Water is a critical commodity in the healthcare and medical fields. The uses of medical-grade water span from washing surgical equipment and drug preparation to being the key element of life-saving therapies such as hydrotherapy and hemodialysis. Properly treated medical water reduces the bioburden load and mitigates the risk of infection, ensuring patient safety. However, any compromised condition during the production of medical-grade water can create a favorable environment for microbial growth, putting patient safety at high risk. Therefore, proper upstream treatment of medical water is essential before its application in the healthcare, pharma and medical space. Reverse osmosis (RO) is one of the most preferred treatments within the healthcare industries and is recommended by all international pharmacopeias to achieve the quality level demanded by global regulatory bodies. The RO process can remove up to 99.5% of constituents from feed water sources, eliminating bacteria, proteins and particles of 100 Dalton and above. The combination of RO with other downstream water treatment technologies, such as electrodeionization and ultrafiltration, meets the quality requirements of various pharmacopeia monographs for producing highly purified water or water for injection for medical use. In the reverse osmosis process, water from a liquid with a high concentration of dissolved solids is forced to flow through a specially engineered semi-permeable membrane to the low-concentration side, resulting in high-quality water. However, these specially engineered RO membranes need to be sanitized, either chemically or at high temperature, at regular intervals to keep the bioburden at the minimum required level. In this paper, we discuss DuPont's FilmTec heat-sanitizable reverse osmosis (HSRO) membrane for the production of medical-grade water. An HSRO element must be pre-conditioned prior to initial use by exposure to hot water (80°C-85°C) to stabilize its performance and to meet the manufacturer's specifications. Without pre-conditioning, the membrane will show variations in feed pressure operation and salt rejection. The paper will discuss the critical variables of the pre-conditioning step that can affect the overall performance of the HSRO membrane and demonstrate data supporting the need for pre-conditioning of HSRO elements. Our preliminary data suggest that there can be up to a 35% reduction in flow due to the initial heat treatment, which also positively affects the increase in salt rejection. The paper will go into detail about the fundamental understanding of the performance change of HSRO after the pre-conditioning step and its effect on the quality of the medical water produced. The paper will also discuss another critical point, the regular hot water sanitization of these HSRO membranes. Regular hot water sanitization (at 80°C-85°C) is necessary to keep the membrane free of bioburden; however, it can negatively impact the performance of the membrane over time. We will present several data points on hot water sanitization using FilmTec HSRO elements and challenge their robustness for producing quality medical water. The last part of the paper will discuss the construction details of the FilmTec HSRO membrane and the features that make it suitable for pre-conditioning and sanitization at high temperatures.
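
Two of the quantities tracked in such studies, observed salt rejection and the normalized change in permeate flow, reduce to simple ratios. A minimal sketch with illustrative numbers (not DuPont measurement data):

```python
def salt_rejection(feed_conductivity, permeate_conductivity):
    """Observed salt rejection (%) from feed and permeate conductivities."""
    return 100.0 * (1.0 - permeate_conductivity / feed_conductivity)

def normalized_flow_change(flow_after, flow_before):
    """Fractional change in permeate flow, e.g. after heat pre-conditioning."""
    return (flow_after - flow_before) / flow_before

# Illustrative numbers: a 35% flux drop accompanying improved rejection
print(f"rejection: {salt_rejection(500.0, 4.0):.2f} %")          # 99.20 %
print(f"flow change: {normalized_flow_change(6.5, 10.0):+.0%}")  # -35%
```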

Keywords: heat sanitizable reverse osmosis, HSRO, medical water, hemodialysis water, water for Injection, pre-conditioning, heat sanitization

Procedia PDF Downloads 183
76 Optical-Based Lane-Assist System for Rowing Boats

Authors: Stephen Tullis, M. David DiDonato, Hong Sung Park

Abstract:

Rowing boats (shells) are often steered by a small rudder operated by one of the backward-facing rowers; the attention required for steering slightly decreases the power that that athlete can provide. Reducing the steering distraction would therefore increase the overall boat speed. Races are straight 2000 m courses, with each boat in a 13.5 m wide lane marked by small (~15 cm), widely spaced (~10 m) buoys, and the boat trajectory is affected by both cross-currents and winds. An optical buoy recognition and tracking system has been developed that provides the boat's location and orientation with respect to the lane edges. This information is provided to the steering athlete either as a simple overlay on a video display or fed to a simplified autopilot system that gives steering directions to the athlete or directly controls the rudder. The system is then effectively a 'lane-assist' device, but with small, widely spaced lane markers viewed from a very shallow angle due to constraints on camera height. The image is captured with a lightweight 1080p webcam, and most of the image analysis is done in OpenCV. The colour RGB image is converted to grayscale using the difference of the red and blue channels, which provides good contrast between the red/yellow buoys and the water, sky and land background, as well as the white reflections and noise. Buoy detection is done by thresholding within a tight mask applied to the image. Robust linear regression on the previously detected buoy locations, using Tukey's biweight estimator, is used to develop the mask; this avoids the false detection of noise such as waves (reflections) and, in particular, buoys in other lanes. The robust regression also provides the current lane edges in the camera frame, which are used to calculate the displacement of the boat from the lane centre (lane location) and its yaw angle. The intersection of the detected lane edges provides a lane vanishing point, and the yaw angle can be calculated simply from the displacement of this vanishing point from the camera axis and the image plane distance. Lane location is based simply on the lateral displacement of the vanishing point from any horizontal cut through the lane edges. The boat lane position and yaw are currently fed to what is essentially a stripped-down marine autopilot system. Currently, only the lane location is used in a PID controller of a rudder actuator, with integrator anti-windup to deal with saturation of the rudder angle. Low Kp and Kd values decrease unnecessarily fast returns to the lane centreline and the response to noise, and limiters can be used to avoid lane departure and disqualification. Yaw is not used as a control input, as cross-winds and currents can cause a straight course with considerable yaw or crab angle. Mapping of the controller to the 'overall effectiveness' of the rudder angle has not been finalized: very large rudder angles stall and have decreased turning moments, while at less extreme angles the increased rudder drag slows the boat and upsets boat balance. The full system has many features similar to automotive lane-assist systems, but with the added constraints of the lane markers, camera positioning, control response and noise increasing the challenge.
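
A minimal sketch of the image pipeline described above (red-blue grayscale, thresholding, connected-component buoy centroids, Tukey-biweight robust regression of each lane edge, and intersection to get the vanishing point). The masking logic and the assignment of buoys to the left and right edges are omitted, and statsmodels' robust fit stands in for the authors' implementation:

```python
import cv2
import numpy as np
import statsmodels.api as sm

def detect_buoys(frame_bgr, thresh=40):
    """Grayscale from the R-B difference, then threshold and return
    the centroids of the connected components (candidate buoys)."""
    f = frame_bgr.astype(np.int16)
    gray = np.clip(f[:, :, 2] - f[:, :, 0], 0, 255).astype(np.uint8)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return centroids[1:]                    # drop the background component

def fit_lane_edge(points):
    """Robust regression (Tukey biweight) of buoy image coordinates, x = m*y + c."""
    y, x = points[:, 1], points[:, 0]
    model = sm.RLM(x, sm.add_constant(y), M=sm.robust.norms.TukeyBiweight())
    c, m = model.fit().params
    return m, c

def vanishing_point(left_edge, right_edge):
    """Intersection of the two lane edges x = m*y + c in the image plane."""
    (ml, cl), (mr, cr) = left_edge, right_edge
    y = (cr - cl) / (ml - mr)
    return ml * y + cl, y
```

The lateral offset of the vanishing point from the camera axis then gives yaw, and its offset within any horizontal cut gives the lane-location error fed to the PID controller.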

Keywords: auto-pilot, lane-assist, marine, optical, rowing

Procedia PDF Downloads 105
75 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study evaluating the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, the test parameters indicated 'good' model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to the raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale scores on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences in post-test scores on Form Q and Form R by a full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%; mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned the test means and standard deviations, but the resulting skewness and kurtosis worsened compared to the raw score parameters; form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using the linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to the pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
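
As an illustration of the raw-score methods, the sketch below implements linear and equipercentile equating in the forms described above. The score distributions are synthetic stand-ins, not the study's Form Q/Form R data, and the IRT mean-sigma step (which operates on item parameters from anchor items) is not reproduced here:

```python
import numpy as np

def linear_equate(scores_new, scores_ref):
    """Linear equating: map new-form raw scores onto the reference-form
    scale by matching means and standard deviations (a mean-slope adjustment)."""
    mu_n, sd_n = scores_new.mean(), scores_new.std(ddof=1)
    mu_r, sd_r = scores_ref.mean(), scores_ref.std(ddof=1)
    return (sd_r / sd_n) * (scores_new - mu_n) + mu_r

def equipercentile_equate(scores_new, scores_ref):
    """Equipercentile equating: map each new-form score to the reference-form
    score with the same percentile rank."""
    ranks = np.argsort(np.argsort(scores_new)) / (len(scores_new) - 1)
    return np.quantile(scores_ref, ranks)

rng = np.random.default_rng(0)
form_r = rng.normal(60, 10, 134)      # new form (synthetic, n matches the study)
form_q = rng.normal(65, 12, 134)      # reference form (synthetic)
print(linear_equate(form_r, form_q)[:5].round(1))
print(equipercentile_equate(form_r, form_q)[:5].round(1))
```

As the abstract notes, the equipercentile map forces the distributions to match and so can distort the shape (skewness, kurtosis), while the linear map preserves the shape of the new-form distribution.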

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 172
74 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field that has produced many implementations for both research and practical purposes in different domains. Despite the popularity of implementations using non-invasive neuroimaging methods, a radical improvement of the channel bandwidth and, thus, the decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, the effective analysis of which requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than to proprioception. A sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding of motion with high accuracy. Such a setup provides the means for control of devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
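
A minimal sketch of this kind of decoder, assuming an illustrative montage of 32 ECoG channels sampled at 1 kHz (values the abstract does not specify); the architecture is a generic 1-D CNN regressor trained on 1-second segments, not the authors' exact network:

```python
import torch
import torch.nn as nn

class ECoGDecoder(nn.Module):
    """A compact 1-D CNN mapping a 1 s multichannel ECoG segment to a
    single kinematic value (e.g. finger acceleration at the segment end)."""
    def __init__(self, n_channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, x):                   # x: (batch, channels, samples)
        return self.net(x).squeeze(-1)

model = ECoGDecoder()
x = torch.randn(8, 32, 1000)                # a batch of 1 s segments at 1 kHz
y = torch.randn(8)                          # accelerometer targets
loss = nn.MSELoss()(model(x), y)
loss.backward()

# Decoding accuracy as Pearson r between prediction and accelerometer signal
with torch.no_grad():
    pred = model(x)
    r = torch.corrcoef(torch.stack([pred, y]))[0, 1]
print(float(r))
```

Causal versus non-causal decoding corresponds simply to whether the ECoG segment ends at, or is centered on, the accelerometer sample being predicted.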

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 139
73 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level

Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni

Abstract:

In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability of normal patellar tracking and excessive pressure syndrome. One well-observed explanation for the instability of normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can be of great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems for the management of knee joint disorders. This damage mechanism, specifically under excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear framework from the bottom (micro level) to the top (macro level) of the soft tissue hierarchy may elucidate the essential underpinnings of the state of ligament damage. To this end, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and patellar tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features to tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure, implemented in a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee), and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R2) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The 'best' prediction of the yield strength and strain, as compared with the reported experimental data, was obtained when the cross-link density between the tropocollagen molecules of the fibril equaled 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while the damage to the patellar tendon occurred in the middle of the structure. These predictions show a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful for realistically simulating a damage process or repair attempt than certain published studies.
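
To illustrate the structure of the model (not its calibrated form), the sketch below combines a hypothetical saturating yield-strength function of cross-link density with the rule of mixtures; the functional form and all parameter values are assumptions for illustration only, since the actual density function was obtained from the molecular dynamics coarse-graining described above:

```python
import numpy as np

def fibril_yield_strength(beta, s0=0.1, s_max=1.0, beta_ref=15.0):
    """Hypothetical saturating dependence of fibril yield strength (GPa) on
    tropocollagen cross-link density beta (links per molecule); illustrative
    stand-in for the MD-derived density function."""
    return s0 + (s_max - s0) * (1.0 - np.exp(-beta / beta_ref))

def rule_of_mixtures(stress_fibril, stress_matrix, phi_fibril):
    """Homogenized tissue stress from fibril and matrix contributions."""
    return phi_fibril * stress_fibril + (1.0 - phi_fibril) * stress_matrix

for beta in (5.5, 12.0):   # the densities identified for ligaments and tendon
    sf = fibril_yield_strength(beta)
    print(f"beta = {beta:>4}: fibril yield ≈ {sf:.2f} GPa, "
          f"tissue stress ≈ {rule_of_mixtures(sf, 0.01, 0.6):.2f} GPa")
```

The monotonic dependence of yield strength on cross-link density is the mechanism behind the ligament-versus-tendon stiffness correlation reported above.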

Keywords: tropocollagen, multiscale model, fibrils, knee ligaments

Procedia PDF Downloads 104
72 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies

Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König

Abstract:

Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebook) offer rich possibilities to improve, validate, and speed up research. However, to date there is no cross-platform integration of these subsystems. Furthermore, the implementation of online studies still suffers from complexity (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to program a single line of code. In particular, our framework offers the possibility to manipulate and combine the experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimulus properties, or store participants' responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye and face tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, the built-in Google Translate functionality ensures automatic text translation of the experimental content. Thereby, thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis. Alternatively, the data can be analyzed online within our framework using the integrated IPython notebook. The framework was designed so that studies can be used interchangeably between researchers. This not only supports the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses, improving the validity of the paradigms. In particular, sharing and integrating experimental designs and analyses will lead to increased consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using the framework. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender, and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and would not have been possible to show without the massive amount of data collected via our framework. In fact, these findings shed new light on cultural differences in spatial navigation. We therefore conclude that our new framework offers a wide range of advantages for online research and constitutes a methodological innovation by which new insights can be revealed on the basis of massive data collection.

Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition

Procedia PDF Downloads 233
71 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology

Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey

Abstract:

In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes essential the existence of software capable of quickly processing and reliably visualizing diffusion data, equipped with tools for analyzing it in terms of different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The domain logic has been moved to separate class libraries and can be used on various platforms. The user interface uses Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize and set the properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI), allowing images to be loaded and viewed sorted by series. An algorithm for 'masking' dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues that are not related to the area of interest from the analysis. A distortion correction algorithm using deformable image registration based on the autocorrelation of local structure has also been developed; the maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary brain volume, the diffusion tensor is geometrically interpreted using an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in a single segmentation algorithm for white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating the mean diffusivity and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and subsequently assess the response (median Dice score = 0.963 ± 0.137). The white matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one using the Hough transform. The latter tests candidate curves in each voxel, assigning to each a score computed from the diffusion data, and then selects the curves with the highest scores as the potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the irradiated volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. The «MRDiffusionImaging» software will improve the efficiency and accuracy of diagnostics and of the stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing and analysis and for inclusion in the process of radiotherapy planning and the evaluation of its results.
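
The mean diffusivity and fractional anisotropy maps mentioned above are built voxel-wise from the eigenvalues of the diffusion tensor. A minimal sketch of the underlying formulas (MD as the eigenvalue mean, FA as the normalized eigenvalue dispersion), shown here in Python for brevity although the software itself is written in C++:

```python
import numpy as np

def md_fa(tensor):
    """Mean diffusivity and fractional anisotropy from a 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(tensor)   # eigenvalues of the symmetric tensor
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Prolate tensor typical of a coherent white-matter voxel (units: 1e-3 mm^2/s)
D = np.diag([1.6, 0.35, 0.35])
md, fa = md_fa(D)
print(f"MD = {md:.3f}e-3 mm^2/s, FA = {fa:.3f}")   # FA near 0.75
```

The same eigendecomposition supplies the principal diffusion direction followed voxel-to-voxel by the deterministic (fiber assignment by continuous tracking) algorithm.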

Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography

Procedia PDF Downloads 52
70 Nanoscale Photo-Orientation of Azo-Dyes in Glassy Environments Using Polarized Optical Near-Field

Authors: S. S. Kharintsev, E. A. Chernykh, S. K. Saikin, A. I. Fishman, S. G. Kazarian

Abstract:

Recent advances in improving information storage performance are inseparably linked with the circumvention of fundamental constraints such as the superparamagnetic limit in heat-assisted magnetic recording, charge loss tolerance in solid-state memory and the Abbe diffraction limit in optical storage. A substantial breakthrough in the development of nonvolatile storage devices with dimensional scaling has been achieved due to phase-change chalcogenide memory, which nowadays meets market needs to the greatest advantage. Further progress is aimed at the development of versatile nonvolatile high-speed memory combining the potentials of random access memory and archival storage. The well-established properties of light at the nanoscale empower us to use them for recording optical information with ultrahigh density, scaled down to a single molecule, which is then the size of a pit. Indeed, diffraction-limited optics is able to record as much information as ~1 Gb/in². Nonlinear optical effects, for example two-photon fluorescence recording, allow one to decrease the extent of the pit even further, resulting in recording densities of up to ~100 Gb/in². Going beyond the diffraction limit, due to the sub-wavelength confinement of light, pushes the pit size down to a single chromophore, which is, on average, ~1 nm in length. Thus, the memory capacity can be increased up to the theoretical limit of 1 Pb/in². Moreover, the field confinement provides faster recording and readout operations due to the enhanced light-matter interaction. This, in turn, leads to the miniaturization of optical devices and a decrease of the energy supply down to ~1 μW/cm². Intrinsic features of light, such as its multiple modes, mixed polarization and angular momentum, in addition to the underlying optical and holographic tools for writing/reading, enrich the storage and encryption of optical information. In particular, the finite extent of the near-field penetration, falling into the range of 50-100 nm, makes it possible to perform 3D volume (layer-to-layer) recording/readout of optical information. In this study, we demonstrate comprehensive evidence of the isotropic-to-homeotropic phase transition of an azobenzene-functionalized polymer thin film exposed to light and a dc electric field, using near-field optical microscopy and scanning capacitance microscopy. We unravel the near-field Raman dichroism of sub-10 nm thick epoxy-based side-chain azo-polymer films with polarization-controlled tip-enhanced Raman scattering. In our study, the orientation of the azo-chromophores is controlled with a biased gold tip rather than with light polarization. Isotropic in-plane and homeotropic out-of-plane arrangements of the azo-chromophores in a glassy environment can be distinguished with transverse and longitudinal optical near-fields. We demonstrate that both phases are unambiguously visualized by 2D mapping of their local dielectric properties with scanning capacitance microscopy. The stability of the polar homeotropic phase is strongly sensitive to the thickness of the thin film. We analyze the α-transition of the azo-polymer by detecting a temperature-dependent phase jump of an AFM cantilever when passing through the glass temperature. Overall, we anticipate further improvements in optical storage performance approaching the single-molecule level.

Keywords: optical memory, azo-dye, near-field, tip-enhanced Raman scattering

Procedia PDF Downloads 160
69 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence

Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy

Abstract:

The Reynolds-averaged Navier-Stokes (RANS) model is the popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry and in the research community as well. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is unable to capture all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for some cases, like swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad relevance to aerodynamic research and industrial applications. The PANS equations, being derived from base RANS, inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance PANS' capabilities for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The inabilities of LEVMs have inspired the development of non-linear eddy-viscosity models (NLEVM). To explore the potential improvement in PANS performance, in our study we evaluate the PANS behavior in conjunction with an NLEVM. Our work can be categorized into three significant steps: (i) extraction of the PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in the canonical case of a separated non-homogeneous flow field (flow past prismatic bodies and bodies of revolution at high Reynolds numbers). The PANS version of the NLEVM shall be derived and implemented in OpenFOAM, an open-source solver. The homogeneous flow evaluation will comprise a study of the influence of the PANS filter-width control parameter on the turbulent stresses, a homogeneous analysis performed over typical velocity fields, and an asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include the study of mean integrated quantities and various instantaneous flow field features, including wake structures. The performance of PANS + NLEVM shall be compared against LEVM-based PANS and LEVM-based RANS. This assessment will contribute to a significant improvement of the predictive ability of computational fluid dynamics (CFD) tools in massively separated turbulent flows past bluff bodies.
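
For reference, the standard LEVM-based PANS closure modifies the destruction coefficient of the ε-equation through the filter-width control parameters f_k = k_u/k and f_ε = ε_u/ε (the ratios of unresolved to total kinetic energy and dissipation). The sketch below evaluates this textbook relation; it is a sketch of the baseline closure, not of the NLEVM-based PANS derivation that is the paper's contribution:

```python
def pans_ce2_star(f_k, f_eps=1.0, c_e1=1.44, c_e2=1.92):
    """PANS modified destruction coefficient for the k-epsilon closure:
    C*_e2 = C_e1 + (f_k / f_eps) * (C_e2 - C_e1).
    f_k -> 1 recovers RANS; smaller f_k resolves more turbulent scales."""
    return c_e1 + (f_k / f_eps) * (c_e2 - c_e1)

for f_k in (1.0, 0.6, 0.4):
    print(f"f_k = {f_k}: C*_e2 = {pans_ce2_star(f_k):.3f}")
# f_k = 1.0 gives the RANS value 1.92; reducing f_k lowers C*_e2,
# weakening the modeled dissipation destruction and liberating resolved scales.
```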

Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows

Procedia PDF Downloads 118