Search results for: message aggregation
80 Organic Permeation Properties of Hydrophobic Silica Membranes with Different Functional Groups
Authors: Sadao Araki, Daisuke Gondo, Satoshi Imasaka, Hideki Yamamoto
Abstract:
The separation of organic compounds from aqueous solutions is a key technology for recycling valuable organic compounds and for the treatment of wastewater. The wastewater from chemical plants often contains organic compounds such as ethyl acetate (EA), methyl ethyl ketone (MEK), and isopropyl alcohol (IPA). In this study, we prepared hydrophobic silica membranes by a sol-gel method. We used phenyltrimethoxysilane (PhTMS), ethyltrimethoxysilane (ETMS), propyltrimethoxysilane (PrTMS), n-butyltrimethoxysilane (BTMS), and n-hexyltrimethoxysilane (HTMS) as silica sources to introduce each functional group onto the membrane surface. Cetyltrimethylammonium bromide (CTAB) was used as a molecular template to create suitable pores that enable the permeation of organic compounds. The membranes with these five different functional groups were characterized by SEM, FT-IR, and permporometry. The thickness and pore diameter of the silica layer were about 1.0 μm and about 1 nm, respectively, for all membranes. In other words, the functional groups had an insignificant effect on membrane thickness and on pore formation by CTAB. We confirmed the effect of the functional groups on the flux and separation factor for ethyl acetate (EA), methyl ethyl ketone, acetone, and 1-butanol (1-BtOH)/water mixtures. All membranes showed a high flux for ethyl acetate compared with the other compounds. In particular, the hydrophobic silica membrane prepared using BTMS showed an EA flux of 0.75 kg m-2 h-1. For all membranes, the organic fluxes decreased in the order EA > MEK > acetone > 1-BtOH. On the other hand, the carbon chain length of the functional groups (ETMS, PrTMS, BTMS, and HTMS) did not have a major effect on the organic flux. Although we examined the relationship between organic fluxes and the molecular diameters or fugacities of the organic compounds, these factors correlated only weakly with the organic fluxes.
These factors are considered to affect diffusivity. Generally, permeation through membranes is governed by diffusivity and solubility. Therefore, it is deemed that the organic fluxes through these hydrophobic membranes are strongly influenced by solubility. We attempted to estimate the organic fluxes using the Hansen solubility parameter (HSP). The HSP, which is based on the cohesion energy per molar volume and is composed of dispersion forces (δd), intermolecular dipole interactions (δp), and hydrogen-bonding interactions (δh), has recently attracted attention as a means of evaluating dissolution and aggregation behavior. The solubility of one substance in another can be represented by the Ra [(MPa)^1/2] value, the distance between the HSPs of the two substances. A smaller Ra value means higher mutual solubility; conversely, substances with a large Ra value can be expected to show low solubility. We established an Ra-based correlation equation for the organic flux at low concentrations of organic compounds and at 295-325 K.
Keywords: hydrophobic, membrane, Hansen solubility parameter, functional group
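The Ra distance used above has a standard closed form in Hansen's method, Ra^2 = 4(δd1-δd2)^2 + (δp1-δp2)^2 + (δh1-δh2)^2. A minimal sketch of the screening idea follows; the δ triples below are approximate literature values included purely for illustration, not the paper's data:

```python
# Hansen solubility parameter distance:
# Ra^2 = 4*(dd1-dd2)^2 + (dp1-dp2)^2 + (dh1-dh2)^2
from math import sqrt

def ra(hsp1, hsp2):
    """Distance between two (delta_d, delta_p, delta_h) triples, in MPa^0.5."""
    dd1, dp1, dh1 = hsp1
    dd2, dp2, dh2 = hsp2
    return sqrt(4 * (dd1 - dd2) ** 2 + (dp1 - dp2) ** 2 + (dh1 - dh2) ** 2)

# Approximate literature HSPs (MPa^0.5), illustrative only:
water = (15.5, 16.0, 42.3)
ethyl_acetate = (15.8, 5.3, 7.2)
acetone = (15.5, 10.4, 7.0)

# A smaller Ra between an organic compound and the membrane surface suggests
# higher solubility and, per the abstract's argument, a higher organic flux.
print(round(ra(water, ethyl_acetate), 1))
print(round(ra(water, acetone), 1))
```

Note that identical triples give Ra = 0, so Ra behaves as a distance in the (2δd, δp, δh) space.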
Procedia PDF Downloads 379
79 Identification of Information War in Lithuania
Authors: Vitalijus Leibenka
Abstract:
Since Russia's annexation of Crimea in 2014, the world has seen how hybrid warfare has helped Russia achieve its goals. The world and the NATO nations have pointed out that hybrid action can help achieve not only military but also economic and political goals. Among the weapons of hybrid warfare are information warfare tools, the use of which helps to carry out actions within hybrid warfare as a whole. In addition, information war tools can be used on their own, over time and for long-term purposes. Although forms of information war, such as propaganda and disinformation, have been used in past conflicts and wars, new forms of information war have emerged as a result of technological development, making the dissemination of information faster and more efficient. The world understands that information is becoming a weapon, but not everyone understands that information war and information warfare differ in their essence and full content. In addition, the damage and impact of information war, which may have worse consequences than a brief military conflict, are underestimated. Lithuania also faces various interpretations of the information war. Some believe that an information attack constitutes an information war and limit their understanding of information war to a false message in the press. Others, however, go deeper and explain the essence of the information war. Society has formed in such a way that not all people are able to assess the threats of information war or to separate information war from an information attack. Recently, the Lithuanian government has been taking measures in the context of the information war, making decisions that allow the state and state institutions to develop defense mechanisms for the information war. However, this is happening rather slowly and incompletely.
Every military conflict related to Lithuania in one way or another forces Lithuanian politicians to take up the theme of information warfare again. As a result, a national cyber security center is being set up, and Russian channels spreading lies are banned. However, there is no consistent development or continuous improvement of action against information threats. Although a sufficiently influential part of society (not the political part) helps stop the spread of dubious information by creating social projects such as "Demaskuok" and "Laikykis ten su Andriumi tapinu", these alone will not become the key tool in the fight against information threats. Therefore, in order to achieve clean dissemination of information in Lithuania, full-fledged and substantial political decisions are necessary, decisions that would change the public perception of the information war, its damage and impact, and the actions that would allow its spread to be combated. Political decisions should cover the educational, military, economic, and political areas, which are among the main and most important in the state and would allow the situation to be fundamentally changed against the background of information war.
Keywords: information war, information warfare, hybrid war, hybrid warfare, NATO, Lithuania, Russia
Procedia PDF Downloads 63
78 Sub-Optimum Safety Performance of a Construction Project: A Multilevel Exploration
Authors: Tas Yong Koh, Steve Rowlinson, Yuzhong Shen
Abstract:
In construction safety management, safety climate has long been linked to workers' safety behaviors and performance. For this reason, safety climate concepts and tools have been used as heuristics to diagnose a range of safety-related issues by some progressive contractors in Hong Kong and elsewhere. However, as a diagnostic tool, safety climate tends to treat the different components of the climate construct in a linear fashion. Safety management in construction projects, in reality, is a multi-faceted and multilevel phenomenon that resembles a complex system. Hence, understanding safety management in construction projects requires not only an understanding of safety climate but also of the organizational-systemic nature of the phenomenon. Our involvement in, diagnoses of, and interpretations of a range of safety climate-related issues that culminated in the sub-optimum safety performance of an infrastructure construction project brought about this revelation. In this study, a range of data types was collected from various hierarchies of the project site organization, including frontline workers and supervisors from the main and sub-contractors and the client's supervisory personnel. Data collection was performed through the administration of a safety climate questionnaire, interviews, observation, and document study. The findings collectively indicate that what emerged in parallel with the seemingly linear climate-based exploration is an exposition of the organizational-systemic nature of the phenomenon. The results indicate that mismatched climate perceptions, insufficient work planning and risk management, mixed safety leadership, negative workforce attributes, lapsed safety enforcement, and resource shortages collectively gave rise to the project's sub-optimum safety performance.
From the dynamic causation and multilevel perspectives, the analyses show that individual-, group-, and organizational-level issues are interrelated, and these interrelationships are linked to a negative safety climate. Hence, the adoption of both perspectives has enabled a fuller understanding of the phenomenon of safety management, one that points to the need for an organizational-systemic intervention strategy. The core message is that intervention at the individual level will meet with only limited success if the risks embedded at the higher levels of the group and project organization are not addressed. The findings can be used to guide the effective development of safety infrastructure by linking different levels of systems in a construction project organization.
Keywords: construction safety management, dynamic causation, multilevel analysis, safety climate
Procedia PDF Downloads 176
77 The Problem of Suffering: Job, The Servant and Prophet of God
Authors: Barbara Pemberton
Abstract:
Now that people of all faiths are experiencing suffering due to many global issues, shared narratives may provide common ground in which true understanding of each other may take root. This paper considers the all-too-common problem of suffering and addresses how adherents of the three great monotheistic religions seek understanding, and the appropriate believer's response, from the same story found within their respective sacred texts. Most scholars from each of these three traditions (Judaism, Christianity, and Islam) consider the writings of the Tanakh/Old Testament to at least contain divine revelation. While they may not agree on the extent of the revelation or the method of its delivery, they do share stories as well as a common desire to glean God's message for God's people from the pages of the text. One such shared story is that of Job, the servant of Yahweh, called Ayyub, the prophet of Allah, in the Qur'an. Job is described as a pious, righteous man who loses everything (family, possessions, and health) when his faith is tested. Three friends come to console him. Through it all, Job remains faithful to his God, who rewards him by restoring all that was lost. All three hermeneutic communities consider Job to be an archetype of human response to suffering, regarding Job's response to his situation as exemplary. The story of Job addresses more than the problem of evil. At stake in the story is Job's very relationship to his God. Some exegetes believe that Job was adapted into the Jewish milieu by a gifted redactor who used the original ancient tale as the "frame" for the biblical account (chapters 1, 2, and 42:7-17) and then enlarged the story with the complex center section of poetic dialogues, creating a complex work with numerous possible interpretations. Within the poetic center, Job goes so far as to question God, a response to which Jews relate, finding strength in dialogue, even in wrestling with God.
Muslims embrace only the Job of the biblical narrative frame, as further identified through the Qur'an and the prophetic traditions, considering the center section an errant human addition not representative of a true prophet of Islam. The Qur'anic injunction against questioning God also renders the center theologically suspect. Christians likewise draw various responses from the story of Job. While many believers may agree with the Islamic perspective of God's ultimate sovereignty, others would join their Jewish neighbors in questioning God, anticipating not answers but rather an awareness of his presence, with peace and hope becoming a reality experienced through the indwelling presence of God's Holy Spirit. Related questions are as endless as the possible responses. This paper considers a few of the many Jewish, Christian, and Islamic insights from the ancient story, in hopes that adherents within each tradition will use it to better understand the other faiths' approach to suffering.
Keywords: suffering, Job, Qur'an, Tanakh
Procedia PDF Downloads 187
76 The Effective Use of the Network in the Distributed Storage
Authors: Mamouni Mohammed Dhiya Eddine
Abstract:
This work studies the exploitation of the high-speed networks of clusters for distributed storage. Parallel applications running on clusters require both high-performance communications between nodes and efficient access to the storage system. Many studies on network technologies have led to the design of dedicated architectures for clusters with very fast communications between computing nodes. Efficient distributed storage in clusters has essentially been developed by adding parallelization mechanisms so that the server(s) may sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we address is: do the high-speed networks of clusters fit the requirements of transparent, efficient, and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed networks of clusters were designed to optimize communications between the different nodes of a parallel application. We study their utilization in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance, NFS, PVFS, or LUSTRE). Our experimental study, based on the GM programming interface of MYRINET high-speed networks for distributed storage, raised several interesting problems. Firstly, the specific memory utilization in the storage access system layers does not easily fit the traditional memory model of high-speed networks. Secondly, the client-server models used for distributed storage have specific requirements for message control and event processing, which are not handled by existing interfaces. We propose different solutions to solve communication control problems at the filesystem level. We show that a modification of the network programming interface is required.
Data transfer issues require an adaptation of the operating system. We detail several proposals for network programming interfaces that make them easier to use in the context of distributed storage. The integration of flexible data-transfer processing in the new programming interface MYRINET/MX is finally presented. Performance evaluations show that its usage in the context of both storage and other types of applications is easy and efficient.
Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface
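The client-server access model discussed above (the request/reply pattern behind NFS-style remote reads) can be sketched in miniature. The snippet below serves byte ranges of a file over plain TCP; it is a schematic stand-in only, since the actual study targets the GM/MX interfaces of MYRINET, and every name here is invented for the example:

```python
# Toy remote-storage read: a server answers "offset length" requests with
# raw file bytes, and a client reassembles the reply. Illustrative only.
import os
import socket
import tempfile
import threading

def serve_one_request(path, listener):
    """Answer a single 'offset length' request with the requested bytes."""
    conn, _ = listener.accept()
    with conn:
        offset, length = map(int, conn.makefile().readline().split())
        with open(path, "rb") as f:
            f.seek(offset)
            conn.sendall(f.read(length))

def remote_read(addr, offset, length):
    """Client side: request a byte range from the 'storage server'."""
    with socket.create_connection(addr) as c:
        c.sendall(f"{offset} {length}\n".encode())
        buf = b""
        while len(buf) < length:
            chunk = c.recv(length - len(buf))
            if not chunk:
                break
            buf += chunk
        return buf

# Demo on a throwaway local file.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"hello distributed storage")
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve_one_request,
                 args=(tf.name, listener), daemon=True).start()
data = remote_read(listener.getsockname(), 6, 11)
print(data.decode())
os.unlink(tf.name)
```

A high-performance interconnect replaces this copy-through-sockets path with registered memory and zero-copy DMA, which is exactly where the memory-registration and event-notification mismatches described in the abstract arise.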
Procedia PDF Downloads 222
75 Subjectivity in Miracle Aesthetic Clinic Ambient Media Advertisement
Authors: Wegig Muwonugroho
Abstract:
Subjectivity in advertisement is a 'power' possessed by advertisements to construct trends, concepts, truth, and ideology through the subconscious mind. Advertisements, in performing their function as message conveyors, use visual representation to show people what is ideal. Ambient media is an advertising medium that makes the best use of the environment where the advertisement is located. Miracle Aesthetic Clinic (Miracle) popularizes the visual representation of its ambient media advertisement through the omission of the face-image of the two female mannequins that serve as its ambient media models. Usually, the face of a model in an advertisement is an image commodity with selling value; however, the faces of the ambient media models in the Miracle advertisement campaign are suppressed against the table and wall. This concealment of the face creates not only a paradox of subjectivity but also a plurality of meaning. This research applies the critical discourse analysis method to analyze subjectivity and obtain insight into the ambient media's meaning. First, in the textual analysis stage, the attributes placed on the female mannequins imply that the models denote modern women whose identities match those of their social milieus. The communication signs being constructed are women who lose their subjectivity and 'feel embarrassed' to show their faces in public because of the pimples on their faces. Second, the analysis of discourse practice points out that ambient media, as a communication medium, has been comprehensively responded to by the targeted audiences. Ambient media plays the role of an actor because of its eye-catching setting, taking up space in the areas where the public wander. Indeed, when the public realize that the ambient media models are motionless, unlike humans, a stronger relation appears, marked by several responses from the targeted audiences.
Third, in the analysis of social practice, soap operas and celebrity gossip shows on television are the dominant discourse influencing the advertisement's meaning. The subjectivity of the Miracle advertisement corners women through the absence of women's participation in public space, the representation of women in isolation, and the portrayal of women as anxious about their social rank when their faces suffer from pimples. The ambient media campaign of Miracle is quite successful in constructing a new trend discourse of facial beauty that is not limited to the benchmarks of common beauty virtues; the idea of beauty can instead be presented by a visualization of 'when a woman doesn't look good'.
Keywords: ambient media, advertisement, subjectivity, power
Procedia PDF Downloads 324
74 An Econometric Analysis of the Flat Tax Revolution
Authors: Wayne Tarrant, Ethan Petersen
Abstract:
The concept of a flat tax goes back at least to the Biblical tithe. A progressive income tax was first vociferously espoused in a small but famous pamphlet in 1848 (although England had levied an emergency progressive tax for war costs prior to this). Within a few years, many countries had adopted the progressive structure. The flat tax was reinstated only in some small countries and British protectorates until Mart Laar was elected Prime Minister of Estonia in 1992. Since Estonia's adoption of the flat tax in 1993, many other formerly Communist countries have likewise abandoned progressive income taxes. Economists had expectations of what would happen when a flat tax was enacted, but very little work has been done on actually measuring the effect. With a testbed of 21 countries in this region that currently have a flat tax, much comparison is possible. Several countries have retained progressive taxes, giving an opportunity for contrast. There are also the cases of the Czech Republic and Slovakia, which adopted and later abandoned the flat tax. Further, with over 20 years' worth of economic history in some flat-tax countries, we can begin to do some serious longitudinal study. In this paper, we consider many economic variables to determine if there are statistically significant differences from before to after the adoption of a flat tax. We consider unemployment rates, tax receipts, GDP growth, Gini coefficients, and market data where the data are available. Comparisons are made through the use of event studies and time series methods. The results are mixed, but we draw statistically significant conclusions about some effects. We also look at the different implementations of the flat tax. In some countries, the income and corporate tax rates are equal. In others, the income tax has the lower rate, while in still others the reverse is true. Each of these sends a clear message to individuals and corporations, and the policy makers surely have a desired effect in mind.
We group countries with similar policies, try to determine if the intended effect actually occurred, and then report the results. This is a work in progress, and we welcome suggestions of variables to consider. Further, some of the data from before the fall of the Iron Curtain are suspect. Since there are new ruling regimes in these countries, the methods of computing different statistical measures have changed. Although we first look at the raw data as reported, we also attempt to account for these changes. We show which data seem to be fictional and suggest ways to infer the needed statistics from other data. These results are reported beside those on the reported data. Since there is debate about taxation structure, this paper can help inform policymakers of the changes the flat tax has caused in other countries. The work shows some strengths and weaknesses of a flat tax structure. Moreover, it provides the beginnings of a scientific analysis of the flat tax in practice, rather than discussion based solely upon theory and conjecture.
Keywords: flat tax, financial markets, GDP, unemployment rate, Gini coefficient
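The before/after comparison at the heart of an event study can be sketched as a Welch-style difference in means around the adoption year. The figures below are synthetic and purely illustrative, not data from any of the countries studied:

```python
# Event-study sketch: test for a mean shift in a macro series around an
# event year, using a Welch t-statistic. All numbers are made up.
from math import sqrt
from statistics import mean, stdev

def event_study(series, event_year):
    """Welch t-statistic for the mean shift at event_year.
    series: dict mapping year -> value (e.g., GDP growth in percent)."""
    pre = [v for y, v in series.items() if y < event_year]
    post = [v for y, v in series.items() if y >= event_year]
    se = sqrt(stdev(pre) ** 2 / len(pre) + stdev(post) ** 2 / len(post))
    return (mean(post) - mean(pre)) / se

# Fictitious growth rates around a hypothetical 1994 flat-tax adoption:
growth = {1990: -2.1, 1991: -8.0, 1992: -14.2, 1993: -6.5,
          1994: 2.0, 1995: 4.3, 1996: 3.9, 1997: 9.8}
print(round(event_study(growth, 1994), 2))
```

In practice, the time-series methods mentioned above would also control for region-wide trends (e.g., via a comparison group of progressive-tax neighbors), since transition economies grew for many reasons besides tax reform.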
Procedia PDF Downloads 341
73 Combat Plastic Entering in Kanpur City, Uttar Pradesh, India Marine Environment
Authors: Arvind Kumar
Abstract:
The city of Kanpur is located in the terrestrial plain on the bank of the river Ganges and is the second-largest city in the state of Uttar Pradesh. The city generates approximately 1400-1600 tons per day of MSW. Kanpur has been known as a major point- and non-point-source pollution hotspot for the river Ganges. The city is a major industrial hub, probably the largest in the state, catering to the manufacturing and recycling of plastic and other dry waste streams. There are 4 to 5 major drains flowing across the city, which receive a significant quantity of waste leakage that subsequently joins the Ganges flow and is carried to the Bay of Bengal. A river-to-sea flow approach has been established to account for waste leaked into urban drains, leading to the build-up of marine litter. Throughout its journey, the river accumulates plastic (macro, meso, and micro) from various sources and transports it towards the sea. The Ganges network forms the second-largest plastic-polluting catchment in the world, with over 0.12 million tonnes of plastic discharged into marine ecosystems per year, and is among the 14 continental rivers into which over a quarter of global waste is discarded. 3.150 kilotons of plastic waste is generated in Kanpur, of which 10%-13% leaks into the local drains and water flow systems. With the support of the Kanpur Municipal Corporation, a 1 TPD-capacity MRF for drain waste management was established at Krishna Nagar, Kanpur, and a German startup, Plastic Fisher, was identified to provide a solution for capturing the drain waste and recycling it sustainably with a circular economy approach. The team at Plastic Fisher conducted joint surveys and identified locations on 3 drains at Kanpur using GIS maps developed during the survey. It suggested putting floating 'boom barriers' across the drains made of a low-cost material, which reduced their cost to only 2000 INR per barrier.
The project was built upon a self-sustaining financial model and includes activities in which a cost-efficient model is developed and adopted for a socially self-inclusive model. The project has recommended the use of low-cost floating boom barriers for capturing waste from drains. This involves a one-time cost and has no operational cost. Manpower is engaged in fishing out and capturing the immobilized waste, with salaries paid by Plastic Fisher. The captured material is sun-dried and transported to a designated place, where the shed and power connection that act as the MRF are provided by the city municipal corporation. Material aggregation, baling, and the cost of transportation to end-users are borne by Plastic Fisher as well.
Keywords: Kanpur, marine environment, drain waste management, plastic fisher
Procedia PDF Downloads 71
72 Substitutional Inference in Poetry: Word Choice Substitutions Craft Multiple Meanings by Inference
Authors: J. Marie Hicks
Abstract:
The art of the poetic conjoins meaning and symbolism with imagery and rhythm. Perhaps the reader might read this opening sentence as 'The art of the poetic combines meaning and symbolism with imagery and rhythm,' which holds a similar message but is not quite the same. The reader understands that these factors are combined in this literary form, but to gain a sense of the conjoining of these factors, the reader is forced to consider that these aspects of poetry are not simply combined but actually adjoin, abut, skirt, or touch in the poetic form. This alternative word choice is an example of substitutional inference. Poetry is, ostensibly, a literary form where language is used precisely or creatively to evoke specific images or emotions for the reader. Often, the reader can predict a coming rhyme or descriptive word choice in a poem, based on a previous rhyming pattern or earlier imagery in the poem. However, there are instances when the poet uses an unexpected word choice to create multiple meanings and connections. In these cases, the reader is presented with an unusual phrase or image, requiring that they think about what that image is meant to suggest, while their mind also supplies the word they expected, creating a second, overlying image or meaning. This is what is meant by the term 'substitutional inference.' It is different from simply using a double entendre, a word or phrase that has two meanings, often one complementary and the other disparaging, or one innocuous and the other suggestive. In substitutional inference, the poet utilizes an unanticipated word that is either visually or phonetically similar to the expected word, provoking the reader to work to understand the poetic phrase as written while unconsciously incorporating the meaning of the line as anticipated.
In other words, by virtue of a word substitution, an inference of the logical word choice is imparted to the reader while they seek to rationalize the word that was actually used. A substitutional inference of meaning is created by the alternate word choice. For example, Louise Bogan, 4th Poet Laureate of the United States, used substitutional inference in the form of homonyms, malapropisms, and other unusual word choices in a number of her poems, lending depth and greater complexity while actively engaging her readers intellectually with her poetry. Substitutional inference not only adds complexity to the potential interpretations of Bogan's poetry, as well as the poetry of others, but also provides a method for writers to infuse additional meanings into their work, thus expressing more information in a compact format. Additionally, this nuancing enriches the poetic experience for the reader, who can enjoy the poem superficially as written or, on a deeper level, explore gradations of meaning.
Keywords: poetic inference, poetic word play, substitutional inference, word substitution
Procedia PDF Downloads 238
71 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception
Authors: Păcurar Diana Istina
Abstract:
The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, are the basis of the practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures, some considered art forms from the beginning (i.e., the text), others represented by simple, common objects (e.g., scenographic elements), which, when brought together, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which the audience interacts. It is necessary to explain some notions about the physiological support for the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center of extrinsic and intrinsic stimuli, permanently processes the information received, but even if that information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the 'correctness' of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether visual or auditory, involves the integration of three areas of interest: valence, arousal, and context control.
In this context, higher human cognitive processes (memory, interpretation, learning, the attribution of meanings, etc.) help trigger the mechanism of anticipation and, no less important, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental to the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and also to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centered on the percentage of signs used in a theater performance.
Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception
Procedia PDF Downloads 91
70 Quantum Conductance Based Mechanical Sensors Fabricated with Closely Spaced Metallic Nanoparticle Arrays
Authors: Min Han, Di Wu, Lin Yuan, Fei Liu
Abstract:
Mechanical sensors have undergone a continuous evolution and have become an important part of many industries, ranging from manufacturing to process, chemicals, machinery, health-care, environmental monitoring, automotive, avionics, and household appliances. Concurrently, microelectronics and microfabrication technology have provided us with the means of producing mechanical microsensors characterized by high sensitivity, small size, integrated electronics, on-board calibration, and low cost. Here we report a new kind of mechanical sensor based on the quantum transport of electrons in closely spaced nanoparticle films covering a flexible polymer sheet. The nanoparticle films were fabricated by gas-phase deposition of preformed metal nanoparticles with controlled coverage on the electrodes. To amplify the conductance of the nanoparticle array, we fabricated silver interdigital electrodes on polyethylene terephthalate (PET) by mask evaporation deposition. The gaps of the electrodes ranged from 3 to 30 μm. Metal nanoparticles were generated from a magnetron-plasma gas-aggregation cluster source and deposited on the interdigital electrodes. Closely spaced nanoparticle arrays with different coverages could be obtained by monitoring the conductance in real time. In the film, Coulomb blockade and quantum tunneling/hopping dominate the electronic conduction mechanism. The basic principle of the mechanical sensors relies on the mechanical deformation of the fabricated devices being translated into electrical signals. Several kinds of sensing devices have been explored. As a strain sensor, the device showed high sensitivity as well as a very wide dynamic range. A gauge factor as large as 100 or more was demonstrated, which is at least one order of magnitude higher than that of conventional metal foil gauges and even better than that of semiconductor-based gauges, with a workable maximum applied strain beyond 3%.
They have the potential to be a new generation of strain sensors with performance superior to that of existing strain sensors, including metallic strain gauges and semiconductor strain gauges. When integrated into a pressure gauge, the devices demonstrated the ability to measure pressure changes as small as 20 Pa near atmospheric pressure. Quantitative vibration measurements were realized on a free-standing cantilever structure fabricated with a closely spaced nanoparticle-array sensing element. Moreover, the mechanical sensor elements can easily be scaled down, making them feasible for MEMS and NEMS applications.
Keywords: gas phase deposition, mechanical sensors, metallic nanoparticle arrays, quantum conductance
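The gauge factor quoted above is the ratio of relative resistance change to applied strain. A minimal sketch of the arithmetic, with hypothetical resistance readings (not the paper's measurements):

```python
def gauge_factor(r0, r_strained, strain):
    """Gauge factor GF = (dR / R0) / strain."""
    return ((r_strained - r0) / r0) / strain

# Hypothetical readings: 1% strain raising resistance from 10.0 to 21.0 kOhm
gf = gauge_factor(r0=10.0, r_strained=21.0, strain=0.01)
print(gf)  # ~110, in the >=100 range reported for the nanoparticle devices
```

For comparison, a conventional metal foil gauge with GF around 2 would change its resistance by only 0.02% under the same strain.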
Procedia PDF Downloads 275
69 The Strategic Gas Aggregator: A Key Legal Intervention in an Evolving Nigerian Natural Gas Sector
Authors: Olanrewaju Aladeitan, Obiageli Phina Anaghara-Uzor
Abstract:
Despite the abundance of natural gas deposits in Nigeria and the immense potential this presents for both domestic and export-oriented revenue, there exists an imbalance in the preference for export over the development and optimal utilization of natural gas for the domestic industry. Considerable amounts of gas are still wasted by flaring in the country to this day. Although the government has set in place initiatives to harness gas at the flare and thereby reduce volumes flared, gas producers would rather direct the gas produced to the export market, while supply to the domestic market is often marred by a low domestic gas price that discourages producers. The exported fraction of gas production no doubt yields healthy revenues for the government and an encouraging return on investment for the gas producers, and for this reason export sales remain enticing and preferable to the domestic sale of gas. This export pull, if left unchecked, impacts negatively on the domestic market, which is in no position to match prices at the international markets. The issue of gas price remains critical to the optimal development of the domestic gas industry, in that it forms the basis for producers' investment decisions on the allocation of their scarce resources and on which projects to channel their output to in order to maximize profit. In order to rebalance the domestic industry and streamline the market for gas, the Gas Aggregation Company of Nigeria, also known as the Strategic Aggregator, was proposed under the Nigerian Gas Master Plan of 2008 and then established pursuant to the National Gas Supply and Pricing Regulations of 2008 to implement the domestic gas supply obligation, which focuses on ramping up gas volumes for domestic utilization by mandatorily requiring each gas producer to dedicate a portion of its gas production to domestic utilization before having recourse to the export market.
The 2008 Regulations further stipulate penalties in the event of non-compliance. This study assesses the adequacy of the legal framework for the Nigerian gas industry, given that the operational laws are structured more for oil than for gas; examines the legal basis for the Strategic Aggregator in the light of the Domestic Gas Supply and Pricing Policy 2008 and the National Domestic Gas Supply and Pricing Regulations 2008; and makes a case for a review of the pivotal role of the Aggregator in the Nigerian gas market. In undertaking this assessment, the doctrinal research methodology was adopted. Findings reveal the reawakening of the Federal Government to the immense potential of its gas industry as a critical sector of its economy and the need for a sustainable domestic natural gas market. A case for reviewing the ownership structure of the Aggregator to comprise a balanced mix of the Federal Government, gas producers, and other key stakeholders, in order to ensure effective implementation of the domestic supply obligations, becomes all the more imperative.
Keywords: domestic supply obligations, natural gas, Nigerian gas sector, strategic gas aggregator
Procedia PDF Downloads 230
68 Evaluating Urban City Indices: A Study for Investigating Functional Domains, Indicators and Integration Methods
Authors: Fatih Gundogan, Fatih Kafali, Abdullah Karadag, Alper Baloglu, Ersoy Pehlivan, Mustafa Eruyar, Osman Bayram, Orhan Karademiroglu, Wasim Shoman
Abstract:
Nowadays, many cities around the world invest their efforts and resources in facilitating their citizens' lives and making cities more livable and sustainable by implementing the newly emerged phenomenon of the smart city. For this purpose, research institutions prepare and publish smart city indices or benchmarking reports aiming to measure a city's current 'smartness' status. Several functional domains and various indicators, along with different selection and calculation methods, are found within such indices and reports. The selection criteria vary by institution, resulting in inconsistent rankings and evaluations. This research aims to evaluate the impact of selecting such functional domains, indicators, and calculation methods, which may cause changes in the rank. For that, six functional domains, i.e. Environment, Mobility, Economy, People, Living, and Governance, were selected, covering 19 focus areas and 41 sub-focus (variable) areas. 60 out of 191 indicators were also selected according to several criteria. These were identified through an extensive literature review of 13 well-known global indices and of the ISO 37120 standard for sustainable development of communities. The values of the identified indicators were obtained from reliable sources for ten cities and were normalized and standardized to objectively investigate the impact of the chosen indicators. Moreover, the effect of choosing an integration method to represent the values of the indicators for each city was investigated by comparing the results of two of the most used methods, i.e. geometric aggregation and fuzzy logic. The essence of these methods is assigning each indicator a weight reflecting its relative significance. However, the two methods resulted in different weights for the same indicator.
As a result of this study, the alternation in city ranking resulting from each method was investigated and discussed separately. Generally, each method produced a different ranking for the selected cities. However, it was observed that within certain functional areas the rank remained unchanged under both integration methods. Based on the results of the study, it is recommended to utilize a common platform and method to objectively evaluate cities around the world. The common method should provide policymakers with proper tools to evaluate their decisions and investments relative to other cities. Moreover, at least 481 different indicators were found across smart city indices, an immense number of indicators to consider, especially for a single smart city index. Further work should be devoted to finding mutual indicators that represent the index purpose globally and objectively.
Keywords: functional domain, urban city index, indicator, smart city
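The normalization and geometric-aggregation steps described above can be sketched as follows. The cities, raw values, and weights here are hypothetical placeholders, and normalized scores are rescaled to [0.01, 1] because a weighted geometric mean collapses to zero whenever any single score is zero:

```python
import math

def min_max_normalize(values):
    """Min-max normalize one indicator across all cities to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def geometric_aggregate(scores, weights):
    """Weighted geometric mean: prod(s_i ** w_i), with weights summing to 1."""
    return math.prod(s ** w for s, w in zip(scores, weights))

cities = ["CityA", "CityB", "CityC"]                # hypothetical cities
raw = [[80, 30, 60], [50, 70, 40], [90, 50, 20]]    # rows: cities, cols: indicators
weights = [0.5, 0.3, 0.2]                           # relative indicator significance

# Normalize each indicator column across cities, then rescale away from zero.
cols = [min_max_normalize(col) for col in zip(*raw)]
scores = [[0.01 + 0.99 * cols[j][i] for j in range(len(weights))]
          for i in range(len(cities))]
index = {c: geometric_aggregate(s, weights) for c, s in zip(cities, scores)}
print(sorted(index, key=index.get, reverse=True))   # ranking, best first
```

Changing the weights (or switching to another aggregation rule such as a fuzzy-logic scheme) can reorder the cities, which is exactly the sensitivity the study investigates.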
Procedia PDF Downloads 149
67 The Neuropsychology of Obsessive-Compulsive Disorder
Authors: Mia Bahar, Özlem Bozkurt
Abstract:
Obsessive-compulsive disorder (OCD) is a common, persistent, and long-lasting mental health condition in which a person experiences uncontrollable, recurrent thoughts ("obsessions") and/or activities ("compulsions") that they feel compelled to engage in repeatedly. Obsessive-compulsive disorder is both underdiagnosed and undertreated. It frequently manifests in a variety of medical settings and is persistent, expensive, and burdensome. Obsessive-compulsive neurosis was long believed to be a condition that offered valuable insight into the inner workings of the unconscious mind. Obsessive-compulsive disorder is now recognized as a prime example of a neuropsychiatric condition mediated by pathology in particular neural circuits and susceptible to particular pharmacotherapeutic and psychotherapeutic treatments. OCD usually has two components, one cognitive and the other behavioral, although either can occur alone. Obsessions are repetitive and intrusive thoughts that invade consciousness and are extremely hard to control or dismiss. People who have OCD often engage in rituals to reduce the anxiety associated with intrusive thoughts. Once the ritual is formed, the person may feel extreme relief and be free from anxiety until the thoughts, for example of contamination, intrude once again. These thoughts are strengthened through negative reinforcement, because the rituals allow the person to avoid anxiety and uncertainty. Such thoughts are described as autogenous, meaning they arise spontaneously, seemingly from nowhere. These unwelcome thoughts become tied to actions through what is termed thought-action fusion: the thought becomes equated with an action, such that if the person refuses to perform the ritual, something bad might happen, and so the ritual is performed to escape the intrusive thought. In almost all cases of OCD, the person's life is severely disrupted by compulsions and obsessions.
Studies estimate the prevalence of OCD at 1.1%, making it a challenging issue with high comorbidity with other conditions such as depressive episodes, panic disorder, and specific phobias. Numerous CT investigations were the first to reveal brain anomalies in OCD, although the results were inconsistent. A few studies have focused on the orbitofrontal cortex (OFC), anterior cingulate gyrus (AC), and thalamus, structures also implicated in the pathophysiology of OCD by functional neuroimaging studies, but few have found consistent results. However, some studies have found abnormalities in the basal ganglia. There has also been discussion of a genetic basis for OCD. OCD has been linked to families in studies of familial aggregation, and findings from twin studies show that this relationship is partly influenced by genetic variables. Some research has shown that OCD is a heritable, polygenic condition that can result from de novo harmful mutations as well as from common and rare variants. Numerous studies have also presented solid evidence for a significant additive genetic component to OCD risk, with distinct OCD symptom dimensions showing both shared and individual genetic risks.
Keywords: compulsions, obsessions, neuropsychiatric, genetic
Procedia PDF Downloads 65
66 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through process intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit-operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit-operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst-separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense and, hence, screening is carried out to discard the combinations that are meaningless.
For example, phase-change phenomena need the co-presence of energy-transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function, and combining these options across the functions in the process leads to a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model-generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
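The combine-then-screen step can be illustrated with a toy enumeration. The phenomena list and the single screening rule (phase change requires co-present energy transfer, as stated above) are simplified placeholders for the full knowledge base used in the paper:

```python
from itertools import combinations

# Hypothetical phenomena list; real lists are process-specific.
PHENOMENA = ["mixing", "reaction", "phase_change", "energy_transfer", "vl_equilibrium"]

def feasible(combo):
    """Screening rule: phase-change phenomena require co-present energy transfer."""
    if "phase_change" in combo and "energy_transfer" not in combo:
        return False
    return True

def generate_options(phenomena):
    """Enumerate all non-empty phenomena combinations, screen out the
    infeasible ones, and encode the survivors as binaries (1 = active),
    the form passed to the model-generation algorithm."""
    opts = []
    for r in range(1, len(phenomena) + 1):
        for combo in combinations(phenomena, r):
            if feasible(combo):
                opts.append(tuple(int(p in combo) for p in phenomena))
    return opts

options = generate_options(PHENOMENA)
print(len(options))  # 23 of the 31 non-empty subsets survive screening
```

With five phenomena there are 31 non-empty subsets, of which the 8 containing phase change without energy transfer are discarded, leaving 23 feasible options.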
Procedia PDF Downloads 234
65 Integrated Approach to Attenuate Insulin Amyloidosis: Synergistic Effects of Peptide and Cysteine Protease Enzymes
Authors: Shilpa Mukundaraj, Nagaraju Shivaiah
Abstract:
Amyloidogenic conditions, driven by protein aggregation into insoluble fibrils, pose significant challenges in diabetes management, particularly through the amyloidogenic LVEALYL sequence in the insulin B-chain. This study explores a dual therapeutic strategy targeting insulin amyloidosis with cysteine protease enzymes, such as papain and ficin, together with inhibitory peptides. Combining in silico, in vitro, and in vivo methodologies, the research aims to inhibit amyloid formation and degrade preformed fibrils. Inhibitory peptides were designed using structure-guided approaches in Rosetta to specifically target the LVEALYL sequence. Concurrently, cysteine protease enzymes, including papain and ficin, were evaluated for their fibril-disassembly potential. In vitro experiments using SDS-PAGE and spectroscopic techniques confirmed dose-dependent degradation of amyloid aggregates by these enzymes (50 to 300 µg in vitro; 60 mg/kg in vivo), with significant disaggregation observed at higher concentrations (20 mg). Peptide inhibitors effectively reduced fibril formation, as evidenced by reduced thioflavin T fluorescence and circular dichroism spectroscopy. Complementary in silico analyses, including molecular docking and dynamics simulations, provided structural insights into enzyme binding interactions with amyloidogenic regions. Key residues involved in substrate recognition and cleavage were identified, with computational findings aligning strongly with experimental data. These insights confirmed the specificity of papain and ficin in targeting insulin fibrils. For translational potential, an in vivo rat model was developed, involving subcutaneous insulin amyloid injections to induce localized amyloid deposits. Over six days of enzyme treatment, a marked reduction in amyloid burden was observed through histological findings, while a superoxide dismutase biochemical assay provided insight into oxidative damage due to amyloid deposition.
Furthermore, the inflammatory markers IL-6 and TNF-α were significantly attenuated in treated groups, emphasizing the dual role of the enzymes in amyloid clearance and inflammation modulation. This integrative study highlights the promise of cysteine protease enzymes and inhibitory peptides as complementary therapeutic strategies for managing insulin amyloidosis. By targeting both the formation and the persistence of amyloid fibrils, this dual approach offers a novel and effective avenue for amyloidosis treatment.
Keywords: insulin amyloidosis, peptide inhibitors, cysteine protease enzymes, amyloid degradation
Procedia PDF Downloads 6
64 ATR-IR Study of the Mechanism of Aluminum Chloride Induced Alzheimer Disease - Curative and Protective Effect of Lepidium sativum Water Extract on Hippocampus Rats Brain Tissue
Authors: Maha J. Balgoon, Gehan A. Raouf, Safaa Y. Qusti, Soad S. Ali
Abstract:
The main cause of Alzheimer disease (AD) is believed to be the accumulation of free radicals owing to oxidative stress (OS) in brain tissue. The mechanism of the neurotoxicity of aluminum chloride (AlCl3)-induced AD in hippocampal Albino Wistar rat brain tissue, and the curative and protective effects of Lepidium sativum (LS) water extract, were assessed after 8 weeks by attenuated total reflection infrared spectroscopy (ATR-IR) and histologically by light microscopy. ATR-IR results revealed that membrane phospholipids undergo free-radical attack, mediated by AlCl3, primarily affecting the polyunsaturated fatty acids, as indicated by the increased olefinic -C=CH sub-band area around 3012 cm-1 in the curve-fitting analysis. The narrowing in the half band width (HBW) of the sνCH2 sub-band around 2852 cm-1 due to Al intoxication indicates the presence of trans-form fatty acids rather than gauche rotamers. Degradation of the hydrocarbon chains to shorter chain lengths, increased membrane fluidity and disorder, and decreased lipid polarity in the AlCl3 group were indicated by changes in certain calculated area ratios compared to the control. Administration of LS greatly improved these parameters compared to the AlCl3 group. Al influences Aβ aggregation and plaque formation, which in turn interferes with and disrupts the membrane structure. The results also showed a marked increase in the parallel and antiparallel β-sheet structures that characterize Aβ formation in Al-induced AD hippocampal brain tissue, indicated by the detected increase in both amide I sub-bands around 1674 and 1692 cm-1. This drastic increase in Aβ formation was greatly reduced in the curative and protective groups compared to the AlCl3 group, approaching the control values. These results were supported by light microscopy: the AlCl3 group showed marked degenerative changes in hippocampal neurons, with most cells appearing small, shrunken, and deformed.
Interestingly, the administration of LS in the curative and protective groups markedly decreased the number of degenerated cells compared to the untreated group, and the intensity of Congo red-stained cells decreased. Hippocampal neurons looked more or less similar to those of the control. This study showed a promising therapeutic effect of Lepidium sativum (LS) on the AD rat model, countering the signs of oxidative stress on membrane lipids and reversing the protein misfolding.
Keywords: aluminum chloride, Alzheimer disease, ATR-IR, Lepidium sativum
Procedia PDF Downloads 367
63 Experiencing an Unknown City: Environmental Features as Pedestrian Wayfinding Clues through the City of Swansea, UK
Authors: Hussah Alotaishan
Abstract:
In today's globally driven modern cities, diverse groups of new visitors face various challenges when attempting to find a desired location if culture and language are barriers. The most common way-showing tools, such as directional and identificational signs, are the most problematic, and their usefulness can be limited or even non-existent. It is argued that new methods should be implemented that could support or replace such conventional literacy- and language-dependent wayfinding aids. Recent research studies have concluded that local urban features in complex pedestrian spaces are worthy of further study, in order to reveal whether they function as way-showing clues. Some researchers propose a more comprehensive approach to the complex perception of buildings, façade design, and surface patterns, while others question whether we necessarily need directional signs or whether other methods can deliver the same message more clearly for a wider range of users. This study aimed to test to what extent existing environmental and urban features in the city centre of Swansea, UK, facilitate the wayfinding process of a first-time visitor. The three-hour experiment involved attempting to find 11 visitor attractions ranging across recreational, historical, educational, and religious locations. The challenge was to find as many as possible with no prior geographical knowledge of their whereabouts; the only clues were 11 pictures, one for each location, acquired from the official City of Swansea website. An iPhone and a heart-rate tracker wristwatch were used to record the route taken and stress levels, and to take photographs of destinations and decision-making points throughout the journey.
This paper addresses current limitations in understanding how the physical environment can be intentionally deployed to help pedestrians find their way around, without language-dependent signage or with a reduction in it, and investigates visitors' perceptions of their surroundings by indicating which urban elements had an impact on the wayfinding process. The initial findings support the view that building façades and street features, such as width, can facilitate the decision-making process if strategically employed. More importantly, however, the anticipated features of a specific place construed from a promotional picture can also be misleading and create confusion that may lead to getting lost.
Keywords: pedestrian way-finding, environmental features, urban way-showing, environmental affordance
Procedia PDF Downloads 174
62 Fructose-Aided Cross-Linked Enzyme Aggregates of Laccase: An Insight on Its Chemical and Physical Properties
Authors: Bipasa Dey, Varsha Panwar, Tanmay Dutta
Abstract:
Laccase, a multicopper oxidase (EC 1.10.3.2), has been at the forefront as a superior industrial biocatalyst. Laccases are versatile in catalysing sustainable, environmentally benign reactions such as polymerisation, xenobiotic degradation, and bioremediation of phenolic and non-phenolic compounds. Regardless of these wide biotechnological applications, critical limiting factors, viz. reusability, retrieval, and storage stability, still prevail and can impede their applicability. Cross-linked enzyme aggregates (CLEAs) have emerged as a promising technique to restore these essential facets, albeit at the expense of some enzymatic activity. The carrier-free crosslinking method prevails over carrier-bound immobilisation in conferring high productivity and low production cost, owing to the absence of an additional carrier, and circumvents any non-catalytic ballast which could dilute the volumetric activity. The ε-amino group of lysyl residues is considered the best choice for forming a Schiff base with glutaraldehyde. Despite being most preferable, excess glutaraldehyde can bring about disproportionate and undesirable crosslinking within the catalytic site and hence cause catalytic losses. Moreover, the surface distribution of lysine residues in Trametes versicolor laccase is sparse. Thus, to mitigate the adverse effects of glutaraldehyde while limiting degradation or catalytic loss of the enzyme, crosslinking with inert substances like gelatine, collagen, bovine serum albumin (BSA), or excess lysine is practiced. Analogous to these molecules, sugars are well known as protein stabilisers. They help retain the structural integrity, specifically the secondary structure, of a protein during aggregation by changing the solvent properties, and are understood to avert protein denaturation and enzyme deactivation during precipitation.
We prepared cross-linked enzyme aggregates (CLEAs) of laccase from T. versicolor with the aid of sugars. The sugar CLEAs were compared with classic BSA and glutaraldehyde laccase CLEAs with respect to physico-chemical properties. The activity recovery for the fructose CLEAs was found to be ~20% higher than for the non-sugar CLEA. Moreover, the kcat/Km values of the sugar CLEAs were two- and three-fold higher than those of the BSA-CLEA and the GA-CLEA, respectively. The half-life (t1/2) of the sugar CLEA was higher than that of the GA-CLEAs and the free enzyme, indicating greater thermal stability. Besides, it demonstrated extraordinarily high pH stability, analogous to the BSA-CLEA. The promising attributes of increased storage stability and recyclability (>80%) give the sugar CLEAs a further edge over conventional CLEAs of the corresponding free enzyme. Thus, the sugar CLEA furnishes the rudimentary properties required of a biocatalyst and holds much promise.
Keywords: cross-linked enzyme aggregates, laccase immobilization, enzyme reusability, enzyme stability
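The kcat/Km comparison above is a ratio of specificity constants. A minimal sketch of the arithmetic, with hypothetical kinetic parameters (the abstract reports only the fold differences, not these numbers):

```python
def catalytic_efficiency(kcat, km):
    """Specificity constant kcat/Km, used to compare CLEA preparations."""
    return kcat / km

# Hypothetical parameters, illustrative only (kcat in 1/s, Km in mM).
fructose_clea = catalytic_efficiency(kcat=12.0, km=0.5)  # 24.0
bsa_clea = catalytic_efficiency(kcat=6.0, km=0.5)        # 12.0
print(fructose_clea / bsa_clea)  # 2.0, a two-fold difference of the kind reported
```

A higher kcat/Km can come from a larger turnover number, a smaller Km, or both, so the fold change alone does not say which parameter the sugar-aided crosslinking improved.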
Procedia PDF Downloads 103
61 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression
Authors: Anne M. Denton, Rahul Gomes, David W. Franzen
Abstract:
High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales with added noise. A comparison with ESRI ArcMap showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost compared with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect from DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression
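The additivity that makes the multi-scale approach cheap can be sketched with a toy grid: per-window sums and sums of squares are combined 2x2 at each iteration, so the variance at any power-of-two window size is obtained without revisiting the raw cells. The grid below is illustrative only, and the sketch shows variance rather than the full regression:

```python
def aggregate2x2(grid):
    """One aggregation step: sum each non-overlapping 2x2 block.

    Because sums (and sums of squares) are additive, repeating this step
    yields the statistics for 4x4, 8x8, ... windows in one pass per doubling.
    """
    n = len(grid)
    return [[grid[2 * i][2 * j] + grid[2 * i][2 * j + 1] +
             grid[2 * i + 1][2 * j] + grid[2 * i + 1][2 * j + 1]
             for j in range(n // 2)] for i in range(n // 2)]

def window_variance(sum_grid, sumsq_grid, cells):
    """Per-window variance from the aggregated sum and sum of squares."""
    return [[sq / cells - (s / cells) ** 2
             for s, sq in zip(srow, sqrow)]
            for srow, sqrow in zip(sum_grid, sumsq_grid)]

# Toy 4x4 "elevation" grid (hypothetical values).
dem = [[1.0, 1.0, 5.0, 5.0],
       [1.0, 1.0, 5.0, 5.0],
       [2.0, 2.0, 2.0, 2.0],
       [2.0, 2.0, 2.0, 2.0]]
sums = aggregate2x2(dem)                                   # 2x2 grid of window sums
sumsqs = aggregate2x2([[v * v for v in row] for row in dem])
var = window_variance(sums, sumsqs, cells=4)
print(var)  # every 2x2 window here is flat, so all variances are 0.0
```

In the full method, the regression parameters for slope are accumulated the same additive way, and the scale reported at each point is the one whose stored variance is minimal.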
Procedia PDF Downloads 129
60 Media Coverage on Child Sexual Abuse in Developing Countries
Authors: Hayam Qayyum
Abstract:
Print and broadcast media are considered to be among the most powerful agents of social change, an effective medium that can help transform a deteriorating society into a civilized, responsible, composed one. Among its major roles, an imperative role of the media is to highlight violations of human rights in order to raise awareness and to protect society from social evils and injustice. By exposing such wrongs, the media can lessen their magnitude within society. For centuries, the 'silent crime' of child sexual abuse (CSA) has plagued developing countries. This study explores how appropriate print and broadcast media coverage can help eliminate child sexual abuse from society. The immense challenge faced by journalists today is accurate and ethical reporting and appropriate coverage: disclosing the facts and delivering the right message at the right time to lessen social evils in developing countries, without harming the dignity of the victim. In cases of CSA, most victims and their families are unwilling to expose their children to the media, owing to family norms and concerns for respect in society. The media should focus on in-depth coverage of CSA and use it to draw the attention of the concerned authorities so that they look into the matter for reforms and reviews of the system. Moreover, the media, as a change agent, can bring such issues to the knowledge of the international community, enabling collective efforts with the affected country to eliminate this silent crime. The model country selected for this research paper is South Africa. The purpose of this research is not only to examine the existing reporting patterns and content of South African print and broadcast media coverage, but also to create awareness in order to eliminate child sexual abuse and, indirectly, to improve the condition of stakeholders in overcoming this social evil. The literature review method is used to formulate this paper.
Trends in media content on CSA will be identified, examining the amount and nature of information made available to the public through the media. A general view of media coverage of child sexual abuse in developing countries such as India and Pakistan will also be considered. This research is limited to the role of print and broadcast media coverage in eliminating child sexual abuse in South Africa; in developing countries generally, the CSA issue needs to be addressed on an immediate basis. The study will explore the CSA content of the most influential broadcast and print media outlets of South Africa: broadcast media will comprise TV channels, and print media will comprise influential newspapers.
Keywords: child sexual abuse, developing countries, print and broadcast media, South Africa
Procedia PDF Downloads 581
59 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent developments on this topic and presents results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall, aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket-tip time series. In this context, the arrival pattern of rain gauge bucket-tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite-state irreducible Markov process X(t).
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach is that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
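The DSPP construction described above can be illustrated with a minimal simulation: a hidden Markov process switches the Poisson rate between a low "dry" level and a high "wet" level, and bucket-tip times are generated at the current rate. This is a sketch of the general idea with a hypothetical two-state chain and invented rate values, not the authors' fitted model.

```python
import numpy as np

def simulate_mmpp(rates, q, t_end, rng):
    """Simulate a two-state Markov-modulated Poisson process (a simple DSPP).

    rates : Poisson event rate in each hidden state (bucket tips per time unit)
    q     : 2x2 generator matrix of the hidden Markov process X(t)
    Returns the event (bucket-tip) times in [0, t_end].
    """
    state, t, events = 0, 0.0, []
    while t < t_end:
        # Exponential holding time until the hidden state switches
        hold = rng.exponential(1.0 / -q[state, state])
        seg_end = min(t + hold, t_end)
        # Poisson number of tips at the current state's rate over the segment
        n = rng.poisson(rates[state] * (seg_end - t))
        events.extend(np.sort(rng.uniform(t, seg_end, n)))
        state = 1 - state           # two-state chain: jump to the other state
        t = seg_end
    return np.array(events)

rng = np.random.default_rng(42)
q = np.array([[-0.1, 0.1],         # slow-switching "dry" state
              [2.0, -2.0]])        # fast-switching "wet" state
tips = simulate_mmpp(rates=[0.01, 5.0], q=q, t_end=1000.0, rng=rng)
```

Fitting proceeds in the opposite direction: given observed tip times, the likelihood of such a process is evaluated by conditioning on X(t) and maximized over the rates and generator entries.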
Procedia PDF Downloads 279
58 Aesthetics and Semiotics in Theatre Performance
Authors: Păcurar Diana Istina
Abstract:
Structured in three chapters, the article attempts an X-ray of theatrical aesthetics, properly understood through the emotions generated in the intimate structure of the spectator, which precede the triggering of the viewer's perception, and not through the common but unfortunate conflation of the notion of aesthetics with the style in which a theatre show is built. The first chapter contains a brief history of the appearance of the word "aesthetic", formulations of definitions of this new term, and its connections with the notions of semiotics, in particular with the perception of the transmitted message. Although the discussion ranges from Aristotle and Plato to Magritte, their interventions should not be interpreted to mean that the two scientific concepts can merge into one discipline. The perception that is the object of everyone's analysis, the understanding of meaning, the decoding of transmitted messages, and the triggering of feelings that culminate in pleasure, shaping the aesthetic vision, are elements that keep semiotics and aesthetics distinct, even though they share many methods of analysis. The compositional processes of aesthetic representation and symbolic formation are analyzed in the second part of the paper from perspectives that may or may not include historical, cultural, social, and political processes. Aesthetics and the organization of its symbolic process are treated with expressive activity taken into account. The last part of the article explores the notion of aesthetics in applied theatre, more specifically in the theatre show. Taking the postmodern approach in which aesthetics applies both to the creation of an artifact and to the reception of that artifact, the intervention of these elements in the theatrical system must be emphasized, that is, the analysis of the problems arising in the stages of the creation, presentation, and reception by the public of the theatre performance.
The aesthetic process is triggered involuntarily, simultaneously with, or before the moment when people perceive the meaning of the messages transmitted by the work of art. This finding makes the mental process of aesthetics similar or related to that of semiotics. However individually beauty is perceived, its mechanism of production can be reduced to two steps. The first step presents similarities to Peirce's model, but the process between signifier and signified additionally stimulates the related memory of the evaluation of beauty, adding to the meanings related to the signification itself. In the second step, a process of comparison follows, in which one examines whether the object being looked at matches the accumulated memory of beauty. Therefore, even though aesthetics is derived from the conceptual part, the judgment of beauty and, more than that, moral judgment come to be so important to the social activities of human beings that they evolve as a visible process independent of other conceptual contents.
Keywords: aesthetics, semiotics, symbolic composition, subjective joints, signifier, signified
Procedia PDF Downloads 110
57 Diversity and Use of Agroforestry Yards of Family Farmers of Ponte Alta – Gama, Federal District, Brazil
Authors: Kever Bruno Paradelo Gomes, Rosana Carvalho Martins
Abstract:
Home gardens are production systems located near the home and quite common in the tropics. They consist of agricultural and forest species and may also involve the raising of small animals to produce food for subsistence as well as to generate income, with a special focus on the conservation of biodiversity. Home gardens are diverse agroforestry systems with multiple uses, among them food security, supplementary income, and traditional medicine. The work was carried out on rural properties of the family farmers of the Ponte Alta Rural Nucleus, Gama Administrative Region, in the city of Brasília, Federal District, Brazil. The present research is characterized methodologically as quantitative, exploratory and descriptive. The instruments used were a bibliographic survey and a semi-structured questionnaire. Data collection was performed through the application of a semi-structured questionnaire containing questions on the perception and behavior of the interviewed producer regarding the subject under analysis. In each question, the respondent explained his or her knowledge of sustainability, agroecological practices, environmental legislation, conservation methods, forest and medicinal species, agro-social and socioeconomic characteristics, the use and purpose of agroforestry, and technical assistance. The sample represented 55.62% of the universe of the study: 99 people aged 18-83 years were interviewed, with a mean age of 49 years. The low level of education, coupled with the lack of training and guidance for small family farmers in the Ponte Alta Rural Nucleus, is one of the limitations on the development of practices oriented toward sustainable and agroecological agriculture in the nucleus. It is observed that 50.5% of the interviewees established their agroforestry yards less than 20 years ago, and only 16.17% of the yards are older than 35 years.
With agriculture identified as the main activity of most of the rural properties studied, attention is drawn to medicinal plants, fruits and field crops as the most commonly extracted products. However, the crops in the backyards are grown exclusively for family consumption, which could be complemented by marketing the surplus, as well as by adding value to the cultivated products. Initiatives such as this may contribute to an increase in family income and to the motivation for, and valuing of, cultivation in agroecological gardens. We conclude that the home gardens of Ponte Alta are highly diverse, thus contributing to the conservation of local biodiversity; they are largely managed by women, ensure food security, and allow income generation. The tradition of existing knowledge on the use and management of the diversity of resources used in agroforestry yards is of paramount importance for the development of sustainable alternative practices.
Keywords: agriculture, agroforestry system, rural development, sustainability
Procedia PDF Downloads 141
56 Influence of Kneading Conditions on the Textural Properties of Alumina Catalysts Supports for Hydrotreating
Authors: Lucie Speyer, Vincent Lecocq, Séverine Humbert, Antoine Hugon
Abstract:
Mesoporous alumina is commonly used as a catalyst support for the hydrotreating of heavy petroleum cuts. The process of fabrication usually involves the synthesis of the boehmite (AlOOH) precursor, a kneading-extrusion step, and a calcination to obtain the final alumina extrudates. Alumina is a complex porous medium, generally consisting of agglomerates of aggregated nanocrystallites. Its porous texture directly influences active phase deposition, mass transfer, and the catalytic properties. It is therefore easy to see that each step of the fabrication of the supports plays a role in building their porous network and has to be well understood to optimize the process. The synthesis of boehmite by precipitation of aluminum salts has been extensively studied in the literature, and various parameters, such as temperature or pH, are known to influence the size and shape of the crystallites and the specific surface area of the support. The calcination step, through the topotactic transition from boehmite to alumina, determines the final properties of the support and can tune the surface area, pore volume and pore diameters relative to those of boehmite. The kneading-extrusion step, however, has been the subject of very few studies. It generally consists of two stages: an acid kneading followed by a basic kneading, in which the boehmite powder is introduced into a mixer and successively mixed with an acid and then a base solution to form an extrudable paste. During the acid kneading, the positive charges induced on the hydroxyl surface groups of boehmite create an electrostatic repulsion that tends to separate the aggregates and even, depending on the conditions, the crystallites. The basic kneading, by reducing the surface charges, leads to a flocculation phenomenon and can control the reforming of the overall structure.
The separation and reassembly of the particles constituting the boehmite paste have an obvious influence on the textural properties of the material. In this work, we focus on the influence of the kneading step on alumina catalyst supports. Starting from an industrial boehmite, extrudates are prepared under various kneading conditions. The samples are studied by nitrogen physisorption, in order to follow the evolution of the textural properties, and by synchrotron small-angle X-ray scattering (SAXS), a more original method that brings information about the agglomeration and aggregation state of the samples. Coupling physisorption and SAXS enables a precise description of the samples, as well as accurate monitoring of their evolution as a function of the kneading conditions, which are found to have a strong influence on the pore volume and pore size distribution of the supports. A mechanism for the evolution of the texture during the kneading step is proposed and could be attractive for optimizing the texture of the supports and, in turn, their catalytic performance.
Keywords: alumina catalyst support, kneading, nitrogen physisorption, small-angle X-ray scattering
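As a pointer to how SAXS data yield sizes of scattering objects such as aggregates, the standard Guinier analysis can be sketched: at low q the scattered intensity follows I(q) = I0 exp(-q²Rg²/3), so a straight-line fit of ln I against q² gives the radius of gyration Rg. The numbers below are synthetic and illustrative only; the abstract does not specify the authors' actual SAXS data treatment.

```python
import numpy as np

# Synthetic low-q SAXS intensity obeying the Guinier law I(q) = I0 exp(-q^2 Rg^2 / 3)
rg_true = 4.0                              # radius of gyration, nm (illustrative)
q = np.linspace(0.01, 1.0 / rg_true, 50)   # stay in the Guinier regime, q*Rg <~ 1
intensity = 100.0 * np.exp(-(q * rg_true) ** 2 / 3.0)

# Linearize: ln I = ln I0 - (Rg^2 / 3) q^2, then fit a straight line in q^2
slope, intercept = np.polyfit(q ** 2, np.log(intensity), 1)
rg_fit = float(np.sqrt(-3.0 * slope))      # recover Rg from the slope
```

Tracking how such a fitted size evolves with kneading conditions is one way SAXS can monitor the break-up and reassembly of aggregates.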
Procedia PDF Downloads 254
55 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow
Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather
Abstract:
The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows with relevance to processes employed in a wide range of applications, such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid phase behavior in turbulent flow on particle dispersion in a horizontal channel is investigated. The mathematical modeling technique used is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM) which is used for the prediction of particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three different channel flows with shear Reynolds numbers, Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research work has focused on the prediction of those conditions favoring particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of flow turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase, fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example, air conditioning and refrigeration units, heat exchangers, oil and gas suction and pressure lines. 
The particle sizes, densities, surface energies and volume fractions selected are 45.6, 102 and 150 µm; 250, 1000 and 2159 kg m-3; 50, 500 and 5000 mJ m-2; and 7.84 × 10-6, 2.8 × 10-5 and 1 × 10-4, respectively. Such particle properties are associated with particles found in soil, as well as with metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It has been found that the turbulence structure of the flow dominates the motion of the particles, creating particle-particle interactions, with most of these interactions taking place at locations close to the channel walls and in regions of high turbulence, where agglomeration is aided both by the high levels of turbulence and by the high concentration of particles. A positive relationship between particle surface energy, concentration, size and density, and agglomeration was observed. Moreover, the results derived for the three Reynolds numbers considered show that the rate of agglomeration of high-surface-energy particles is strongly influenced by, and increases with, the intensity of the flow turbulence. In contrast, for lower-surface-energy particles, the rate of agglomeration diminishes with an increase in flow turbulence intensity.
Keywords: agglomeration, channel flow, DEM, LES, turbulence
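The strong surface-energy dependence reported above can be illustrated with the commonly quoted JKR-based critical sticking velocity of Thornton and Ning, below which a head-on collision leaves two particles adhered. This is a back-of-envelope sketch, not the contact model actually configured in EDEM, and the effective Young's modulus used is a hypothetical order-of-magnitude value. Note that size and density enter with negative exponents, so their positive effect on agglomeration in the simulations arises from collision frequency rather than from stickier individual impacts.

```python
import numpy as np

def sticking_velocity(gamma, rho, radius, e_star):
    """JKR-based critical sticking velocity (m/s), Thornton & Ning (1998) form.

    gamma : surface energy (J/m^2)      rho    : particle density (kg/m^3)
    radius: particle radius (m)         e_star : effective Young's modulus (Pa)
    Impacts slower than this velocity end in adhesion rather than rebound.
    """
    return 1.84 * (gamma ** 5 / (rho ** 3 * e_star ** 2 * radius ** 5)) ** (1.0 / 6.0)

e_star = 1e8            # hypothetical effective modulus, Pa (illustrative only)
radius = 45.6e-6 / 2    # smallest particle size from the study
rho = 1000.0            # mid-range density from the study

# Surface energies from the study: 50, 500 and 5000 mJ/m^2
v50, v500, v5000 = (sticking_velocity(g, rho, radius, e_star)
                    for g in (0.05, 0.5, 5.0))
```

Because v_crit scales as gamma^(5/6), each tenfold increase in surface energy raises the sticking velocity by a factor of about 6.8, consistent with the much higher agglomeration rates seen for the 5000 mJ m-2 particles.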
Procedia PDF Downloads 318
54 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation further highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It then describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta).
The user also feeds data-expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
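The parameter-space outlier step mentioned above can be sketched as follows: standardize the dynamic training window, characterize its typical nearest-neighbour spacing, and flag prediction points that fall farther from every training point than that spacing allows. This is a hypothetical minimal illustration of the idea, not FreqAI's actual implementation; the function names and the threshold factor k are invented for this sketch.

```python
import numpy as np

def fit_space(X):
    """Characterize the training data's parameter space: standardize it and
    record the mean nearest-neighbour distance between training points."""
    mu, sd = X.mean(0), X.std(0) + 1e-12
    Z = (X - mu) / sd
    d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # ignore self-distances
    return {"mu": mu, "sd": sd, "Z": Z, "mean_nn": d.min(1).mean()}

def is_outlier(space, x, k=1.5):
    """Flag a prediction point lying farther from every training point than
    k times the training set's mean nearest-neighbour distance."""
    z = (x - space["mu"]) / space["sd"]
    nearest = np.linalg.norm(space["Z"] - z, axis=-1).min()
    return nearest > k * space["mean_nn"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))           # stand-in for recent features
space = fit_space(X_train)
inside = is_outlier(space, np.zeros(5))       # near the centre of the cloud
far = is_outlier(space, np.full(5, 10.0))     # far outside the cloud
```

Predictions made on flagged points would be discarded rather than traded on, since the model is extrapolating outside the parameter space it was trained in.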
Procedia PDF Downloads 89
53 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among the MPI processes (i.e. CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks) whereas others are dense with random links (e.g. consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted.
Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e. 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI communication, MPI process
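The employer-employee partitioning idea can be sketched without any MPI machinery: keep each firm and all of its workers on one rank, so those interactions need no messages, and balance the remaining load greedily across ranks. This is a hypothetical illustration of the partitioning principle, not the authors' code; in a real DMP implementation the resulting assignment would then drive communicator setup and message scheduling.

```python
from collections import defaultdict

def partition_by_employer(employer_of, n_procs):
    """Assign agents to ranks so each employer and all of its employees land
    on the same rank (employer-employee edges stay local, needing no messages).

    employer_of : dict mapping worker id -> employer (firm) id
    Greedy bin packing: largest firms (with their staff) go first, each to the
    currently least-loaded rank. Returns the assignment and per-rank loads.
    """
    staff = defaultdict(list)
    for worker, firm in employer_of.items():
        staff[firm].append(worker)
    load, rank_of = [0] * n_procs, {}
    for firm, workers in sorted(staff.items(), key=lambda kv: -len(kv[1])):
        r = load.index(min(load))       # least-loaded rank so far
        rank_of[firm] = r
        for w in workers:
            rank_of[w] = r
        load[r] += 1 + len(workers)     # firm agent plus its workers
    return rank_of, load

# Toy economy: 100 workers spread over 7 firms, partitioned across 4 ranks
employer_of = {f"w{i}": f"f{i % 7}" for i in range(100)}
rank_of, load = partition_by_employer(employer_of, n_procs=4)
```

The remaining, denser graphs (e.g. consumption markets) would then be served through the rank-local proxies the abstract describes, such as sales outlets and local banks.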
Procedia PDF Downloads 130
52 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method
Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov
Abstract:
The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is the most effective technique. It has a high potential in the study of biological solutions and their properties. This technique allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the method of total internal reflection allows one to study biological fluids at the level of single molecules, which also increases the sensitivity and informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble, which is important in the study of biomolecular fluids.
To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time; latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light-scattering signal was registered by a pin photodiode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform, in the regime of Brownian motion and under the action of the field, were calculated to obtain the parameters of the object investigated. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection
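The signal-processing chain just described, from photodetector trace to autocorrelation function and Fourier spectrum, can be sketched on a synthetic signal. The sample rate, beat frequency and noise level below are invented for illustration; the point is that, via the Wiener-Khinchin theorem, one FFT yields both the power spectrum, whose peak gives the Doppler shift of the field-driven particles, and the autocorrelation function.

```python
import numpy as np

# Synthetic photodetector trace: a Doppler beat from particles drifting in the
# applied field, plus detector noise (all values illustrative only)
fs, f_doppler = 10_000.0, 230.0              # sample rate and beat frequency, Hz
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
signal = np.cos(2 * np.pi * f_doppler * t) + 0.3 * rng.standard_normal(t.size)

# Wiener-Khinchin: the autocorrelation is the inverse FFT of the power spectrum
spectrum = np.abs(np.fft.rfft(signal)) ** 2
acf = np.fft.irfft(spectrum)[: t.size // 2]
acf /= acf[0]                                # normalize so acf(0) = 1

# The spectral peak recovers the Doppler shift, from which the drift velocity
# and hence the electrophoretic mobility of the scatterers follow
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

With the field off, the same autocorrelation function instead decays with the Brownian diffusion time, which is how the two regimes mentioned above are separated.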
Procedia PDF Downloads 216
51 Story Telling Method as a Bastion of Local Wisdom in the Frame of Education Technology Development in Medan, North Sumatra-Indonesia
Authors: Mardianto
Abstract:
Education and learning are now developing rapidly. The synergy of technology, especially instructional technology, in learning activities has a very large influence on the effectiveness of learning and on the creativity needed to achieve optimal results. On the other hand, there are educational values that are difficult to articulate through technology: character-forming values such as honesty, discipline, hard work, and heroism. The storytelling learning strategy has, from the past until today, remained an option for teachers for conveying the message of character values. With material drawn from the local culture (folklore), the combination of the learning objective (building the child's character), the strategy, the traditional method (storytelling), and the preservation of local culture (collecting folklore) is critical to maintaining the nation's culture. In this context, beginning from an early age, at the elementary school level, is a necessity. Globalization, the internet and technology sometimes seem to displace the role of the teacher in learning activities, so the oral tradition on which storytelling relies should be maintained and preserved. This research was conducted in elementary schools in the city of Medan, North Sumatra, Indonesia, using a random sampling technique: the respondents were 27 class teachers randomly assigned from both public and private Madrasah Ibtidaiyah (Islamic elementary schools). The research, conducted at the beginning of 2014, refers to a curriculum then being transformed within the Ministry of Religion of the Republic of Indonesia. The results of this study indicate a decline in teachers' storytelling skills: 74.07% of teachers had never attended special storytelling training, 85.19% no longer write new story scripts, and only 22.22% incorporate the story method into their lesson plans.
Most teachers are no longer concerned with storytelling. Among the difficulties they report in developing the story method: 66.67% say children are more interested in cartoons such as BoBoiBoy, Angry Birds and others, and 59.26% say children prefer other activities to listening to a story. The teachers hope that folklore books will be preserved, that storytelling training will be provided by the government through the Ministry of Religion, that storytelling competitions will be scheduled regularly, and that the writing of new, popularly rooted storytelling scripts will be supported immediately. The teachers' hopes are certainly not excessive: by realizing the story method as an articulation of efforts at character development rooted in the people, local wisdom can be a strong fortress for society in an era of progress such as the present, and in the future.
Keywords: story telling, local wisdom, education, technology development
Procedia PDF Downloads 279