Search results for: system dynamic model
558 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids
Authors: Laith F. Gulli, Nicole M. Mallory
Abstract:
The undertaking to develop a psychometric instrument is monumental. Understanding the relationship between variables and events is important in the structural and exploratory design of psychometric instruments. With this in mind, we describe a method used to group, pair, and combine multiple Philosophical Assumption statements that assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PA)s and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics, which were utilized in design and development for 1) information retrieval and categorization, 2) item development, and 3) estimation of interactions among variables and the likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by Cronbach’s α, of .81 in the final study of 40 clinicians, calculated in SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of target populations that practiced within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items), and a modifiable “barrier from learning” item that consisted of six “choose any” choices. We also conceptualized a close relationship between identifying the various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace relationships of therapist education within these two domains.
The two perception domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions of using EB-NAR during the beginning of outpatient addictions counseling. The (PA)s were written in simple words and were descriptively accurate and concise. We then devised a list of parameters, matched them appropriately to each PA, and devised descriptive parametric (PA)s in a domain-categorized information grid. Descriptive parametric (PA)s were reduced to simple mathematical symbols. This made it easy to incorporate parametric (PA)s into algorithms, combinatorics, and clusters to develop larger information grids. Using matching combinatorics, we took the paired demographic and education domains with a subscript of 1 and matched them to the column of each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education, and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit in combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and trace relationships of maximum likelihoods.
Keywords: psychometric, parametric, domains, grids, therapists
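The internal consistency reported above (Cronbach’s α of .81 for the seven Likert belief items, computed in SPSS) follows the standard formula α = k/(k−1) · (1 − Σσᵢ²/σₜ²). As a hedged illustration only, a minimal Python sketch with invented Likert responses (these are not the study’s data, and the study used SPSS rather than code like this):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents, 3 items
scores = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 3, 4],
], dtype=float)
print(round(cronbach_alpha(scores), 3))
```

With perfectly correlated items the statistic reaches 1.0; values around .8, as in the study, are conventionally read as good internal consistency for a short belief scale.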
Procedia PDF Downloads 278
557 PARP1 Links Transcription of a Subset of RBL2-Dependent Genes with Cell Cycle Progression
Authors: Ewelina Wisnik, Zsolt Regdon, Kinga Chmielewska, Laszlo Virag, Agnieszka Robaszkiewicz
Abstract:
Apart from protecting the genome, PARP1 has been documented to regulate many intracellular processes, inter alia gene transcription, by physically interacting with chromatin-bound proteins and by their ADP-ribosylation. Our recent findings indicate that expression of PARP1 decreases during the differentiation of human CD34+ hematopoietic stem cells to monocytes as a consequence of differentiation-associated cell growth arrest and formation of an E2F4-RBL2-HDAC1-SWI/SNF repressive complex at the promoter of this gene. Since RBL2 complexes repress genes in an E2F-dependent manner and are widespread in the genome in G0-arrested cells, we asked (a) whether RBL2 directly contributes to defining monocyte phenotype and function by targeting gene promoters and (b) whether RBL2 controls gene transcription indirectly by repressing PARP1. For identification of genes controlled by RBL2 and/or PARP1, we used primer libraries for surface receptors and TLR signaling mediators; genes were silenced by siRNA or shRNA; occupation of gene promoters by selected proteins was analyzed by ChIP-qPCR; statistical analysis was performed in GraphPad Prism 5 and STATISTICA; and ChIP-Seq data were analysed in Galaxy 2.5.0.0. Of the 28 genes regulated by RBL2, we identified only four solely repressed by the RBL2-E2F4-HDAC1-BRM complex. Surprisingly, 24 of the 28 genes controlled by RBL2 were co-regulated by PARP1 in six different manners. In one mode of RBL2/PARP1 co-operation, represented by MAP2K6 and MAPK3, PARP1 was found to associate with gene promoters upon RBL2 silencing, which was previously shown to restore PARP1 expression in monocytes. The PARP1 effect on gene transcription was observed only in the presence of active EP300, which acetylated gene promoters and activated transcription.
Further analysis revealed that PARP1 binding to MAP2K6 and MAPK3 promoters enabled recruitment of EP300 in monocytes, while in proliferating cancer cell lines, which actively transcribe PARP1, this protein maintained EP300 at the promoters of MAP2K6 and MAPK3. Genome-wide analysis revealed a similar distribution of PARP1 and EP300 around transcription start sites and the co-occupancy of some gene promoters by PARP1 and EP300 in cancer cells. Here, we describe a new RBL2/PARP1/EP300 axis which controls gene transcription regardless of the cell type. In this model, cell cycle-dependent transcription of PARP1 regulates expression of some genes repressed by RBL2 upon cell cycle limitation. Thus, RBL2 may indirectly regulate transcription of some genes by controlling the expression of EP300-recruiting PARP1. Acknowledgement: This work was financed by Polish National Science Centre grants no. DEC-2013/11/D/NZ2/00033 and DEC-2015/19/N/NZ2/01735. L.V. is funded by the National Research, Development and Innovation Office grants GINOP-2.3.2-15-2016-00020 TUMORDNS, GINOP-2.3.2-15-2016-00048-STAYALIVE and OTKA K112336. A.R. is supported by Polish Ministry of Science and Higher Education grant 776/STYP/11/2016.
Keywords: retinoblastoma transcriptional co-repressor like 2 (RBL2), poly(ADP-ribose) polymerase 1 (PARP1), E1A binding protein p300 (EP300), monocytes
Procedia PDF Downloads 209
556 Literacy Practices in Immigrant Detention Centers: A Conceptual Exploration of Access, Resistance, and Connection
Authors: Mikel W. Cole, Stephanie M. Madison, Adam Henze
Abstract:
Since 2004, the U.S. immigrant detention system has imprisoned more than five million people. President John F. Kennedy famously dubbed this country a “Nation of Immigrants.” Like many of the nation’s imagined ideals, the historical record finds that its practices have never lived up to the tenets championed as defining qualities. The United Nations High Commissioner for Refugees argues that the educational needs of people in carceral spaces, especially those in immigrant detention centers, are urgent and supported by human rights guarantees. However, there is a genuine dearth of literacy research in immigrant detention centers, compounded by a general lack of access to these spaces. Denying access to literacy education in detention centers is one way the history of xenophobic immigration policy persists. In this conceptual exploration, first-hand accounts from detained individuals, their families, and the organizations that work with them have been shared with the authors. In this paper, the authors draw on experiences, reflections, and observations from serving as volunteers to develop a conceptual framework for the ways in which literacy practices are enacted in detention centers. Literacy is an essential tool for reaching those held in immigrant detention centers and a critical tool for those being detained to access legal and other services. One of the most striking things about the detention center is learning how to behave there: gaining access for a visit is neither intuitive nor straightforward. The men experiencing detention are also at a disadvantage. The lack of access to their own documents is a profound barrier for men navigating the complex immigration process. Literacy is much more than a skill for gathering knowledge or accessing carceral spaces; literacy is fundamentally a source of personal empowerment.
Frequently, men find a way to reclaim their sense of dignity through work on their own terms by exchanging their literacy services for products or credits at the commissary. They write cards and letters for fellow detainees, read mail, and manage the exchange of information between the men and their families. In return, the men who have jobs trade items from the commissary or transfer money to the accounts of the men doing the reading, writing, and drawing. Literacy serves as a form of resistance by providing an outlet for productive work. At its core, literacy is the exchange of ideas between an author and a reader and is a primary source of human connection for individuals in carceral spaces. Father’s Day and Christmas are particularly difficult at detention centers. Men weep when speaking about their children and the overwhelming hopelessness they feel at being separated from them. Yet card-writing campaigns have provided these men with words of encouragement as thousands of hand-written cards make their way to the detention center. There are undoubtedly more literacies being practiced in the immigrant detention center where we work and at other detention centers across the country, and these categories are early conceptions with which we are still wrestling.
Keywords: detention centers, education, immigration, literacy
Procedia PDF Downloads 128
555 A Preliminary Randomized Controlled Trial of Pure L-Ascorbic Acid with Using a Needle-Free and Micro-Needle Mesotherapy in Treatment of Anti-Aging Procedure
Authors: M. Zasada, A. Markiewicz, A. Erkiert-Polguj, E. Budzisz
Abstract:
The epidermis is a keratinized stratified squamous epithelium covered by a hydro-lipid barrier; therefore, active substances must be able to penetrate this hydro-lipid coating. L-ascorbic acid is a vitamin which plays an important role in stimulating fibroblasts to produce collagen type I and in lightening hyperpigmentation. Vitamin C is a water-soluble antioxidant, which protects skin from oxidative damage and rejuvenates photoaged skin. No-needle mesotherapy is a non-invasive rejuvenation technique based on electric pulses, electroporation, and ultrasound. These physical factors result in deeper penetration of cosmetics. It is important to increase the penetration of L-ascorbic acid, thereby increasing the spectrum of its activity. The aim of the work was to assess the effectiveness of pure L-ascorbic acid in anti-aging therapy using needle-free and micro-needling mesotherapy. The study was performed on a group of 35 healthy volunteers in accordance with the Declaration of Helsinki of 1964 and the approval of the Ethics Commission no. RNN/281/16/KE 2017. Women were randomized to a mesotherapy or control group. The control group applied topically 2.5 ml of serum containing 20% L-ascorbic acid with strawberry hydrolate, every 10 days for a period of 9 weeks. In the mesotherapy group, no-needle mesotherapy was performed on the left half of the face and micro-needling on the right, with the same serum. The pH of the serum was 3.5-4, and the serum was prepared directly prior to the facial treatment. The skin parameters were measured at the beginning and before each treatment. The measurement of the forehead skin was done using a Cutometer® (measurement of skin elasticity and firmness), Corneometer® (skin hydration measurement), and Mexameter® (skin tone measurement). Photographs were also taken with the Fotomedicus system. Additionally, the volunteers completed a questionnaire.
The serum was tested for microbiological purity and for stability after opening. During the study, all of the volunteers were under the care of a dermatologist. Regular application of the serum improved the skin parameters: after 4 and 8 weeks, respectively, improvement in hydration and elasticity was seen (Corneometer®, Cutometer® results). Moreover, the number of hyperpigmented spots decreased (Mexameter®). After 8 weeks, the volunteers reported that the tested product had smoothing and moisturizing features, and subjective opinions indicated significant improvement of skin color and elasticity. The product containing L-ascorbic acid used with intercellular penetration promoters demonstrates higher anti-aging efficiency than the control. In vivo studies confirmed the effectiveness of the serum and the impact of the active substance on skin firmness and elasticity, the degree of hydration, and skin tone. Mesotherapy with pure L-ascorbic acid provides better diffusion of active substances through the skin.
Keywords: anti-aging, l-ascorbic acid, mesotherapy, promoters
Procedia PDF Downloads 265
554 Change of Education Business in the Age of 5G
Authors: Heikki Ruohomaa, Vesa Salminen
Abstract:
Regions face huge competition to attract companies, businesses, inhabitants, students, etc., and in this way to improve their living and business environments, which are rapidly changing due to digitalization. On the other hand, from industry's point of view, the availability of a skilled labor force and an innovative environment are crucial factors. In this context, qualified staff are needed to utilize the opportunities of digitalization and respond to the needs of future skills. The World Manufacturing Forum stated in its 2019 report that in the next five years, 40% of workers will have to change their core competencies. Through digital transformation, with new technologies like cloud, mobile, big data, 5G infrastructure, platform technology, data analysis, and social networks with increasing intelligence and automation, enterprises can capitalize on new opportunities and optimize existing operations to achieve significant business improvement. Digitalization will be an important part of the everyday life of citizens and present in the working day of the average citizen and employee in the future. For that reason, the education system and education programs on all levels, from diaper age to doctorate, have been directed to fulfill this ecosystem strategy. Goal: The Fourth Industrial Revolution will bring unprecedented change to societies, education organizations, and business environments. This article aims to identify how education, education content, the way education proceeds, and the education business as a whole are changing. Most important is how we should respond to this inevitable co-evolution. Methodology: The study aims to verify how the learning process is boosted by new digital content, new learning software and tools, and customer-oriented learning environments. The change of education programs and individual education modules can be supported by applied research projects.
Such projects can be used to make a proof of concept of new technology and of new ways to teach and train, and, through the experiences gathered, to change education content, the way we educate, and finally the education business as a whole. Major findings: Applied research projects can prove concept phases in real-environment field labs to test technology opportunities and new tools for training purposes. Customer-oriented applied research projects are also excellent for students, who can complete assignments and use new knowledge and content, and for teachers, who can test new tools and create new ways to educate. New content and problem-based learning are used in future education modules. This article introduces some case study experiences from customer-oriented digital transformation projects and how the knowledge gathered on new digital content and new ways to educate has influenced education. The case study is related to experiences from research projects, customer-oriented field labs/learning environments, and education programs of Häme University of Applied Sciences.
Keywords: education process, digitalization content, digital tools for education, learning environments, transdisciplinary co-operation
Procedia PDF Downloads 176
553 Songwriting in the Postdigital Age: Using TikTok and Instagram as Online Informal Learning Technologies
Authors: Matthias Haenisch, Marc Godau, Julia Barreiro, Dominik Maxelon
Abstract:
In times of ubiquitous digitalization and the increasing entanglement of humans and technologies in musical practices in the 21st century, it must be asked how popular musicians learn in the (post)digital age. Against the backdrop of increasing interest in transferring informal learning practices into formal settings of music education, the interdisciplinary research association »MusCoDA – Musical Communities in the (Post)Digital Age« (University of Erfurt/University of Applied Sciences Clara Hoffbauer Potsdam), funded by the German Ministry of Education and Research, pursues the goal of deriving an empirical model of collective songwriting practices from the study of the informal learning of songwriters and bands that can be translated into pedagogical concepts for music education in schools. Drawing on concepts from Community of Musical Practice and Actor-Network Theory, learning is considered not only as social practice and as participation in online and offline communities, but also as an effect of heterogeneous networks composed of human and non-human actors. Learning is not seen as an individual, cognitive process, but as the formation and transformation of actor networks, i.e., as a practice of assembling and mediating humans and technologies. Based on video-stimulated recall interviews and videography of online and offline activities, songwriting practices are followed from the initial idea to different forms of performance and distribution. The data evaluation combines coding and mapping methods from Grounded Theory Methodology and Situational Analysis. This results in network maps in which both the temporality of creative practices and the material and spatial relations of human and technological actors are reconstructed. In addition, positional analyses document the power relations between the participants that structure the learning process of the field.
In the area of online informal learning, initial key research findings reveal a transformation of the learning subject through the specific technological affordances of TikTok and Instagram and the accompanying changes in the learning practices of the corresponding online communities. Learning is explicitly shaped by the material agency of online tools and features and the social practices entangled with these technologies. Thus, any human online community member can be invited to intervene directly in creative decisions that contribute to the further compositional and structural development of songs. At the same time, participants can provide each other with intimate insights into songwriting processes in progress and have the opportunity to perform together with strangers and idols. Online learning is characterized by an increase in social proximity, distribution of creative agency, and informational exchange between participants. While it seems obvious that traditional notions not only of learning but also of the learning subject cannot be maintained, the question arises of how exactly the observed informal learning practices, and the subject that emerges from the use of social media as online learning technologies, can be transferred into contexts of formal learning.
Keywords: informal learning, postdigitality, songwriting, actor-network theory, community of musical practice, social media, TikTok, Instagram, apps
Procedia PDF Downloads 126
552 Auto Surgical-Emissive Hand
Authors: Abhit Kumar
Abstract:
The world is full of master-slave telemanipulators, in which the doctor masters the console and the surgical arm performs the operation; that is, these robots are passive robots. In using these passive robots, we still require doctors to operate the consoles, so the concept of robotics is not yet fully utilized; the focus should therefore shift to active robots. The Auto Surgical-Emissive Hand (AS-EH) uses this concept of active robotics. It is an anthropomorphic hand that focuses on autonomous surgical, emissive, and scanning operations, enabled with three-way emission (a laser beam, icy steam at -5°C < ICY Steam < 5°C, and a thermal imaging camera, TIC) embedded in the palm of the hand and structured in the form of a three-way disc. The fingers of the AS-EH will have tactile, force, and pressure sensors rooted in them so that force, pressure, and physical contact with the external subject can be maintained. Our main focus, however, is on the concept of “emission.” The question arises of how these three unrelated methods will work together, merged in a single programmed hand: each of the three methods is utilized according to the needs of the external subject. The laser is emitted via a pin-sized outlet; the radiation is channelized via a thin channel that connects internally to the palm of the surgical hand and leads to the outlet. The laser emits radiation sufficient to cut open the skin for removal of metal scrap or other foreign material while the patient is under anesthesia, keeping the complexity of the operation very low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), provides a real-time feed of the surgery in the form of a heat image, giving us the chance to analyze the situation; the ATC also helps determine elevated body temperature while the operation proceeds. The thermal imaging camera is rooted internally in the AS-EH and is connected externally to real-time software to provide live feedback. The icy steam provides a cooling effect before and after the operation. The underlying principle is simple: if a finger remains in icy water for a long time, blood flow slows and the area becomes numb and isolated, so that even a pinch produces no sensation, because no nerve impulse is coordinated with the brain and the sensory receptors are not activated. Utilizing the same principle, icy steam at a temperature below 273 K can be emitted via a pin-sized hole onto the area of concern to frost the area before the operation; the steam can also be used to desensitize pain while the operation is in process. The mathematical calculations, algorithms, and programming of the working and movement of this hand will be installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations; hence, this robot will perform surgical processes of low complexity only.
Keywords: active robots, algorithm, emission, icy steam, TIC, laser
Procedia PDF Downloads 356
551 Evaluation of the Performance Measures of Two-Lane Roundabout and Turbo Roundabout with Varying Truck Percentages
Authors: Evangelos Kaisar, Anika Tabassum, Taraneh Ardalan, Majed Al-Ghandour
Abstract:
The economy of any country depends on its ability to accommodate the movement and delivery of goods. The demand for goods movement and services increases truck traffic on highways and inside cities. The livability of most cities is directly affected by the congestion and environmental impacts of trucks, which are the backbone of the urban freight system. Better operation of heavy vehicles on highways and arterials could improve the network’s efficiency and reliability. In many cases, roundabouts can respond better than at-grade intersections, enabling traffic operations with increased safety for both cars and heavy vehicles. The recently emerged concept of the turbo-roundabout is a viable alternative to the two-lane roundabout, aiming to improve traffic efficiency. The primary objective of this study is to evaluate the operation and performance level of an at-grade signalized intersection, a conventional two-lane roundabout, and a basic turbo-roundabout for freight movements. To analyze and evaluate the performance of the signalized intersections and the roundabouts, microsimulation models were developed in PTV VISSIM. The networks chosen for this study were used to experiment with and evaluate changes in the performance of vehicle movements under different geometric and flow scenarios. Several scenarios were examined to assess the impacts of various geometric designs on vehicle movements. The overall traffic efficiency depends on the geometric layout of the intersections and on the traffic congestion rate, hourly volume, frequency of heavy vehicles, type of road, and the ratio of major-street to side-street traffic. Traffic performance was determined by evaluating the delay time, number of stops, and queue length of each intersection for varying truck percentages.
The results indicate that turbo-roundabouts can replace signalized intersections and two-lane roundabouts only when traffic demand is low, even with high truck volume. More specifically, two-lane roundabouts are seen to have shorter queue lengths than signalized intersections and turbo-roundabouts. For instance, in the scenario where the volume is highest and the truck movement and left-turn movement are maximum, the signalized intersection has 3 times, and the turbo-roundabout 5 times, the queue length of a two-lane roundabout on major roads. Similarly, on minor roads, signalized intersections and turbo-roundabouts have 11 times longer queue lengths than two-lane roundabouts for the same scenario. Across all the developed scenarios, as traffic demand decreases, the queue lengths of turbo-roundabouts shorten, which shows that turbo-roundabouts perform well for low and medium traffic demand. Finally, this study provides recommendations on the conditions under which the different intersection types outperform one another.
Keywords: at-grade intersection, simulation, turbo-roundabout, two-lane roundabout
Procedia PDF Downloads 149
550 Hydrodynamic Characterisation of a Hydraulic Flume with Sheared Flow
Authors: Daniel Rowe, Christopher R. Vogel, Richard H. J. Willden
Abstract:
The University of Oxford’s recirculating water flume is a combined wave and current test tank with a 1 m depth, 1.1 m width, and 10 m long working section, and is capable of flow speeds up to 1 m/s. This study documents the hydrodynamic characteristics of the facility in preparation for experimental testing of horizontal axis tidal stream turbine models. The turbine to be tested has a rotor diameter of 0.6 m and is a modified version of one of two model-scale turbines tested in previous experimental campaigns. An Acoustic Doppler Velocimeter (ADV) was used to measure the flow at high temporal resolution at various locations throughout the flume, enabling the spatial uniformity and turbulence flow parameters to be investigated. The mean velocity profiles exhibited high levels of spatial uniformity at the design speed of the flume, 0.6 m/s, with variations in the three-dimensional velocity components on the order of ±1% at the 95% confidence level, along with a modest streamwise acceleration through the measurement domain, a target 5 m working section of the flume. A high degree of uniformity was also apparent for the turbulence intensity, with values ranging between 1-2% across the intended swept area of the turbine rotor. The integral scales of turbulence exhibited a far higher degree of variation throughout the water column, particularly in the streamwise and vertical scales. This behaviour is believed to be due to the high signal noise content leading to decorrelation in the sampling records. To achieve more realistic levels of vertical velocity shear in the flume, a simple procedure to practically generate target vertical shear profiles in open-channel flows is described. Here, the authors arranged a series of non-uniformly spaced parallel bars placed across the width of the flume and normal to the onset flow.
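The turbulence intensity quoted above is the ratio of the standard deviation of the streamwise velocity fluctuations to the mean velocity. A minimal sketch of how such a value might be computed from an ADV velocity record; the record below is synthetic (a Gaussian stand-in, not the facility's data), so only the calculation itself is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic ADV record: 0.6 m/s mean streamwise velocity with small fluctuations
u = 0.6 + rng.normal(0.0, 0.009, size=20_000)

u_mean = u.mean()
ti = u.std(ddof=1) / u_mean * 100  # turbulence intensity, percent
print(f"TI = {ti:.2f}%")
```

A real record would additionally require despiking and noise correction before the standard deviation is trusted, which is consistent with the signal-noise caveat in the abstract.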
By adjusting the resistance grading across the height of the working section, the downstream profiles could be modified accordingly, characterised by changes in the velocity profile power law exponent, 1/n. Considering the significant temporal variation in a tidal channel, the choice of the exponent denominator, n = 6 and n = 9, effectively provides an achievable range around the much-cited value of n = 7 observed at many tidal sites. The resulting flow profiles, which we intend to use in future turbine tests, have been characterised in detail. The results indicate non-uniform vertical shear across the survey area and reveal substantial corner flows, arising from the differential shear between the target vertical and cross-stream shear profiles throughout the measurement domain. In vertically sheared flow, the rotor-equivalent turbulence intensity ranges between 3.0-3.8% throughout the measurement domain for both bar arrangements, while the streamwise integral length scale grows from a characteristic dimension on the order of the bar width, similar to the flow downstream of a turbulence-generating grid. The experimental tests are well-defined and repeatable and serve as a reference for other researchers who wish to undertake similar investigations.Keywords: acoustic doppler Velocimeter, experimental hydrodynamics, open-channel flow, shear profiles, tidal stream turbines
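The 1/n power-law profile referred to above is commonly written u(z) = U_s (z/h)^(1/n), where h is the water depth and U_s the surface velocity. A hedged sketch comparing the n = 6, 7, and 9 profiles; U_s = 0.6 m/s is borrowed from the flume's design speed, and the pure power-law form is an idealisation, not the authors' measured fit:

```python
import numpy as np

def power_law_profile(z, h, u_surface, n):
    """1/n power-law velocity profile: u(z) = U_s * (z/h)**(1/n)."""
    return u_surface * (np.asarray(z) / h) ** (1.0 / n)

h = 1.0      # water depth, m (flume working depth)
u_s = 0.6    # assumed surface velocity, m/s
for n in (6, 7, 9):
    u_mid = power_law_profile(0.5 * h, h, u_s, n)
    print(f"n = {n}: u(0.5h) = {u_mid:.3f} m/s")
```

Larger n gives a flatter (more uniform) profile, which is why n = 6 and n = 9 bracket an achievable range of shear around the much-cited n = 7.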
Procedia PDF Downloads 86
549 Efficient Treatment of Azo Dye Wastewater with Simultaneous Energy Generation by Microbial Fuel Cell
Authors: Soumyadeep Bhaduri, Rahul Ghosh, Rahul Shukla, Manaswini Behera
Abstract:
The textile industry consumes a substantial amount of water throughout the processing and production of textile fabrics. This water eventually turns into wastewater, where its dye content makes it an immensely damaging nuisance. Wastewater streams contain between 2.0% and 50.0% of the total weight of dye used, depending on the dye class. The management of dye effluent in textile industries presents a formidable challenge to global sustainability. The current focus is on implementing wastewater treatment technologies that enable the recycling of wastewater, reduce energy usage, and offset carbon emissions. A microbial fuel cell (MFC) is a device that utilizes microorganisms as a biocatalyst to effectively treat wastewater while also producing electricity. The MFC harnesses the chemical energy present in wastewater by oxidizing organic compounds in the anodic chamber and reducing an electron acceptor in the cathodic chamber, thereby generating electricity. This research investigates the potential of MFCs to tackle the challenge of azo dye removal while simultaneously generating electricity. Although MFCs are well-established for wastewater treatment, their application in dye decolorization with concurrent electricity generation remains relatively unexplored. This study aims to address this gap by assessing the effectiveness of MFCs as a sustainable solution for treating wastewater containing azo dyes. By harnessing microorganisms as biocatalysts, MFCs offer a promising avenue for environmentally friendly dye effluent management. The performance of MFCs in treating azo dyes and generating electricity was evaluated by optimizing the Chemical Oxygen Demand (COD) and Hydraulic Retention Time (HRT) of the influent. COD and HRT values ranged from 1600 mg/L to 2400 mg/L and 5 to 9 days, respectively. Results showed that the maximum open circuit voltage (OCV) reached 648 mV at a COD of 2400 mg/L and an HRT of 5 days.
Additionally, maximum COD removal of 98% and maximum color removal of 98.91% were achieved at a COD of 1600 mg/L and an HRT of 9 days. Furthermore, the study observed a maximum power density of 19.95 W/m3 at a COD of 2400 mg/L and an HRT of 5 days. Electrochemical analysis, including linear sweep voltammetry (LSV), cyclic voltammetry (CV), and electrochemical impedance spectroscopy (EIS), was performed to determine the response current and internal resistance of the system. To optimize pH and dye concentration, pH values were varied from 4 to 10, and dye concentrations ranged from 25 mg/L to 175 mg/L. The highest voltage output of 704 mV was recorded at pH 7, while a dye concentration of 100 mg/L yielded the maximum output of 672 mV. This study demonstrates that MFCs offer an efficient and sustainable solution for treating azo dyes in textile industry wastewater, while concurrently generating electricity. These findings suggest the potential of MFCs to contribute to environmental remediation and sustainable development efforts on a global scale.
Keywords: textile wastewater treatment, microbial fuel cell, renewable energy, sustainable wastewater treatment
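The reported removal efficiencies and volumetric power densities follow from standard definitions. A minimal Python sketch; only the 1600 mg/L influent COD and the 98% removal figure come from the abstract, while the effluent COD, external resistance, and reactor volume are hypothetical:

```python
# Sketch of the two performance metrics; only the 1600 mg/L influent COD and
# the 98% removal are from the study, the rest are hypothetical values.

def cod_removal_pct(cod_in_mg_l, cod_out_mg_l):
    """COD removal efficiency as a percentage of influent COD."""
    return (cod_in_mg_l - cod_out_mg_l) / cod_in_mg_l * 100.0

def power_density_w_m3(voltage_v, r_ext_ohm, volume_m3):
    """Volumetric power density P = V^2 / (R_ext * reactor volume)."""
    return voltage_v ** 2 / (r_ext_ohm * volume_m3)

# 1600 mg/L influent reduced to a hypothetical 32 mg/L effluent:
print(cod_removal_pct(1600, 32))                 # -> 98.0
# Hypothetical 0.5 V across 100 ohm in a 0.25 L (2.5e-4 m^3) anode chamber:
print(power_density_w_m3(0.5, 100, 2.5e-4))      # -> 10.0 (W/m3)
```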
Procedia PDF Downloads 215
548 The Rise and Effects of Social Movement on Ethnic Relations in Malaysia: The Bersih Movement as a Case Study
Authors: Nur Rafeeda Daut
Abstract:
The significance of this paper is to provide insight into the role of social movements in building stronger ethnic relations in Malaysia. In particular, it focuses on how the BERSIH movement has been able to bring together the different ethnic groups in Malaysia to resist the present political administration, which is seen to manipulate the electoral process and suppress Malaysians' basic freedom of expression. Attention is given to how and why this group emerged and to its mobilisation strategies. Malaysia, a multi-ethnic and multi-religious society, gained its independence from the British in 1957. Like many other new nations, it faces the challenges of nation building and governance. From economic issues to racial and religious tension, Malaysia is experiencing high levels of corruption and income disparity among the different ethnic groups. The political parties in Malaysia are also divided along ethnic lines. BERSIH, which translates as ‘clean’, is a movement which seeks to reform the current electoral system in Malaysia to ensure equality, justice, and free and fair elections. It was originally formed in 2007 as a joint committee that comprised leaders from political parties, civil society groups and NGOs. In April 2010, the coalition developed into an entirely civil society movement unaffiliated with any political party. BERSIH claimed that the electoral roll in Malaysia had been marred by fraud and other irregularities. In 2015, the BERSIH movement organised its biggest rally in Malaysia, alongside 38 other rallies held internationally. Supporters of BERSIH who participated in the demonstration came from all the different ethnic groups in Malaysia. In this paper, two social movement theories are used, resource mobilization theory and political opportunity structure, to explain the emergence and mobilization of the BERSIH movement in Malaysia. 
Based on these two theories, corruption, which is believed to have contributed to the income disparity among Malaysians, has driven the development of this movement. The rise of re-islamisation values propagated by certain groups in Malaysia and the shift in political leadership have also created political opportunities for this movement to emerge. In line with political opportunity structure theory, the BERSIH movement will continue to create more opportunities for the empowerment of civil society and the unity of ethnic relations in Malaysia. Comparison is made of the degree of ethnic unity in the country before and after BERSIH was formed. This includes analysing the level of re-islamisation values and the level of corruption in relation to economic income under the premiership of the former Prime Minister Mahathir and the present Prime Minister Najib Razak. The country has never seen an uprising like BERSIH, where ethnic groups which over the years have been divided by ethnic-based political parties and economic disparity joined together with a common goal of equality and fair elections. As such, the BERSIH movement is a unique case that illustrates the changing political landscape, ethnic relations and civil society in Malaysia.
Keywords: ethnic relations, Malaysia, political opportunity structure, resource mobilization theory and social movement
Procedia PDF Downloads 348
547 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing and pattern of residuals. 
Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure yielded good fat segmentation, making the visual approach to quantification of the different fat fractions in dry-cured ham slices simple, accurate and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
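The quantification step described above reduces to counting fat pixels and fitting a line against the chemical reference values. A minimal sketch, assuming segmentation has already produced a binary fat mask; the toy mask and fat percentages are illustrative, not the study's data:

```python
# Sketch of the quantification step; assumes a binary fat mask is already
# available (1 = fat pixel). The mask and percentages below are toy values.

def fat_fraction_pct(mask):
    """Fat area as a percentage of the total slice area."""
    pixels = [p for row in mask for p in row]
    return 100.0 * sum(pixels) / len(pixels)

def ols_fit(x, y):
    """Slope and intercept of a simple linear regression y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

mask = [[1, 0, 0, 0],
        [1, 1, 0, 0]]                  # 3 fat pixels out of 8
image_pct = [10.0, 20.0, 30.0]         # image-derived fat %, toy
soxhlet_pct = [12.0, 21.0, 32.0]       # chemically determined fat %, toy

print(fat_fraction_pct(mask))          # -> 37.5
slope, intercept = ols_fit(image_pct, soxhlet_pct)
```

The fitted line then predicts chemical fat content from new image-derived fractions.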
Procedia PDF Downloads 176
546 Trends in Conservation and Inheritance of Musical Culture of Ethnic Groups: A Case Study of the Akha Music in Chiang Rai Province, Thailand
Authors: Nutthan Inkhong, Sutthiphong Ruangchante
Abstract:
Chiang Rai province is located at the northern border of Thailand. Most of its geography is northern continental highlands, and its population comprises many groups, including Thai people, immigrants and ethnic groups such as the Akha, Lahu, Lisu, Yao, etc. Most of these ethnic groups migrated from neighbouring countries such as Myanmar, Laos and China and settled in the mountains. Each ethnic group has its unique traditions, culture, and ways of life, including the musical culture that the ancestors of each group brought with them. At present, the Akha have the largest population in the region and still live together in numerous villages in many districts. Thus, Akha musical culture still appears regularly in the community traditions and cultural events of Chiang Rai province. This article presents the situation of Akha musical culture in the present and predictions for its future. The study method involves the analysis of musical information and the related social contexts, which were collected through fieldwork following ethnomusicological methodology: in-depth interviews, observations, audio and visual recordings, and related documents. The results found that the important persons related to Akha musical culture include (1) a musical instrument maker (living in Mae Chan district) who produces various Akha musical instruments, including gourd mouth organs, Akha drums, two-way flutes, three-hole flutes, Jew’s harps (the sound of teenage love), buffalo horns (the sound symbol of hunting) and bird call instruments (the imitation of bird sounds), (2) a folk philosopher (living in Mae Pha Luang district) who can teach music to the new generation of Akha people as well as lecture on and demonstrate music to academics and tourists, and (3) a community leader (living in Mae Chan district) who conserves Akha performances, singing and music through various activities of the students in an informal school. 
Because of changes to the social contexts and ways of life of the Akha people, such as the educational system, religion, social media, etc., including the popularity of both Thai and international popular music among the new generation of Akha people, Akha musical culture may change or fade away in the future. Therefore, the conservation and inheritance of Akha music is an issue that should be addressed quickly. This primary study leads to the next step of the ethnomusicological work and plays a part in preventing or reducing the threats to the survival of Akha musical culture by recording Akha music in all of its dimensions, such as producing musical instruments, playing musical instruments, analysis of tuning systems, recording Akha music as notation using symbols, researching related social contexts, etc., and by transcribing this information to create lessons that can be returned to the Akha community.
Keywords: Akha music, Chiang Rai, ethnic music in Thailand, ethnomusicology
Procedia PDF Downloads 161
545 MEIOSIS: Museum Specimens Shed Light In Biodiversity Shrinkage
Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki
Abstract:
Body size is crucial to ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variations in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently mentioned as the third ‘universal’ response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule, which have been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationship between the body size of species, species’ traits, environmental factors and their response to climate change. We combined a digitized collection of butterflies from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from Goulandris Natural History Museum in Greece to analyze trends in time. For a total of 23868 images, the length of the right forewing was measured using ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to have a strong correlation with wing surface area and has been utilized in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets. 
A second dataset was generated using a custom computer vision tool implemented for the automated morphological measurement of samples in the digitized Zürich collection. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31922 samples was used in the analysis. Setting time as a smooth term, species identity as a random factor and the length of the right wing (as a proxy for body size) as the response variable, we ran a global model covering a maximum period of 170 years (1840–2010). We also constructed individual models for each family (Pieridae, Lycaenidae, Hesperiidae, Nymphalidae, Papilionidae). All models confirmed our initial hypothesis, showing a decreasing trend in wing length over the years. We expect that this first output can serve as baseline data for the next challenge, i.e., to identify the ecological traits that influence species' temperature-size responses, enabling us to predict the direction and intensity of a species' reaction to rising temperatures more accurately.
Keywords: butterflies, shrinking body size, museum specimens, climate change
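The core trend test amounts to estimating the sign of the time effect on wing length. As a rough sketch, an ordinary least-squares slope on synthetic data; the study itself used smooth time effects and species-level random factors, which this simplification omits:

```python
# Sketch of the trend test: OLS slope of wing length on collection year.
# The study used smooth time effects and species as a random factor;
# the four data points below are synthetic.

def trend_slope(years, lengths_mm):
    n = len(years)
    my, ml = sum(years) / n, sum(lengths_mm) / n
    return sum((y - my) * (l - ml) for y, l in zip(years, lengths_mm)) / \
           sum((y - my) ** 2 for y in years)

years = [1840, 1900, 1950, 2010]
wing_mm = [22.0, 21.5, 21.2, 20.6]     # synthetic, declining
slope = trend_slope(years, wing_mm)
print(round(slope * 100, 2))           # mm change per century (negative)
```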
Procedia PDF Downloads 10
544 Emerging Issues for Global Impact of Foreign Institutional Investors (FII) on Indian Economy
Authors: Kamlesh Shashikant Dave
Abstract:
The global financial crisis was rooted in the sub-prime crisis in the U.S.A. During the boom years, mortgage brokers, attracted by big commissions, encouraged buyers with poor credit to accept housing mortgages with little or no down payment and without credit checks. A combination of low interest rates and a large inflow of foreign funds during the boom years helped the banks create easy credit conditions for many years. Banks lent money on the assumption that housing prices would continue to rise. The real estate bubble also encouraged demand for houses as financial assets. Banks and financial institutions later repackaged these debts with other high-risk debts and sold them to worldwide investors, creating financial instruments called collateralized debt obligations (CDOs). With the rise in interest rates, mortgage payments rose and defaults among the subprime category of borrowers increased accordingly. Through the securitization of mortgage payments, a recession developed in the housing sector and was consequently transmitted to the entire US economy and the rest of the world. The financial credit crisis moved the US and the global economy into recession. The Indian economy was also affected by the spillover effects of the global financial crisis. Strong saving habits among people, strong fundamentals, and a conservative regulatory regime saved the Indian economy from going out of gear, though significant parts of the economy slowed down. Industrial activity, particularly in the manufacturing and infrastructure sectors, decelerated. The service sector, too, slowed in the construction, transport, trade, communication, hotels and restaurants sub-sectors. The financial crisis also had an adverse impact on the IT sector. Exports declined in absolute terms in October. Higher input costs and dampened demand dented corporate margins, while the uncertainty surrounding the crisis affected business confidence. 
To summarize, reckless subprime lending, the loose monetary policy of the US, the expansion of financial derivatives beyond acceptable norms and the greed of Wall Street led to this exceptional global financial and economic crisis. Thus, the global credit crisis of 2008 highlights the need to redesign both the global and domestic financial regulatory systems, not only to properly address systemic risk but also to support their proper functioning (i.e., financial stability). Such a design requires: 1) well-managed financial institutions with effective corporate governance and risk management systems; 2) disclosure requirements sufficient to support market discipline; 3) proper mechanisms for resolving problem institutions; and 4) mechanisms to protect financial services consumers in the event of financial institution failure.
Keywords: FIIs, BSE, sensex, global impact
Procedia PDF Downloads 441
543 Theoretical Study of the Photophysical Properties and Potential Use of Pseudo-Hemi-Indigo Derivatives as Molecular Logic Gates
Authors: Christina Eleftheria Tzeliou, Demeter Tzeli
Abstract:
Introduction: Molecular Logic Gates (MLGs) are molecular machines that can perform complex work, such as solving logic operations. Molecular switches, which are molecules that can undergo chemical changes, are examples of successful types of MLGs. Recently, Quintana-Romero and Ariza-Castolo experimentally studied six stable pseudo-hemi-indigo-derived MLGs capable of solving complex logic operations. The MLG design relies on a molecular switch that undergoes Z and E isomerism; thus, the molecular switch's axis has to be a double bond. The hemi-indigo structure was preferred for the assembly of molecular switches due to its interaction with visible light. Z and E pseudo-hemi-indigo isomers can also be utilized for selective isomerization as they have distinct absorption spectra. Methodology: Here, the photophysical properties of pseudo-hemi-indigo derivatives are examined, i.e., derivatives of molecule 1 with anthracene, naphthalene, phenanthrene, pyrene, and pyrrole. Preliminary trial calculations guided the choice of the level of theory described below. The structures under study were optimized in both cis and trans conformations at the PBE0/6-31G(d,p) level of theory. The absorption spectra of the structures were calculated at PBE0/DEF2TZVP. In all cases, the absorption spectra of the studied systems were calculated including up to 50 singlet- and triplet-spin excited electronic states. Transition states (cis → cis, cis → trans, and trans → trans) were obtained where possible, with PBE0/6-31G(d,p) for the optimization of the transition states and PBE0/DEF2TZVP for the respective absorption spectra. Emission spectra were obtained for the first singlet state of each molecule in both cis and trans conformations at PBE0/DEF2TZVP as well. All calculations were performed in chloroform solvent, which was included via its dielectric constant using the polarizable continuum model. 
Findings: Shifts of up to 25 nm are observed in the absorption spectra due to cis-trans isomerization, while the transition state is shifted by up to about 150 nm. The electron density distribution is also examined; charge transfer and electron transfer phenomena are observed for the three excitations of interest, i.e., H-1 → L, H → L and H → L+1. Emission spectra calculations were also carried out at PBE0/DEF2TZVP for the complete investigation of these molecules. Using protonation as input, selected molecules act as MLGs. Conclusion: The theoretical data so far indicate that both cis-trans isomerization and cis-cis and trans-trans conformer isomerization affect the UV-visible absorption and emission spectra. Specifically, shifts of up to 30 nm are observed, while the transition state is shifted by up to about 150 nm in cis-cis isomerization. The computational data obtained are in agreement with available experimental data, which predicted that the pyrrole derivative is an MLG operating at 445 nm and 400 nm using protonation as input, while the anthracene derivative is an MLG that operates at 445 nm using protonation as input. Finally, it was found that selected molecules are candidates for MLGs using protonation and light as inputs. These MLGs could be used as chemical sensors or as particular intracellular indicators, among several other applications. Acknowledgements: The author acknowledges the Hellenic Foundation for Research and Innovation for the financial support of this project (Fellowship Number: 21006).
Keywords: absorption spectra, DFT calculations, isomerization, molecular logic gates
Procedia PDF Downloads 21
542 A Cross Cultural Study of Jewish and Arab Listeners: Perception of Harmonic Sequences
Authors: Roni Granot
Abstract:
Musical intervals are the building blocks of melody and harmony. Intervals differ in terms of their size, direction, or quality as consonances or dissonances. In Western music, perceptual dissonance is mostly associated with the sensation of beats or periodicity, whereas cognitive dissonance is associated with rules of harmony and voice leading. These two perceptions can be studied separately in musical cultures whose music is melodic, with little or no harmonic structure. In the Arab musical system, there are a number of different quarter-tone intervals creating various combinations of consonant and dissonant intervals. While traditional Arab music includes only melody, today’s Arab pop music includes harmonization of songs, often using typical Western harmonic sequences. Therefore, the Arab population in Israel presents an interesting case which enables us to examine the distinction between perceptual and cognitive dissonance. In the current study, we compared the responses of 34 Jewish Western listeners and 56 Arab listeners to two types of stimuli and their relationships: harmonic sequences and isolated harmonic intervals (dyads). Harmonic sequences were presented in synthesized piano tones and represented five levels of harmonic prototypicality (tonic ending; tonic ending with half-flattened third; deceptive cadence; half cadence; and dissonant unrelated ending) and were rated on 5-point scales of closure and surprise. Here we report only findings related to the harmonic sequences. 
A repeated-measures ANOVA with one within-subjects factor with five levels (Type of sequence) and one between-subjects factor (Musical background) indicates a main effect of Type of sequence for surprise ratings, F(4, 85) = 51, p < .001, and for closure ratings, F(4, 78) = 9.54, p < .001, no main effect of Background on either surprise or closure ratings, and a marginally significant Type X Background interaction for surprise ratings, F(4, 352) = 6.05, p = .069, and closure ratings, F(4, 324) = 3.89, p < .01. Planned comparisons show that the interaction of Type of sequence X Background centers on the surprise and closure ratings of the regular versus the half-flattened-third tonic and the deceptive versus the half cadence. The half-flattened-third tonic is rated as less surprising and as demanding less continuation than the regular tonic by the Arab listeners as compared to the Western listeners. In addition, the half cadence is rated as more surprising but demanding less continuation than the deceptive cadence by the Arab listeners as compared to the Western listeners. Together, our results suggest that despite the vast exposure of Arab listeners to Western harmony, their sensitivity to harmonic rules seems to be partial, with a preference for oriental sonorities such as the half-flattened third. In addition, the percept of directionality, which demands sensitivity to the level on which closure is obtained and which is strongly entrenched in Western harmony, may not be fully integrated into the Arab listeners’ mental harmonic scheme. Results will be discussed in terms of broad differences between Western and Eastern aesthetic ideals.
Keywords: harmony, cross cultural, Arab music, closure
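The reported F ratios compare between-condition variance with residual variance. A simplified sketch of a plain one-way F ratio on invented ratings; the study's actual design was a mixed repeated-measures ANOVA computed in a statistics package, which this does not reproduce:

```python
# Sketch of the F ratio behind the reported ANOVAs, reduced to a plain
# one-way between-groups design; the actual study used a mixed
# repeated-measures ANOVA, and the ratings here are invented.

def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented surprise ratings (1-5 scale) for two sequence types:
tonic_ending = [1.0, 2.0, 1.5, 1.5]
dissonant_ending = [4.0, 5.0, 4.5, 4.5]
print(one_way_anova_f([tonic_ending, dissonant_ending]))   # -> 108.0
```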
Procedia PDF Downloads 275
541 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossroads of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on national and global scales. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary co-variates” that come from other available spatial products. The model is then generalized over grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial co-variates, national and continental DSM initiatives are continuously increasing. 
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTFs) in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promises to provide tools to improve and monitor soil quality in individual countries, in the EU, and at the global level.
Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
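The DSM workflow described above, calibrating a model on observed soil points plus co-variates and generalizing it over a grid, can be caricatured with a 1-nearest-neighbour predictor standing in for the ML models the review discusses; all coordinates and values are invented:

```python
# Sketch of the core DSM idea: predict a soil property at unsampled grid
# cells from observed points plus co-variates. A 1-nearest-neighbour rule
# stands in for the ML models discussed; all values are invented.

def predict_1nn(obs, grid):
    """obs: list of (covariate_vector, value); grid: list of covariate vectors."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [min(obs, key=lambda o: dist2(o[0], cell))[1] for cell in grid]

# Co-variates: (elevation_m, mean_annual_temp_C) at observed soil pits,
# paired with invented soil organic carbon values (%):
observations = [((120.0, 9.5), 2.1),
                ((450.0, 6.0), 4.8),
                ((300.0, 7.5), 3.2)]
grid_cells = [(130.0, 9.3), (440.0, 6.1)]
print(predict_1nn(observations, grid_cells))   # -> [2.1, 4.8]
```

A real DSM pipeline would replace the nearest-neighbour rule with a calibrated model (e.g. tree ensembles) and validate predictions on held-out points.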
Procedia PDF Downloads 184
540 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding the influence of information flow and knowledge sharing within the network on patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. 
The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
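The network construction described above can be sketched directly: patents become nodes, and an edge links any two patents whose backward citations share at least one scientific paper. A minimal sketch with invented citation lists:

```python
# Sketch of the coupling-network construction: patents are nodes, and an
# edge links two patents that cite at least one common scientific paper
# (SNPR). The citation lists below are invented.
from itertools import combinations

def coupling_degrees(cited_by_patent):
    """Degree (number of coupling edges) for each patent node."""
    degree = {p: 0 for p in cited_by_patent}
    for p1, p2 in combinations(cited_by_patent, 2):
        if cited_by_patent[p1] & cited_by_patent[p2]:
            degree[p1] += 1
            degree[p2] += 1
    return degree

patents = {"US-A": {"paper1", "paper2"},
           "US-B": {"paper2", "paper3"},
           "US-C": {"paper4"}}
print(coupling_degrees(patents))   # US-A and US-B coupled; US-C isolated
```

In the study's regression, such centrality scores (and their squared terms, to capture the inverted U) would then enter a negative binomial model of citation counts.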
Procedia PDF Downloads 54
539 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach
Authors: Dominik Kowitzke
Abstract:
Housing construction has direct and indirect environmental impacts, especially caused by soil sealing and the gray energy consumed in construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions such as metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than the construction of new housing units. In this context, the phenomenon of so-called over-housing of empty-nest households seems worth investigating for its potential to free living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs when no space adjustment takes place at the household lifecycle stage at which children move out from home, leaving the space formerly created for the offspring under-utilized. Although in some cases the housing space consumption might actually meet households’ equilibrium preferences, space-wise adjustments to the living situation frequently do not take place due to transaction or information costs, habit formation, or government interventions that increase the costs of relocation, such as real estate transfer taxes, or tenant protection laws that keep sitting rents below the market price. Moreover, many detached houses are not designed in a way that would allow freed-up space to be rented out in the long term. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty-nest households and that of a comparison group of households which never had children. 
The approach used to estimate the average difference in living space is a linear regression model regressing the response variable living space on a binary categorical variable distinguishing the two groups of household types, plus further controls. This difference is assumed to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, it is found that households that move after children leave home, despite market frictions impairing the relocation, tend to decrease their living space. In the next step, only for areas in Germany with tight housing markets and high construction activity, the total under-utilized space in empty nests is estimated. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that in a perfect market, with empty nest households consuming their equilibrium demand quantity of housing space, dwelling constructions in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalence units related to the average construction of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing from public data, and further estimates the external effect of over-housing in environmental terms.
Keywords: empty nests, environment, Germany, households, over-housing
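Without further controls, the OLS coefficient on a 0/1 group dummy reduces to the difference in group means, which is then scaled up by the population count of empty nests. A minimal sketch of that logic (the living-space figures and the population count below are invented for illustration, not the study's survey data):

```python
# Hypothetical living-space observations (m^2); not the study's survey data.
empty_nest = [120, 110, 130, 125]   # households whose children moved out
never_kids = [85, 95, 90, 100]      # comparison group: never had children

def ols_dummy_coefficient(treated, control):
    """OLS slope of y on a 0/1 group dummy equals the difference in means."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

under_utilized_per_household = ols_dummy_coefficient(empty_nest, never_kids)
print(under_utilized_per_household)   # 28.75 m^2 of estimated excess space

# Extrapolation step described in the abstract (count is hypothetical):
n_empty_nests = 1_000_000
total_excess_m2 = under_utilized_per_household * n_empty_nests
print(total_excess_m2)
```

With the additional controls mentioned in the abstract, the coefficient would be an adjusted rather than a raw mean difference, but the extrapolation step is the same.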
Procedia PDF Downloads 171
538 A Comparative Study of the Impact of the Total Fertility Rate (TFR) on Trends in the Second Demographic Transition in Rwanda
Authors: Etienne Gatera
Abstract:
Many studies have been conducted on the second demographic transition (SDT). Most of them focus on developed countries because of influencing factors such as education, health, female labor force participation, industrialization, urbanization and migration. This thesis project, however, aims to assess the impact of the total fertility rate (TFR) on SDT trends in Rwanda, focusing mainly on the period after the 1994 genocide. Rwanda is located in East Africa and has approximately 13 million inhabitants. After the 1994 genocide against the Tutsi, population growth surged, reaching 6.17 children per woman in 1995. The TFR declined to 4.2 in 2014-2015 and further to 4.1 in 2019-2020, with 3.4 children per woman in urban areas and 4.3 in rural areas, according to the National Institute of Statistics of Rwanda. Rwanda's population is expected to continue to grow for the rest of the century and to reach 33.35 million people in 2099, with 2.1 children per woman in 2050. The decline in the TFR in Rwanda began with the introduction of family planning practices, whose uptake reached 47.5% in 2019. Related trends include childbearing of three children for rural women compared to two in the city; an increase in divorce and separation associated with the behavior called "Kuza n'ijoro" or "coming at night", similar to cohabitation in developed countries; and a decline in remarriage driven by single mothers who prefer to raise their children rather than remarry. The study used probability sampling with a stratified random sampling method and a survey questionnaire administered to 1067 respondents in 5 districts (3 in rural areas and 2 in urban areas), targeting women aged 15-49. The study demonstrated that the age of marriage in rural areas is two years higher than in urban areas.
Divorce is more common in urban areas, at 6.2%, compared with 5.2% in rural areas. Separation, however, is more common in rural areas than in urban areas, where the rate is 3% lower, due to the practice called "Kuza n'ijoro" or "coming at night", similar to cohabitation in developed countries. The study revealed that more than 85% of divorced people prefer to remain single, which confirms the low remarriage rate. Childbearing has started to decrease, especially among young singles in urban areas, due to the economic situation, with national statistics showing that youth unemployment is still as high as 16%. The study therefore concluded by confirming the hypothesis based on the results of TFR indicators such as marriage, remarriage, divorce, separation, Kuza n'ijoro, childbearing and abortion. The study comprises an introduction and background, a review of the literature, a description of the data and methodology, an analysis of the data with discussion of results, and a conclusion.
Keywords: Kuza n'ijoro, Rwanda, second demographic transition (SDT), total fertility rate (TFR)
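Stratified random sampling, as used for the 1067-respondent survey, allocates the sample across strata (here, districts) in proportion to their populations before drawing randomly within each stratum. A sketch of proportional allocation with hypothetical district sizes (the abstract reports the total sample and the number of districts, but not the per-district populations or allocation used):

```python
# Hypothetical populations of women aged 15-49 in the 5 surveyed districts;
# invented for illustration, not figures from the study.
districts = {"Rural1": 50_000, "Rural2": 40_000, "Rural3": 30_000,
             "Urban1": 60_000, "Urban2": 20_000}
total_sample = 1067  # survey size reported in the abstract

def proportional_allocation(strata, n):
    """Allocate n sample units proportionally; leftovers go to largest strata."""
    total = sum(strata.values())
    alloc = {k: int(n * v / total) for k, v in strata.items()}
    shortfall = n - sum(alloc.values())          # lost to rounding down
    for k in sorted(strata, key=strata.get, reverse=True)[:shortfall]:
        alloc[k] += 1
    return alloc

alloc = proportional_allocation(districts, total_sample)
print(alloc)
print(sum(alloc.values()))   # 1067: the full sample is allocated
```

Within each district, the allocated number of respondents would then be drawn by simple random sampling from the sampling frame of women aged 15-49.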
Procedia PDF Downloads 170
537 Collaborative Program Student Community Service as a New Approach for Development in Rural Area in Case of Western Java
Authors: Brian Yulianto, Syachrial, Saeful Aziz, Anggita Clara Shinta
Abstract:
Indonesia, with a population of about two hundred and fifty million people, possesses an outstanding wealth of human resources. Hundreds of millions of people are scattered across communities in various regions of Indonesia, with distinct economic and social characteristics and unique cultures. Broadly speaking, communities in Indonesia are divided into two classes: urban communities and rural communities. Rural communities are characterized by low potential and poor management of natural and human resources, limited access to development, a lack of social and economic infrastructure, and scattered, isolated populations. West Java is one of the provinces with the largest population in Indonesia. Based on data from the Central Bureau of Statistics, in 2015 the population of West Java reached 46.7096 million people spread over 18 districts and 9 cities. The large differences in geographical and social conditions among communities in West Java, especially between the south and the north, cause a high disparity, which is closely related to the flow of investment to develop the area. Poverty and underdevelopment are classic problems occurring on a massive scale in the region as effects of inequity in development. South Cianjur and southern Tasikmalaya have become portraits of areas where the existing potential has not been capable of prospering society. The Tri Dharma of higher education defines colleges not only as pioneers of education and research to improve the quality of human resources, but also demands that they pioneer development through the concept of public service. Bandung Institute of Technology, as one such institution of higher education, implements a community service system through the collaborative community work program "one university community" as one approach to developing villages.
The program is based on community service, where students are not only required to take part in community service but also to develop a community development strategy that is comprehensive and integrated, in cooperation with related government and non-government agencies, as a concrete effort to align the potential, positions and roles of the various parties. Areas of western Java in particular have high poverty rates and disparity. On the other hand, there are three fundamental pillars in the development of rural communities: economic development, community development, and integrated infrastructure development. These pillars require the commitment of all components of the community, including students and colleges, to succeed. A college community program is one such approach to the development of rural communities. ITB is committed to implementing student community service as a community-college program that integrates all elements of the community, called Kuliah Kerja Nyata-Thematic.
Keywords: development in rural area, collaborative, student community service, Kuliah Kerja Nyata-Thematic ITB
Procedia PDF Downloads 222
536 Neuropsychological Aspects in Adolescents Victims of Sexual Violence with Post-Traumatic Stress Disorder
Authors: Fernanda Mary R. G. Da Silva, Adriana C. F. Mozzambani, Marcelo F. Mello
Abstract:
Introduction: Sexual assault against children and adolescents is a public health problem with serious consequences for their quality of life, especially for those who develop post-traumatic stress disorder (PTSD). The broad literature in this research area points to greater losses in verbal learning, explicit memory, speed of information processing, attention and executive functioning in PTSD. Objective: To compare the neuropsychological functions of adolescents from 14 to 17 years of age who are victims of sexual violence with PTSD with those of healthy controls. Methodology: Application of a neuropsychological battery composed of the following subtests: WASI vocabulary and matrix reasoning; Digit subtests (WISC-IV); the RAVLT verbal auditory learning test; the Spatial Span subtest of the WMS-III scale; the abbreviated version of the Wisconsin test; the D2 concentrated attention test; the prospective memory subtest of the NEUPSILIN scale; the five-digit test (FDT); and the Stroop test (Trenerry version), in adolescents with a history of sexual violence in the previous six months referred to Prove (the Violence Care and Research Program of the Federal University of São Paulo) for further treatment. Results: The results showed a deficit in the word encoding process in the RAVLT test, with impairment in the A3 (p = 0.004) and A4 (p = 0.016) measures, which compromises the verbal learning process (p = 0.010) and verbal recognition memory (p = 0.012), suggesting worse performance in the acquisition of verbal information that depends on the support of the attentional system. Worse performance was also found on list B (p = 0.047), with a lower priming effect (p = 0.026), that is, a lower rate of evocation of the initial words presented, and less perseveration (p = 0.002), i.e., repeated words. There therefore seems to be a failure in the creation of strategies that support the mnemonic process of retaining the verbal information necessary for learning.
Sustained attention was found to be impaired, with greater loss of set in the Wisconsin test (p = 0.023), a lower rate of correct responses in stage C of the Stroop test (p = 0.023) and, consequently, a higher rate of erroneous responses in stage C of the Stroop test (p = 0.023), as well as more type II errors in the D2 test (p = 0.008). A higher incidence of total errors was observed in the reading stage of the FDT test (p = 0.002), which suggests fatigue in the execution of the task. Performance in executive functions is compromised with regard to cognitive flexibility, as indicated by a higher rate of total errors in the alternating step of the FDT test (p = 0.009) and a greater number of perseverative errors in the Wisconsin test (p = 0.004). Conclusion: The data from this study suggest that sexual violence and PTSD cause significant impairment in the neuropsychological functions of adolescents, evidencing a risk to quality of life at stages that are fundamental for the development of learning and cognition.
Keywords: adolescents, neuropsychological functions, PTSD, sexual violence
Procedia PDF Downloads 135
535 Antimicrobial and Anti-Biofilm Activity of Non-Thermal Plasma
Authors: Jan Masak, Eva Kvasnickova, Vladimir Scholtz, Olga Matatkova, Marketa Valkova, Alena Cejkova
Abstract:
Microbial colonization of medical instruments, catheters, implants, etc. is a serious problem in the spread of nosocomial infections. Biofilms exhibit enormous resistance to the environment: the resistance of biofilm populations to antibiotics or biocides often increases by two to three orders of magnitude in comparison with suspension populations. Of interest are substances or physical processes that primarily cause the destruction of the biofilm, so that the released cells can be killed by existing antibiotics. In addition, agents that do not have a strong lethal effect do not exert such significant selection pressure toward further enhanced resistance. Non-thermal plasma (NTP) is defined as a neutral, ionized gas composed of particles (photons, electrons, positive and negative ions, free radicals and excited or non-excited molecules) in permanent interaction. In this work, the effect of NTP generated by a cometary corona with a metallic grid on the formation and stability of biofilm and on the metabolic activity of cells in biofilm was studied. NTP was applied to biofilm populations of Staphylococcus epidermidis DBM 3179, Pseudomonas aeruginosa DBM 3081, DBM 3777, ATCC 15442 and ATCC 10145, Escherichia coli DBM 3125 and Candida albicans DBM 2164 grown on solid media on Petri dishes and on the surface of the titanium alloy (Ti6Al4V) used for the production of joint replacements. Erythromycin (for S. epidermidis), polymyxin B (for E. coli and P. aeruginosa), amphotericin B (for C. albicans) and ceftazidime (for P. aeruginosa) were used to study the combined effect of NTP and antibiotics. Biofilms were quantified by the crystal violet assay. Metabolic activity of the cells in biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide) colorimetric test, based on the reduction of MTT to formazan by the dehydrogenase system of living cells.
Fluorescence microscopy was applied to visualize the biofilm on the surface of the titanium alloy; SYTO 13 was used as a fluorescence probe to stain cells in the biofilm. It was shown that biofilm populations of all the studied microorganisms are very sensitive to the type of NTP used. The inhibition zone of biofilm recorded after 60 minutes of exposure to NTP exceeded 20 cm², except for P. aeruginosa DBM 3777 and ATCC 10145, where it was about 9 cm². The metabolic activity of cells in biofilm also differed among the individual microbial strains. High sensitivity to NTP was observed in S. epidermidis, in which the metabolic activity of the biofilm decreased to 15% after 30 minutes of NTP exposure and to 1% after 60 minutes. Conversely, the metabolic activity of the cells of C. albicans decreased only to 53% after 30 minutes of NTP exposure; nevertheless, this result can be considered very good. Suitable combinations of NTP exposure time and antibiotic concentration achieved, in most cases, a remarkable synergic effect on the reduction of the metabolic activity of the cells of the biofilm. For example, in the case of P. aeruginosa DBM 3777, a combination of 30 minutes of NTP with 1 mg/l of ceftazidime resulted in a decrease in metabolic activity to below 4%.
Keywords: anti-biofilm activity, antibiotic, non-thermal plasma, opportunistic pathogens
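MTT readouts such as those above are conventionally expressed as metabolic activity relative to an untreated control after blank correction. A minimal sketch of that calculation (the absorbance values below are invented to reproduce the 15% figure; the abstract reports only the resulting percentages, not raw absorbances):

```python
def relative_metabolic_activity(a_treated, a_control, a_blank=0.0):
    """Metabolic activity (%) from formazan absorbance, blank-corrected."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Hypothetical absorbances chosen to match the S. epidermidis result
# (activity reduced to 15% after 30 min of NTP); not measured values.
control_od, treated_od, blank_od = 1.05, 0.20, 0.05
print(relative_metabolic_activity(treated_od, control_od, blank_od))  # ~15.0
```

The same formula, applied to the 60-minute S. epidermidis and 30-minute C. albicans readings, yields the 1% and 53% values quoted in the abstract.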
Procedia PDF Downloads 184
534 Climate Change Law and Transnational Corporations
Authors: Manuel Jose Oyson
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report that the entire world must “both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts.” The IPCC observed “with high confidence” a more rapid rise in total anthropogenic greenhouse gas (GHG) emissions from 2000 to 2010 than in the past three decades, which “were the highest in human history”; if left unchecked, this will entail a continuing process of global warming and can alter the climate system. Current efforts to respond to the threat of global warming, however, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states and fail to involve transnational corporations (TNCs), which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with the acknowledgment by contemporary international law that there is an international role for other international persons, including TNCs, and departs from the traditional “state-centric” response to climate change. Shifting the focus on GHG emissions away from states recognises that the activities of TNCs “are not bound by national borders” and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally-binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been made legally responsible under international law for violations of human rights, exploitation of workers and environmental damage, but not for climate change damage. Imposing on TNCs a legally-binding obligation to reduce their GHG emissions, or a legal liability for climate change damage, is arguably formidable and unlikely in the absence of a recognisable source of obligation in international law or municipal law.
Instead, recourse to “soft law” and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help in addressing climate change. Various studies have noted the positive effects of voluntary approaches. TNCs have also in recent decades voluntarily committed to “soft law” international agreements. This development reflects a growing recognition among corporations in general, and TNCs in particular, of their corporate social responsibility (CSR). While CSR used to be the domain of “small, offbeat companies”, it has now become part of mainstream organizational practice. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and helping address climate change as part of their CSR. One, as a serious “global commons problem”, climate change requires international cooperation from multiple actors, including TNCs. Two, TNCs are not innocent bystanders but are responsible for a large part of GHG emissions across their vast global operations. Three, TNCs have the capability to help solve the problem of climate change. Assuming arguendo that TNCs did not strongly contribute to the problem of climate change, society would have valid expectations for them to use their capabilities, knowledge base and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.
Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations
Procedia PDF Downloads 350
533 Financial Policies in the Process of Global Crisis: Case Study Kosovo
Authors: Shpetim Rezniqi
Abstract:
The current crisis has swept the world, with particular impact on the most developed countries: those which account for most of the world's gross product and enjoy a high standard of living. Even non-experts can describe the visible consequences of the crisis, but how far the crisis will go is impossible to predict. Even the biggest experts can only conjecture, with large divergence among them, yet they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free market laws, with the belief that the market is the regulator of all economic problems. The market, like river water, will flow to find the best course and the necessary solution. Hence fewer market barriers, less state intervention, and a market treated as economically self-regulating. The free market economy became the model of global economic development and progress; it transcended national barriers and became the law of development of the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. All international organizations, such as the World Bank, and the economically powerful states laid their development and cooperation principles on the free market economy and the elimination of state intervention. The less state intervention, the more freedom of action: this was the leading international principle. We live in an era of financial tragedy. Financial markets, and banking in particular, are in a dire state; US stock markets fell about 40%, making this one of the five darkest moments since 1920.
It ranks behind only the Wall Street crash of 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur war, when the price of oil quadrupled, and the famous collapse of 1937/38, on the eve of World War II in Europe. In 2000, even though it seemed like the end of the world was around the corner, the world economy survived almost intact. Of course, there were small recessions in the United States, Europe, and Japan. The situation was much more difficult in the crises of the 1930s and 1970s, yet the world pulled through. The recent financial crisis, by contrast, shows all the signs of being much sharper and of having more consequences. The decline in stock prices is more a byproduct of what is really happening. Financial markets began their dance of death with the credit crisis, which came as a result of the large increase in real estate prices and household debt. These last two phenomena can be matched very well with the excesses of the 1920s, a period during which people spent as if there were no tomorrow. The word recession is no longer avoided; its arrival is no longer sudden or abrupt. But the more the financial markets melt, the greater is the risk of a problematic economy for years to come. Thus, for example, the banking crisis in Japan proved to be much more severe than initially expected, partly because the assets on which many loans were based, especially land, kept falling in value. Land prices in Japan have continued to fall for about 15 years (Adri Nurellari, published in the newspaper "Classifieds"). At this moment, it is still difficult to assess to what extent the crisis has affected the economy and what the consequences of the crisis will be. What we know is that many banks will restrict the granting of credit for some time to come; since granting credit is banks' primary function, this means huge losses.
Keywords: globalisation, finance, crisis, recommendation, bank, credits
Procedia PDF Downloads 389
532 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are not powerful enough to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous positioning, navigation and timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
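At its core, fingerprinting-based localization matches a measured signal vector against a pre-surveyed radio map. A bare-bones weighted k-nearest-neighbor sketch of that matching step (the RSS values and grid points below are invented; the paper's actual pipeline builds the radio map with an S-DCGAN and extracts features with t-SNE rather than matching raw RSS):

```python
import math

# Hypothetical radio map: (x, y) grid point -> RSS fingerprint (dBm) from
# three transmitters. Invented values, not the paper's surveyed database.
radio_map = {
    (0, 0): [-40, -70, -80],
    (0, 5): [-55, -60, -75],
    (5, 0): [-70, -45, -65],
    (5, 5): [-80, -55, -50],
}

def locate(measured, fingerprints, k=2):
    """Weighted k-NN position estimate: nearest fingerprints in signal space."""
    ranked = sorted(fingerprints.items(),
                    key=lambda kv: math.dist(kv[1], measured))[:k]
    weights = [1.0 / (1e-9 + math.dist(fp, measured)) for _, fp in ranked]
    wsum = sum(weights)
    x = sum(w * p[0] for w, (p, _) in zip(weights, ranked)) / wsum
    y = sum(w * p[1] for w, (p, _) in zip(weights, ranked)) / wsum
    return (x, y)

# A measurement taken near (0, 0) should resolve close to that grid point.
est = locate([-42, -68, -79], radio_map)
print(est)
```

The GAN-based contribution in the paper addresses the expensive part of this pipeline: densifying the surveyed radio map so fewer physical fingerprints need to be collected.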
Procedia PDF Downloads 42
531 Co-Smoldered Digestate Ash as Additive for Anaerobic Digestion of Berry Fruit Waste: Stability and Enhanced Production Rate
Authors: Arinze Ezieke, Antonio Serrano, William Clarke, Denys Villa-Gomez
Abstract:
Berry cultivation results in the discharge of putrescible solid waste of high organic strength, which potentially contributes to environmental degradation, making it imperative to assess options for its complete management. Anaerobic digestion (AD) could be an ideal option when the target is energy generation; however, given the characteristically high carbohydrate composition of berry fruit, the technology could be limited by its high alkalinity requirement, which suggests dosing of additives such as buffers and trace element supplements. Overcoming this limitation in an economically viable way could entail replacing synthetic additives with a recycled by-product. Consequently, ash from the co-smoldering of high-COD AD digestate and coconut coir could be a promising material for enhancing the AD of berry fruit waste (BFW), given its characteristically high pH, alkalinity and metal concentrations, typical of synthetic additives. Therefore, the aim of the research was to evaluate the stability and process performance of the AD of BFW when ash from co-smoldered digestate and coir is supplemented as an alkalinity and trace element (TE) source. A series of batch experiments was performed to ascertain the necessity of alkalinity addition and to see whether the alkalinity and metals in the co-smoldered digestate ash can provide the necessary buffer and TEs for the AD of berry fruit waste. Triplicate assays were performed in batch systems at an inoculum-to-substrate ratio (I/S) of 2 (on a VS basis), using sealed serum bottles (160 mL) placed in a heated room (35±0.5 °C) after creating anaerobic conditions. Controls contained inoculum and substrate only, or inoculum, substrate and NaHCO3, for the optimal total alkalinity concentration and TE assays, respectively. Total alkalinity concentration refers to the alkalinity of the inoculum and the additives.
The alkalinity and TE potential of the ash were evaluated by supplementing ash (22.574 g/kg) with a total alkalinity concentration equivalent to the pre-determined optimum from NaHCO3, and by dosing ash (0.012-7.574 g/kg) providing a range of concentrations of specific essential TEs (Co, Fe, Ni, Se), respectively. The results showed a stable process under all examined conditions. Supplementation of 745 mg/L CaCO3-equivalent NaHCO3 resulted in an optimum total alkalinity concentration (TAC) of 2000 mg/L CaCO3. An equivalent ash supplementation of 22.574 g/kg achieved this pre-determined optimum, resulting in a stable process with a 92% increase in the methane production rate (323 versus 168 mL CH4/(gVS·d)) but a 36% reduction in the cumulative methane production (103 versus 161 mL CH4/gVS). Adding ash at incremental dosages as a TE source reduced the cumulative methane production, with the highest dosage of 7.574 g/kg having the largest effect (-23.5%); however, the seemingly immediate bioavailability of TEs at this high dosage allowed a 15% increase in the methane production rate. Given the increased methane production rate, the results demonstrate that the ash at high dosages could be an effective supplementary material for either a buffered or a non-buffered berry fruit waste AD system.
Keywords: anaerobic digestion, alkalinity, co-smoldered digestate ash, trace elements
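The rate and yield changes quoted above follow from simple relative-change arithmetic. As a check on the reported figures (the input values are taken directly from the abstract):

```python
def percent_change(new, old):
    """Relative change of new vs. old, in percent."""
    return 100.0 * (new - old) / old

# Methane production rate with vs. without ash (mL CH4/(gVS*d)):
print(round(percent_change(323, 168)))   # 92  -> the reported 92% increase

# Cumulative methane production with vs. without ash (mL CH4/gVS):
print(round(percent_change(103, 161)))   # -36 -> the reported 36% reduction
```

Both reported percentages are consistent with the underlying measurements.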
Procedia PDF Downloads 122
530 Averting a Financial Crisis through Regulation, Including Legislation
Authors: Maria Krambia-Kapardis, Andreas Kapardis
Abstract:
The paper discusses regulatory and legislative measures implemented by various nations in an effort to avert another financial crisis. More specifically, to address the financial crisis, the European Commission followed the practice of other developed countries and implemented a European Economic Recovery Plan in an attempt to overhaul the regulatory and supervisory framework of the financial sector. In 2010 the Commission introduced the European Systemic Risk Board and in 2011 the European System of Financial Supervision. Some experts have advocated that the type and extent of financial regulation introduced in Europe in the wake of the 2008 crisis has been excessive and counterproductive. In considering how different countries responded to the financial crisis, global regulators have shown a more focused commitment to combat industry misconduct and to pre-empt abusive behavior. Regulators have also increased the funding and resources at their disposal; have increased regulatory fines, with an increasing trend towards action against individuals; and, finally, have focused on market abuse and market conduct issues. Financial regulation can be effected, first of all, through legislation. However, neither ex ante nor ex post regulation is by itself effective in reducing systemic risk. Consequently, to avert a financial crisis, in their endeavor to achieve both economic efficiency and financial stability, governments need to balance the two approaches to financial regulation. Fiduciary duty is another means by which the behavior of actors in the financial world is constrained and, thus, regulated. Furthermore, fiduciary duties extend over and above other existing requirements set out by statute and/or common law and cover allegations of breach of fiduciary duty, negligence or fraud. Careful analysis of the etiology of the 2008 financial crisis demonstrates the great importance of corporate governance as a way of regulating boardroom behavior.
In addition, the regulation of professions, including accountants and auditors, plays a crucial role as far as the financial management of companies is concerned. In the US, the Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board in order to protect investors from financial accounting fraud. In most countries around the world, however, accounting regulation consists of a legal framework, international standards, education, and licensure. Accounting regulation is necessary because of the information asymmetry and the conflict of interest that exist between managers and users of financial information. If a holistic approach is to be taken, then one cannot ignore the regulation of legislators themselves, which can take the form of hard or soft legislation. The science of averting a financial crisis is yet to be perfected and this, as shown by the preceding discussion, is unlikely to be achieved in the foreseeable future, as ‘disaster myopia’ may be reduced but will not be eliminated. It is easier, of course, to be wise in hindsight, and regulating unreasonably risky decisions and unethical or outright criminal behavior in the financial world remain major challenges for governments, corporations, and professions alike.
Keywords: financial crisis, legislation, regulation, financial regulation
Procedia PDF Downloads 398
529 Serum Concentration of the CCL7 Chemokine in Diabetic Pregnant Women during Pregnancy until the Postpartum Period
Authors: Fernanda Piculo, Giovana Vesentini, Gabriela Marini, Debora Cristina Damasceno, Angelica Mercia Pascon Barbosa, Marilza Vieira Cunha Rudge
Abstract:
Introduction: Women with previous gestational diabetes mellitus (GDM) were significantly more likely to have urinary incontinence (UI) and pelvic floor muscle dysfunction than non-diabetic women two years after a cesarean section. Additional results demonstrated that induced diabetes causes detrimental effects on pregnant rat urethral muscle. These results indicate the need to explore the mechanistic role of a recovery factor in female UI. Chemokine ligand 7 (CCL7) was significantly overexpressed in rat serum, urethral, and vaginal tissues immediately following induction of stress UI in a rat model simulating birth trauma. CCL7 overexpression has shown potency for stimulating targeted stem cell migration and provides a translational link (a clinical measurement) that offers further opportunities for treatment. The aim of this study was to investigate the profile of CCL7 levels in diabetic pregnant women with urinary incontinence during pregnancy and over the first year postpartum. Methods: This study was conducted in the Perinatal Diabetes Research Center of the Botucatu Medical School/UNESP and was approved by the Research Ethics Committee of the institution (CAAE: 20639813.0.0000.5411). The diagnosis of GDM was established between the 24th and 28th gestational weeks by the 75 g OGTT according to ADA criteria. Urinary incontinence was defined according to the International Continence Society, and CCL7 levels were measured by ELISA (R&D Systems, Catalog Number DCC700). Two hundred twelve women were classified into four study groups: normoglycemic continent (NC), normoglycemic incontinent (NI), diabetic continent (DC), and diabetic incontinent (DI). They were evaluated at six time points: 12-18, 24-28, and 34-38 gestational weeks, and 24-48 hours, 6 weeks, and 6-12 months postpartum. Results: At 12-18 weeks, only two groups, continent and incontinent, could be considered, because GDM had not yet been diagnosed at this early gestational stage.
The group with GDM and UI (the DI group) showed lower levels of CCL7 at all time points during pregnancy and postpartum compared to the normoglycemic groups (NC and NI), indicating that these women had not recovered from childbirth-induced UI during the 6-12 months postpartum compared to their controls, and that the progression of UI and/or the lack of recovery throughout the first postpartum year may be related to lower levels of CCL7. In contrast, serum CCL7 was significantly increased in the NC group. Taken together, these findings of overexpression of CCL7 in the NC group and decreased levels in the DI group suggest that diabetes delays recovery from childbirth-induced UI and that CCL7 could potentially be used as a serum marker of injury. Conclusion: This study demonstrates lower levels of CCL7 in the DI group during pregnancy and postpartum and suggests that the progression of UI in diabetic women and/or the lack of recovery throughout the first postpartum year may be related to low levels of CCL7. This offers translational potential, as CCL7 measurement could serve as a surrogate for injury after delivery. Successful controlled CCL7-mediated stem cell homing to the lower urinary tract could one day introduce the potential for non-operative treatment or prevention of stress urinary incontinence.
Keywords: CCL7, gestational diabetes, pregnancy, urinary incontinence
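The 2×2 group scheme used in the study (glycemic status × continence) can be sketched as a simple classification; the function name below is illustrative, not from the paper:

```python
# Illustrative sketch of the study's 2x2 grouping by glycemic status and
# continence. Group codes follow the abstract: NC, NI, DC, DI.
def classify_group(has_gdm: bool, has_ui: bool) -> str:
    """Map a participant's GDM and UI status to her study group code."""
    glycemic = "D" if has_gdm else "N"    # D = diabetic, N = normoglycemic
    continence = "I" if has_ui else "C"   # I = incontinent, C = continent
    return glycemic + continence

# Example: a woman with GDM and urinary incontinence falls in the DI group.
print(classify_group(True, True))    # DI
print(classify_group(False, False))  # NC
```

Note that at the 12-18 week visit, before GDM diagnosis, only the continence dimension of this scheme can be applied.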
Procedia PDF Downloads 336