Search results for: health data
1234 Denial among Women Living with Cancer: An Exploratory Study to Understand the Consequences of Cancer and the Denial Mechanism
Authors: Judith Partouche-Sebban, Saeedeh Rezaee Vessal
Abstract:
Because of the rising number of new cases of cancer, especially among women, it is essential to better understand how women experience cancer in order to offer them adapted support and care and enhance their well-being and patient experience. Cancer is a traumatic experience in which the diagnosis, the medical treatments, and the related side effects lead to deep physical and psychological changes that may arouse considerable stress and anxiety. In order to reduce these negative emotions, women tend to use various defense mechanisms, among which denial has been described as the most frequent mechanism used by breast cancer patients. This study aims to better understand the consequences of the experience of cancer and their link with the adoption of a denial strategy. The empirical research was conducted among female cancer survivors in France. Since the topic of this study is relatively unexplored, a qualitative methodology and open-ended interviews were employed. In total, 25 semi-directive interviews were conducted with women with different cancers, at different stages of treatment, and of different ages. A systematic inductive method was used to analyze the data. The content analysis highlighted three different denial-related behaviors among women with cancer, which serve a self-protective function. First, women who expressed high levels of anxiety confessed they tended to completely deny the existence of their cancer immediately after the diagnosis of their illness. These women mainly exhibited many fears and a deep distrust toward the medical context and professionals. This coping mechanism is described by the patients as being unconscious. Second, other women deliberately decided to deny partial information about their cancer, whether this information was related to the stages of the illness, its emotional consequences, or its behavioral consequences. These women use this strategy as a way to avoid the reality of the illness and its impact on the different aspects of their life, as if cancer did not exist. Third, some women tend to reinterpret and give meaning to their cancer as a way to reduce its impact on their life. To this end, they may use magical thinking, positive reframing, or reinterpretation. Because denial may lead to delays in medical treatment, this topic deserves deep investigation, especially in the context of oncology. As denial is a specific defense mechanism, this study contributes to the existing literature in service marketing that focuses on emotions and emotional regulation in healthcare services, a crucial issue. Moreover, this study has several managerial implications for healthcare professionals who interact with patients in order to implement better care and support for the patients.
Keywords: cancer, coping mechanisms, denial, healthcare services
Procedia PDF Downloads 88
1233 The Usage of Negative Emotive Words in Twitter
Authors: Martina Katalin Szabó, István Üveges
Abstract:
In this paper, the usage of negative emotive words is examined on the basis of a large Hungarian Twitter database via NLP methods. The data are analysed from a gender point of view, as well as in terms of changes in language usage over time. The term negative emotive word refers to those words that, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g. rohadt jó ’damn good’) or as a sentiment expression with positive polarity despite their negative prior polarity (e.g. brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the present analysis, 67783 tweets were collected: 37818 tweets (19580 written by females and 18238 written by males) from 2016 and 48344 (18379 written by females and 29965 written by males) from 2021. The goal of the research was to compile two datasets that are comparable from the viewpoint of semantic change as well as gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers was also compiled (containing 214 words). After basic preprocessing steps, tweets were processed by ‘magyarlanc’, a toolkit written in Java for the linguistic processing of Hungarian texts. Then, the frequency and collocation features of all these words in our corpus were automatically analyzed (via the analysis of the parts of speech and sentiment values of the co-occurring words). Finally, the results of all four subcorpora were compared. Some of the main outcomes of our analyses are provided here: There are almost four times fewer cases in the male corpus than in the female corpus in which the negative emotive intensifier modified a negative polarity word in the tweet (e.g., damn bad). At the same time, male authors used these intensifiers more frequently to modify a positive polarity or a neutral word (e.g., damn good and damn big). Results also pointed out that, in contrast to female authors, male authors used these words much more frequently as positive polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and the frequency proportion of the words is more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the words examined were identified: some of the words collocate with more positive words in the second subcorpus than in the first, which points to the semantic change of these words over time.
Keywords: gender differences, negative emotive words, semantic changes over time, twitter
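As an illustration of the collocation step described in this abstract, the following Python fragment is a minimal sketch of how the polarity of words modified by a negative emotive intensifier could be tallied. The token-tuple format, the three-word intensifier subset, and the polarity labels are illustrative assumptions, not the authors' magyarlanc pipeline or their 214-item lexicon.

```python
from collections import Counter

# Illustrative subset only; the authors' lexicon contains 214 intensifiers.
INTENSIFIERS = {"rohadt", "brutális", "durva"}

def collocation_profile(parsed_tweets):
    """Count the polarity of the word that follows a negative emotive intensifier.

    Each tweet is assumed to be a list of (token, lemma, pos, polarity) tuples,
    as they might come out of a morphological analyser.
    """
    profile = Counter()
    for tweet in parsed_tweets:
        for i, (token, lemma, pos, polarity) in enumerate(tweet[:-1]):
            if lemma in INTENSIFIERS:
                _, _, next_pos, next_polarity = tweet[i + 1]
                if next_pos in {"ADJ", "ADV", "NOUN", "VERB"}:
                    profile[next_polarity] += 1  # "positive" / "negative" / "neutral"
    return profile

# "rohadt jó" ('damn good'): the intensifier modifies a positive-polarity adjective
example = [[("rohadt", "rohadt", "ADJ", "negative"), ("jó", "jó", "ADJ", "positive")]]
print(collocation_profile(example))  # Counter({'positive': 1})
```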
Procedia PDF Downloads 207
1232 Portuguese Teachers in Bilingual Schools in Brazil: Professional Identities and Intercultural Conflicts
Authors: Antonieta Heyden Megale
Abstract:
With the advent of globalization, the social, cultural and linguistic situation of the whole world has changed. In this scenario, the teaching of English in Brazil has become a booming business, and the belief that this language is essential to a successful life is played up by the media, which sees it as a commodity and spares no effort to sell it. In this context, the growth of bilingual and international schools that have English and Portuguese as languages of instruction has become evident. According to federal legislation, all schools in the country must follow the curriculum guidelines proposed by the Ministry of Education of Brazil. It is then mandatory that, in addition to the specific foreign curriculum an international school subscribes to, it must also teach all subjects of the official minimum curriculum, and these subjects have to be taught in Portuguese. It is important to emphasize that, in these schools, English is the most prestigious language. Therefore, firstly, Brazilian teachers who teach Portuguese in such contexts find themselves in a situation in which they teach in a low-status language. Secondly, because such teachers’ actions are guided by a different cultural matrix, which differs considerably from Anglo-Saxon values and beliefs, they often experience intercultural conflict in their workplace. Taking this into consideration, this research, focusing on the trajectories of a specific group of Brazilian teachers of Portuguese in international and bilingual schools located in the city of São Paulo, intends to analyze how they discursively represent their own professional identities and practices. More specifically, the objectives of this research are to understand, from the perspective of the investigated teachers, how they (i) narratively rebuild their professional careers and explain the factors that led them to an international or to an immersion bilingual school; (ii) position themselves with respect to their linguistic repertoire; (iii) interpret the intercultural practices they are involved with in school; and (iv) position themselves by foregrounding categories to determine their membership in the group of Portuguese teachers. We have worked with these teachers’ autobiographical narratives. The autobiographical approach assumes that the stories told by teachers are systems of meaning involved in the production of identities and subjectivities in the context of power relations. The teachers' narratives were elicited by the following trigger: "I would like you to tell me how you became a teacher in a bilingual/international school and what your impressions are about your work and about the context in which it is inserted". These narratives were produced orally, recorded, and transcribed for analysis. The teachers were also invited to draw their "linguistic portraits". The theoretical concepts of positioning and indexical cues were taken into consideration in the data analysis. The narratives produced by the teachers point to intercultural conflicts related to their expectations and representations of others, which are never neutral or objective truths but discursive constructions.
Keywords: bilingual schools, identity, interculturality, narrative
Procedia PDF Downloads 339
1231 Hydrodynamic Analysis of Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters
Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut
Abstract:
In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays. These payload bays can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs in contrast to static docks that are affixed to the seafloor, as they actively impact the local flow field. These flow field impacts require analysis to determine whether UUVs can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV are significant. The vertical thrusters produce significant wake structures, but their orientation ensures that the wake effects are exerted below the UUV, minimising the impact. It was also seen that the OUUV experiences higher drag forces than the UUUV, which correlates to an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is not feasible. The turbulence, drag and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
Keywords: underwater vehicles, submarine, autonomous underwater vehicles, auv, computational fluid dynamics, flow fields, pressure, turbulence, drag
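To give a sense of how higher drag translates into extra energy expenditure, the following is a back-of-the-envelope sketch using the standard drag equation F_D = 0.5·rho·U²·C_D·A; the speeds, drag coefficients, frontal area and path length below are hypothetical placeholders, not values reported in the paper or extracted from its CFD results.

```python
# Hypothetical comparison of propulsion energy spent overcoming drag on the approach.
RHO_SEAWATER = 1025.0  # kg/m^3

def drag_force(speed, drag_coeff, frontal_area, rho=RHO_SEAWATER):
    """Steady drag force in newtons from the standard drag equation."""
    return 0.5 * rho * speed ** 2 * drag_coeff * frontal_area

def approach_energy(speed, drag_coeff, frontal_area, path_length):
    """Energy (J) spent overcoming drag along a straight approach of given length."""
    return drag_force(speed, drag_coeff, frontal_area) * path_length

ouuv = approach_energy(speed=1.0, drag_coeff=0.95, frontal_area=0.12, path_length=20.0)
uuuv = approach_energy(speed=1.0, drag_coeff=0.80, frontal_area=0.12, path_length=20.0)
print(f"extra energy for the OUUV: {ouuv - uuuv:.1f} J ({100 * (ouuv / uuuv - 1):.0f}% more)")
```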
Procedia PDF Downloads 80
1230 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data of 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure. However, the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the result shows that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. The result indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity concern arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study tries to deal with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies indicates that the hedonic price of certain environmental assets varies spatially when GWR is applied. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that the omitted spatially related variables might bias the result of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses. The effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
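For readers unfamiliar with GWR, the core idea of fitting a separate, distance-weighted hedonic regression at every location can be sketched as follows. This is a minimal illustration with a Gaussian kernel and simulated data; the variables, bandwidth and coefficients are hypothetical, and the spatial fixed effects used in the paper are omitted for brevity.

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """Minimal GWR sketch: a weighted least-squares fit at every observation,
    with Gaussian kernel weights based on distance to the regression point."""
    n, k = X.shape
    betas = np.zeros((n, k))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)        # distances to point i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)               # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # local coefficients
    return betas

# Hypothetical hedonic specification: price ~ intercept + floor area + flood dummy
rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
area = rng.uniform(20, 120, n)
flood = rng.integers(0, 2, n)
price = 300 + 5 * area - 20 * flood + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), area, flood])
local_betas = gwr_coefficients(X, price, coords, bandwidth=2.0)
print("range of local flood-risk coefficients:",
      local_betas[:, 2].min().round(1), "to", local_betas[:, 2].max().round(1))
```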
Procedia PDF Downloads 291
1229 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft in an airport, allocating a path to each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which considers the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra’s algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
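The following Python fragment is a minimal sketch, in the spirit of the approach described above, of a time-based A* search in which g(n) is the earliest arrival time at a node, h(n) estimates the remaining travel time, and nodes carry reserved time windows from previously routed aircraft. The toy taxiway graph, travel times, reservations and heuristic are hypothetical, and the waiting rule is deliberately simplified; it is not the authors' A-star QPPTW implementation.

```python
import heapq

def a_star_time_windows(graph, travel_time, reserved, start, goal, t0, h):
    """Time-based A*: expand the node with the lowest (arrival time + estimated
    remaining time); delay entry to a node while it is reserved by other traffic."""
    open_set = [(t0 + h(start), t0, start, [start])]
    best = {start: t0}
    while open_set:
        _, t, node, path = heapq.heappop(open_set)
        if node == goal:
            return t, path
        for nxt in graph[node]:
            arrival = t + travel_time[(node, nxt)]
            # simple holding rule: wait until the next node is free of reservations
            for lo, hi in sorted(reserved.get(nxt, [])):
                if lo <= arrival <= hi:
                    arrival = hi
            if arrival < best.get(nxt, float("inf")):
                best[nxt] = arrival
                heapq.heappush(open_set, (arrival + h(nxt), arrival, nxt, path + [nxt]))
    return None

# Toy example: A -> B -> C, with node B reserved by another aircraft during [3, 6]
graph = {"A": ["B"], "B": ["C"], "C": []}
travel_time = {("A", "B"): 2, ("B", "C"): 2}
reserved = {"B": [(3, 6)]}
h = lambda n: {"A": 4, "B": 2, "C": 0}[n]  # admissible remaining-time estimate
print(a_star_time_windows(graph, travel_time, reserved, "A", "C", t0=2, h=h))
# -> (8, ['A', 'B', 'C']): the aircraft holds until B is released at t = 6
```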
Procedia PDF Downloads 231
1228 The Jury System in the Courts in Nineteenth Century Assam: Power Negotiations and Politics in an Institutional Rubric of a Colonial Regime
Authors: Jahnu Bharadwaj
Abstract:
In the third decade of the 19th century, the political landscape of the Brahmaputra valley changed at many levels. The establishment of the East India Company’s authority in ‘Assam’ was completed with the Treaty of Yandaboo. The whole phenomenon of the annexation of Assam into the British Indian Empire led to several administrative reorganizations and reforms under the new regime. British colonial rule was distinguished by new systems and institutions of governance. This paper broadly looks at the historical proceedings of the introduction of the Rule of Law and a new legal structure in the region of ‘Assam’. Drawing on extensive archival data, this paper chiefly examines the trajectory of an important element in the new legal apparatus, i.e. the jury in the British criminal courts introduced in the newly annexed region. Right from the beginning of colonial legal innovations, with the establishment of the panchayats and the parallel courts in Assam, the jury became an important element in the structure of the judicial system. In both civil and criminal courts, the jury was to be formed from the learned members of the ‘native’ society. In the working of the criminal court, the jury became significantly powerful and influential. The structure meant that the judge or the British authority ultimately had no compulsion to obey the verdict of the jury. However, the structure also provided that the jury had a considerable say in matters of court proceedings, and their verdict carried significant weight. This study looks at certain important criminal cases from the nineteenth century and the functioning of the jury in those cases. The power play on display between the British officials, judges and the members of the jury helps to highlight the important deliberations and politics that were in place in the functioning of the British criminal legal apparatus in colonial Assam. The working and the politics of the members of the jury in many cases exerted considerable influence on court proceedings. The interesting negotiations of the British officials or judges also present us with vital insights. By reflecting on the difficulty that the British officials and judges felt with the considerable space for opinion and difference that was provided to important members of the local society, this paper seeks to locate, with evidence, the racial politics at play within the official formulations of the legal apparatus under colonial rule in Assam. This study argues that despite the rhetorical claims of legal equality within the Empire, racial consideration and racial politics were a reality even in the making of the structure itself. This in a way helps to enrich our ideas about the racial elements at work in the numerous layers sustaining the colonial regime.
Keywords: criminal courts, colonial regime, jury, race
Procedia PDF Downloads 176
1227 Quality Improvement of the Sand Moulding Process in Foundries Using Six Sigma Technique
Authors: Cindy Sithole, Didier Nyembwe, Peter Olubambi
Abstract:
The sand casting process involves pattern making, mould making, metal pouring and shake out. Every step in the sand moulding process is critical for the production of good quality castings. However, waste generated during the sand moulding operation and lack of quality are matters that cause performance inefficiencies and lack of competitiveness in South African foundries. Defects produced in the sand moulding process are only visible in the final product (casting), which results in an increased number of scrapped castings, reduced sales and increased cost in the foundry. The purpose of this research is to propose a Six Sigma (DMAIC: Define, Measure, Analyze, Improve and Control) intervention in sand moulding foundries and to reduce the variation caused by deficiencies in the sand moulding process in South African foundries. Its objective is to create sustainability and enhance productivity in the South African foundry industry. Six Sigma is a data-driven method of process improvement that aims to eliminate variation in business processes using statistical control methods. Six Sigma focuses on business performance improvement through quality initiatives using Ishikawa's seven basic tools of quality. The objectives of Six Sigma are to eliminate features that affect productivity and profit and to meet customers’ demands. Six Sigma has become one of the most important tools/techniques for attaining competitive advantage. Competitive advantage for sand casting foundries in South Africa means improved plant maintenance processes, improved product quality and proper utilization of resources, especially scarce resources. Defects such as sand inclusion, flashes and sand burn-on were identified as resulting from sand moulding process inefficiencies using the Six Sigma technique. The causes were found to be the wrong design of the mould, due to the pattern used, and poor ramming of the moulding sand in a foundry. Six Sigma tools such as the voice of the customer, the fishbone diagram, the voice of the process and process mapping were used to define the problem in the foundry and to outline the critical-to-quality elements. The SIPOC (Supplier, Input, Process, Output, Customer) diagram was also employed to ensure that the material and process parameters were achieved to ensure quality improvement in a foundry. The process capability of the sand moulding process was measured to understand the current performance and enable improvement. The expected results of this research are reduced sand moulding process variation, increased productivity and competitive advantage.
Keywords: defects, foundries, quality improvement, sand moulding, six sigma (DMAIC)
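As an illustration of the Measure-phase capability calculation mentioned above, the short Python sketch below computes the standard short-term indices Cp and Cpk for a moulding sand property against its specification limits. The property, the readings and the specification limits are hypothetical examples, not data from the study.

```python
import statistics

def process_capability(samples, lsl, usl):
    """Short-term capability indices for a measured property against its
    lower (lsl) and upper (usl) specification limits."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)          # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)             # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability accounting for centring
    return mu, sigma, cp, cpk

# Hypothetical green compression strength readings (kPa) and spec limits
readings = [152, 148, 155, 160, 149, 151, 158, 146, 150, 154]
mu, sigma, cp, cpk = process_capability(readings, lsl=140, usl=170)
print(f"mean={mu:.1f}  sd={sigma:.2f}  Cp={cp:.2f}  Cpk={cpk:.2f}")
```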
Procedia PDF Downloads 197
1226 Comfort Evaluation of Summer Knitted Clothes of Tencel and Cotton Fabrics
Authors: Mona Mohamed Shawkt Ragab, Heba Mohamed Darwish
Abstract:
Context: Comfort properties of garments are crucial for the wearer, and with the increasing demand for cotton fabric, there is a need to explore alternative fabrics that can offer similar or superior comfort properties. This study focuses on comparing the comfort properties of tencel/cotton single jersey fabric and cotton single jersey fabric, with the aim of identifying fabrics that are more suitable for summer clothes. Research Aim: The aim of this study is to evaluate the comfort properties of tencel/cotton single jersey fabric and cotton single jersey fabric, with the goal of identifying fabrics that can serve as alternatives to cotton, considering their comfort properties for summer clothing. Methodology: An experimental, analytical approach was employed in this study. Two circular knitting machines were used to produce the fabrics, one with a 24-inch gauge and the other with a 28-inch gauge. Both fabrics were knitted with three different loop lengths (3.05 mm, 2.9 mm, and 2.6 mm) to obtain loose, medium, and tight fabrics for evaluation. Various comfort properties, including air permeability, water vapor permeability, wickability, and thermal resistance, were measured for both fabric types. Findings: The study found a significant difference in comfort properties between tencel/cotton single jersey fabric and cotton single jersey fabric. Tencel/cotton fabric exhibited higher air permeability, water vapor permeability, and wickability compared to cotton fabric. These findings suggest that tencel fabric is more suitable for summer clothes due to its superior ventilation and absorption properties. Theoretical Importance: This study contributes to the exploration of alternative fabrics to cotton by evaluating their comfort properties. By identifying fabrics that offer better comfort properties than cotton, particularly in terms of water usage, the study provides valuable insights into sustainable fabric choices for the fashion industry. Data Collection and Analysis Procedures: The comfort properties of the fabrics were measured using appropriate testing methods. Paired comparison t-tests were conducted to determine the significant differences between tencel/cotton fabric and cotton fabric in the measured properties. Correlation coefficients were also calculated to examine the relationships between the factors under study. Question Addressed: The study addresses the question of whether tencel/cotton single jersey fabric can serve as an alternative to cotton fabric for summer clothes, considering their comfort properties. Conclusion: The study concludes that tencel/cotton single jersey fabric offers superior comfort properties compared to cotton single jersey fabric, making it a suitable alternative for summer clothes. The findings also highlight the importance of considering fabric properties, such as air permeability, water vapor permeability, and wickability, when selecting materials for garments to enhance wearer comfort. This research contributes to the search for sustainable alternatives to cotton and provides valuable insights for the fashion industry in making informed fabric choices.
Keywords: comfort properties, cotton fabric, tencel fabric, single jersey
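The statistical comparison described in the Data Collection and Analysis Procedures can be illustrated with the small Python sketch below: a paired comparison t-test between the two fabric types measured at matched tightness levels, plus a correlation coefficient against loop length. The permeability readings are hypothetical placeholders, not the study's measurements.

```python
from scipy import stats

# Hypothetical air-permeability readings (cm^3/cm^2/s) for the same three
# tightness levels (loose, medium, tight), measured on both fabric types.
tencel_cotton = [180.0, 155.0, 120.0]
cotton        = [150.0, 130.0, 100.0]
loop_lengths  = [3.05, 2.9, 2.6]  # mm

t_stat, p_value = stats.ttest_rel(tencel_cotton, cotton)   # paired comparison t-test
r, r_p = stats.pearsonr(loop_lengths, tencel_cotton)       # loop length vs. permeability
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"correlation with loop length: r = {r:.2f}, p = {r_p:.3f}")
```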
Procedia PDF Downloads 77
1225 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics
Authors: Maria Arechavaleta, Mark Halpin
Abstract:
In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision question that drives the total system cost is: how much unserved (or curtailed) energy is acceptable? Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases total cost and provides a benefit which is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics are in the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper. These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (probably a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems
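The core calculation of loss-of-energy probability and expected unserved energy from minute-level curves can be sketched as follows. This is a deliberately simplified illustration with an ideal battery model (no charge/discharge limits or losses), an assumed initial state of charge, and a synthetic two-hour data snippet; it is not the authors' full method or data.

```python
def reliability_indices(production_wh, consumption_wh, battery_capacity_wh):
    """Step through one-minute energy records, charge/discharge an ideal battery,
    and tally curtailed energy. Returns the loss-of-energy probability (fraction
    of minutes with curtailment) and the expected unserved energy (Wh)."""
    soc = battery_capacity_wh / 2      # assumed initial state of charge
    short_minutes = 0
    unserved = 0.0
    for p, c in zip(production_wh, consumption_wh):
        soc += p - c
        if soc > battery_capacity_wh:  # surplus beyond storage is lost
            soc = battery_capacity_wh
        elif soc < 0:                  # deficit: load must be curtailed
            unserved += -soc
            short_minutes += 1
            soc = 0.0
    n = len(production_wh)
    return short_minutes / n, unserved

# Hypothetical two-hour snippet of per-minute data (Wh per minute)
prod = [0.0] * 60 + [8.0] * 60         # one dark hour, then strong sun
cons = [5.0] * 120                     # constant 300 W load
loep, eue = reliability_indices(prod, cons, battery_capacity_wh=100.0)
print(f"LOEP = {loep:.2%}, expected unserved energy = {eue:.0f} Wh")
```

Adding storage capacity or production to the same data and recomputing the two indices gives exactly the kind of incremental benefit-per-cost comparison the abstract describes.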
Procedia PDF Downloads 236
1224 Trafficking of Women in Assam: The Untold Violation of Women's Human Rights
Authors: Mridula Devi
Abstract:
Trafficking of women is a slur on human dignity and a shameful act against human civilization and development. Trafficking of women is one of the worst and most brazen abuses, violating women’s human rights. In India, and more particularly in Assam, human trafficking and the infringement of the human rights of individuals mainly involve the women and girl children of the State. Trafficking in the North East region of India, more particularly in Assam, occurs in two different ways – one is the internal trafficking of women and girl children from conflict-affected rural areas of Assam for domestic work and prostitution; the other is the trafficking of women to other South-East Asian countries like Bangladesh, Bhutan, Bangkok, Myanmar (Burma) for various purposes such as drug trafficking, labor, bar girls and prostitution. Historically, trafficking in human beings is associated with slavery and bonded or forced labor. Since the period of Roman civilization, there has been the practice of traffic in persons in the form of the slave trade among nations. With the rise of new imperialism, slavery became an integral part of the colonial system of European countries. With time, it almost became synonymous with prostitution or commercial sexual exploitation. Finally, the United Nations adopted the Convention for the Suppression of the Traffic in Persons and of the Exploitation of the Prostitution of Others, 1949, by G.A. Res. No. 317(IV). The Convention totally denounces the traffic in persons for the purpose of prostitution. However, it is important to note that nowadays trafficking is not confined to the commercial sexual exploitation of women and children alone. It has myriad forms, and the number of victims has been steadily on the rise over the past few decades. In Assam, it takes place through and for marriage, sexual exploitation, begging, organ trading, militancy conflicts, drug peddling and smuggling, labour, adoption, entertainment, and sports. In this paper, an empirical methodology has been used. The study is based on primary and secondary sources. Data were collected from different books, publications, newspapers, journals, etc. For empirical analysis, random samples were collected and systematized for better results. India suffers from the ignominy of being one of the biggest hubs of women trafficking in the world. Over the years, Assam, the north-eastern part of India, has been bearing the brunt of the rapidly rising evil of trafficking of women, which threatens the life, dignity and human rights of women. Though different laws have been adopted at the international and national levels to curb trafficking, the menace of trafficking of women in Assam has not decreased; rather, it has increased. This causes a serious violation of women’s human rights in Assam. Human trafficking, or women’s trafficking, is a serious crime against society. To curb it in Assam, effective and dedicated measures are required at the state level as well as at the national and international levels.
Keywords: Assam, human trafficking, sexual exploitation, India
Procedia PDF Downloads 516
1223 Initializing E-Classroom in a Multigrade School in the Philippines
Authors: Karl Erickson I. Ebora
Abstract:
Science and technology are two inseparable terms which bring wonders to all aspects of life such as education, medicine, food production and even the environment. In education, technology has become an integral part as it brings many benefits to the teaching-learning process. However, in the Philippines, being one of the developing countries, resources are scarce and not all schools enjoy the fruits brought by technology. Much of this ordeal affects multigrade instruction. These schools are often the last priority in resource allocation since they have a limited number of students. In fact, it is not surprising that these schools do not have even a single computer unit, much less a computer laboratory. This paper sought to present a plan on how public schools would receive their e-classrooms. Specifically, this paper sought to answer questions on the level of the school's readiness in terms of facilities and equipment; the attitude of the respondents towards the use of the e-classroom; the level of teachers’ familiarity with different e-classroom software; and the interventions undertaken by the school to make it e-classroom ready. After gathering and analysing the necessary data, this paper came up with the following conclusions: in terms of facilities and equipment, Guisguis Talon Elementary School (Main), though a multigrade school, is ready to receive an e-classroom; the respondents show a positive disposition toward technology utilization in teaching, as they strongly agree that technology plays an essential role in the teaching-learning process. Also, they strongly agree that technology is a good motivator; it makes teaching and learning more interesting and effective; it makes teaching easy; and technology enhances students’ learning. Additionally, teacher-respondents in Guisguis Talon Elementary School (Main) show familiarity with software. They are very familiar with MS Word, MS Excel, MS PowerPoint, and internet and email. Moreover, they are very familiar with basic e-classroom computer operations and basic application software. They are very familiar with MS Office and can do simple editing and formatting; with accessing and saving information from CDs/DVDs, external hard drives, USBs and the like; and with browsing different search engines and educational sites effectively, downloading and uploading files. Likewise, respondents strongly agree with the interventions undertaken by the school to make it e-classroom ready. They strongly agree that funding and support are needed by the school; that stakeholders should be encouraged to consider donating equipment; that the school and community should try to mobilize their resources in order to help the school; that teachers should be provided with training in order for them to be technologically competent; and that principals and administrators should motivate their teachers to undergo continuous professional development.
Keywords: e-classroom, multi-grade school, DCP, classroom computers
Procedia PDF Downloads 202
1222 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience within physical space – with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view. This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images
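The weighting step described above can be illustrated with a minimal Python sketch: instead of counting how many cells each location sees (classic visibility graph connectivity), the saliency scores of the visible cells are summed, so visually attention-grabbing areas contribute more than blank surfaces. The 4-cell layout, the inter-visibility matrix and the saliency scores below are purely illustrative assumptions, not the authors' data or pipeline.

```python
import numpy as np

def weighted_connectivity(visibility, saliency):
    """Saliency-weighted visual connectivity: row i gives the total saliency
    of all cells visible from cell i, rather than their plain count."""
    return visibility @ saliency

# Hypothetical 4-cell layout: boolean inter-visibility and per-cell saliency
# scores as they might be sampled from a saliency map of the local panorama.
visibility = np.array([[1, 1, 1, 0],
                       [1, 1, 1, 1],
                       [1, 1, 1, 1],
                       [0, 1, 1, 1]], dtype=float)
saliency = np.array([0.1, 0.7, 0.9, 0.2])   # e.g. a bright, textured wall at cells 1-2

plain = visibility.sum(axis=1)               # classic VGA connectivity
weighted = weighted_connectivity(visibility, saliency)
print("unweighted:", plain, " saliency-weighted:", weighted)
```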
Procedia PDF Downloads 77
1221 An Assessment of Digital Platforms, Student Online Learning, Teaching Pedagogies, Research and Training at Kenya College of Accounting University
Authors: Jasmine Renner, Alice Njuguna
Abstract:
The booming technological revolution is driving a change in the mode of delivery systems, especially for e-learning and distance learning in higher education. The report and findings of the study, 'An Assessment of Digital Platforms, Student Online Learning, Teaching Pedagogies, Research and Training at Kenya College of Accounting University' (hereinafter 'KCA'), were produced as a joint collaboration project between the Carnegie African Diaspora Fellowship and input from the staff, students and faculty at KCA University. The participants in this assessment/research met for selected days during a six-week period, during which one-on-one consultations, surveys, questionnaires, focus groups, training, and seminars were conducted to ascertain 'online learning and teaching, curriculum development, research and training at KCA.' The project was organized into an eight-week project workflow, with each week culminating in project activities designed to assess digital online teaching and learning at KCA. The project also included the training of distance learning instructors at KCA and the evaluation of KCA’s distance platforms and programs. Additionally, through a curriculum audit and redesign, the project sought to enhance the curriculum development activities related to distance learning at KCA. The findings of this assessment/research represent the systematic, deliberate process of gathering, analyzing and using data collected from DL students, DL staff and lecturers, and the librarian in charge of online learning resources and access at KCA. We engaged in one-on-one interviews and discussions with staff, students, and faculty and collated the findings to inform practices that are effective in the ongoing design and development of eLearning at KCA University. Overall, the findings of the project led to the following recommendations. First, there is a need to address the infrastructural challenges that led to poor internet connectivity for online learning, as well as training needs and content development for faculty and staff. Second, there is a need to manage cultural impediments within KCA, for example, fears of the vital change from one platform to another for effectiveness, and institutional goodwill as a vital promise of effective online learning. Third, at a practical and short-term level, the following recommendations, based on the systematic findings of the research conducted, were made for KCA University to promote the effective adoption of online learning: a) an eLearning-compatible faculty lab, b) revision of policy to include an eLearning strategy or strategic management, c) recognition of faculty and staff engaged in the process of training for the adoption and implementation of eLearning, and d) adequate website resources on eLearning. The report and findings represent a comprehensive approach to a systematic assessment of online teaching and learning, research and training at KCA.
Keywords: e-learning, digital platforms, student online learning, online teaching pedagogies
Procedia PDF Downloads 194
1220 The Correspondence between Self-regulated Learning, Learning Efficiency and Frequency of ICT Use
Authors: Maria David, Tunde A. Tasko, Katalin Hejja-Nagy, Laszlo Dorner
Abstract:
The authors have been engaged in research on learning since 1998. Recently, the focus of our interest has been how the prevalent use of information and communication technology (ICT) influences students' learning abilities, skills of self-regulated learning and learning efficiency. Nowadays, there are three dominant theories about the psychological effects of ICT use: according to social optimists, modern ICT devices have a positive effect on thinking; as to social pessimists, this effect is rather negative; and, regarding the views of biological optimists, the change is obvious, but these changes can fit into mankind's evolved neurological system, as writing did long ago. The mentality of 'digital natives' differs from that of older people. They process information coming from the outside world in another way, and different experiences result in different cerebral conformations. In this regard, researchers report both positive and negative effects of ICT use. According to several studies, it has a positive effect on cognitive skills, intelligence, school efficiency, the development of self-regulated learning, and self-esteem regarding learning. It is also proven that computers improve skills of visual intelligence such as spatial orientation, iconic skills and visual attention. Among the negative effects of frequent ICT use, researchers mention the decrease of critical thinking, as a permanent flow of information does not give scope for deeper cognitive processing. The aims of our present study were to uncover the developmental characteristics of self-regulated learning in different age groups and to study the correlations of learning efficiency, the level of self-regulated learning and the frequency of use of computers. Our subjects (N=1600) were primary and secondary school students and university students. We studied four age groups (ages 10, 14, 18, 22), with 400 subjects in each. We used the following methods: the research team developed a questionnaire for measuring the level of self-regulated learning and a questionnaire for measuring ICT use, and we used documentary analysis to gain information about grade point average (GPA) and results of competence measures. Finally, we used computer tasks to measure cognitive abilities. The data are currently under analysis, but as to our preliminary results, frequent use of computers results in shorter response times in every age group. Our results show that an ordinary extent of ICT use tends to increase reading competence and has a positive effect on students' abilities, though it did not show a relationship with school marks (GPA). As time passes, GPA gets worse as the learning material becomes more and more difficult. This phenomenon draws attention to the fact that students are unable to switch from guided to independent learning, so it is important to consciously develop skills of self-regulated learning.
Keywords: digital natives, ICT, learning efficiency, reading competence, self-regulated learning
Procedia PDF Downloads 363
1219 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures
Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar
Abstract:
In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to find the design of 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. Acrylonitrile butadiene styrene (ABS) polymer undergoes the glass transition transformation, while polyurethane (TPU) polymer is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs have been generated through topology optimization, and samples have been printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.
Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization
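Two of the modeling ingredients named above, a density-based material interpolation and a temperature-dependent stiffness that mimics the glass transition, can be sketched together in a few lines of Python. The moduli, glass-transition temperature, transition width and penalization exponent below are illustrative assumptions for demonstration only, not the paper's calibration.

```python
import numpy as np

def abs_modulus(T, E_glassy=2.2e9, E_rubbery=0.02e9, Tg=105.0, width=5.0):
    """Assumed smooth drop of the ABS Young's modulus (Pa) across the glass
    transition, modeled with a sigmoid of the given width around Tg."""
    return E_rubbery + (E_glassy - E_rubbery) / (1.0 + np.exp((T - Tg) / width))

def interpolated_modulus(rho, T, E_tpu=0.08e9, p=3.0):
    """SIMP-like density interpolation between the two printed polymers:
    rho = 0 -> prestressed TPU, rho = 1 -> ABS, whose stiffness depends on
    temperature so that heating 'switches off' the stiff phase and lets the
    stored prestress drive the pre-programmed deformation."""
    return E_tpu + rho ** p * (abs_modulus(T) - E_tpu)

rho = np.linspace(0.0, 1.0, 5)           # candidate densities from the optimizer
for T in (20.0, 120.0):                  # below and above the glass transition
    print(f"T = {T:5.1f} C:", np.round(interpolated_modulus(rho, T) / 1e6, 1), "MPa")
```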
Procedia PDF Downloads 212
1218 Ethical Artificial Intelligence: An Exploratory Study of Guidelines
Authors: Ahmad Haidar
Abstract:
The rapid adoption of Artificial Intelligence (AI) technology holds unforeseen risks like privacy violation, unemployment, and algorithmic bias, triggering research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of the principles developed in recent years. There are two fundamental purposes of this paper. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation. In doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis will be presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative approach, more specifically, an inductive approach, to address the theoretical gap. Consequently, this paper tracks the different efforts to achieve “trustworthy AI” and “ethical AI,” concluding with a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit, and second, testing the frequency of each principle, providing the different technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., towards the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. This study’s last contribution clarifies how the principles evolved. To illustrate, in 2018, the Montreal Declaration mentioned eight principles, including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies. Yet, we advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.
Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI
Procedia PDF Downloads 96
1217 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches
Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang
Abstract:
Unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis, as numerous studies have reported. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with the benefit of reducing complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability treated with unilateral and bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experimental and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion, and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following procedures: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens in this study were tested in flexion (8 Nm) and extension (6 Nm) under pure moment. Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1-S1) containing the vertebral bodies, discs, and ligaments was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3-L4 and L4-L5. The bottom surface of the S1 vertebral body was fully geometrically constrained in this study. A 10 Nm pure moment was also applied on the top surface of the L1 vertebral body to drive the lumbar spine through different motions, such as flexion and extension. The experimental results showed that in flexion, the ROMs (±standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively. No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. In L4-L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. In the simulation, results similar to the experiment were found. No significant differences were found at L4-L5 in either flexion or extension in each group. Only 0.02 and 0.04 degrees of variation were observed during flexion and extension between the unilateral and bilateral laminotomy groups. In conclusion, the present finite element and experimental results reveal that no significant differences were observed during flexion and extension between unilateral and bilateral laminotomies in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit similar stability to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study proved this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision.
Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis
Procedia PDF Downloads 404
1216 Keeping Education Non-Confessional While Teaching Children about Religion
Authors: Tünde Puskás, Anita Andersson
Abstract:
This study is part of a research project about whether religion is considered part of Swedish cultural heritage in Swedish preschools. Our aim in this paper is to explore how a Swedish preschool with a religious profile balances keeping the education non-confessional with, at the same time, teaching children about a particular tradition with religious roots, Easter. The point of departure for the theoretical frame of our study is that practical considerations in pedagogical situations are inherently dilemmatic. The dilemmas that are of interest for our study evolve around formalized, intellectual ideologies, such as multiculturalism and secularism, that have an impact on everyday practice. Educational dilemmas may also arise in the intersections of the formalized ideology of non-confessionalism, prescribed in policy documents, and the common-sense understandings of what is included in what is understood as Swedish cultural heritage. In this paper, religion is treated as a human worldview that, similarly to secular ideologies, can be understood as a system of thought. We make use of Ninian Smart's theoretical framework, according to which, in the modern Western world, religious and secular ideologies, as human worldviews, can be studied within the same analytical framework. In order to be able to study the distinctive character of human worldviews, Smart introduced a multi-dimensional model within which the different dimensions interact with each other in various ways and to different degrees. The data for this paper are drawn from fieldwork carried out in 2015-2016 in the form of video ethnography. The empirical material chosen consists of a video recording of a specific activity during which the preschool group took part in an Easter play performed in the local church. The analysis shows that the policy of non-confessionalism, together with the idea that teaching covering religious issues must be purely informational, leads in everyday practice to dilemmas about what is considered religious. At the same time, what the adults actually do with religion fulfills six of the seven dimensions common to religious traditions as outlined by Smart. What we can also conclude from the analysis is that whether it is religion or a cultural tradition that is taught through the performance the children watched in the church depends on how the concept of religion is defined. The analysis shows that the characters of the performance themselves understood religion as the doctrine of Jesus' resurrection from the dead. This narrow understanding of religion enabled them indirectly to teach about the traditions and narratives surrounding Easter while avoiding teaching religion as a belief system.
Keywords: non-confessional education, preschool, religion, tradition
Procedia PDF Downloads 160
1215 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase
Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc
Abstract:
Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. AFSM is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed, or friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and, finally, to identify a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order, these are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, during which the temperature of the system composed of the tool, the filler material, and the substrate rises due to pure friction. Analytic modeling of friction-based heat generation considers the rotational speed and the contact pressure as the main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing of the roughness of the materials in contact over time. This study proposes, through numerical modeling followed by experimental validation, to examine the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
Keywords: numerical model, additive manufacturing, friction, process
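As a rough illustration of the analytic heat-generation modeling mentioned above, the sketch below uses a commonly cited first-order expression for frictional heating under a flat circular contact with uniform pressure and a constant friction coefficient. It is not the authors' model, and all parameter values are illustrative assumptions.

```python
# Minimal sketch of an analytic frictional heat input estimate for a flat,
# circular tool/substrate contact under uniform pressure, a commonly used
# first-order model in friction stir processes. Parameter values are assumed.
import math

def frictional_heat_rate(mu, pressure, omega, radius):
    """Total frictional heat generation rate Q = (2/3)*pi*mu*p*omega*R^3 [W],
    obtained by integrating tau*v = (mu*p)*(omega*r) over the circular contact."""
    return (2.0 / 3.0) * math.pi * mu * pressure * omega * radius**3

mu = 0.3                     # friction coefficient (assumed constant here)
axial_force = 5.0e3          # N (illustrative)
radius = 8.0e-3              # m, tool contact radius (illustrative)
pressure = axial_force / (math.pi * radius**2)   # uniform contact pressure, Pa
rpm = 1200.0
omega = 2.0 * math.pi * rpm / 60.0               # rotational speed, rad/s

print(f"Estimated heat input: {frictional_heat_rate(mu, pressure, omega, radius):.0f} W")
# In practice the friction coefficient varies with temperature (self-lubrication,
# roughness smoothing), which is why the study calibrates it against experiments.
```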
Procedia PDF Downloads 148
1214 Sustainability of Small Tourism Enterprises: A Comparison of Homestays and Independent Businesses from Ghalegaon and Ghandruk of the Annapurna Conservation Area, Nepal
Authors: Baikuntha Prasad Acharya, Elizabeth Halpenny
Abstract:
Small tourism enterprises (STEs) are primary providers of services and attractions in many destinations of less developed countries; they are considered the lifeblood of the tourism sector. Furthermore, in rural community destinations of such countries, including Nepal, STEs are regarded as alternative tools for advancing economic and sociocultural transformations. Many families in rural Nepali destinations are venturing into small tourism entrepreneurship so that they can reduce their poverty and sustain their livelihoods. Most of these communities are utilizing their lifestyles and their natural and cultural heritages as tourism attractions. This study aimed to understand the sustainability of STEs in rural destinations by synthesizing observations from Ghalegaon and Ghandruk of the Annapurna Conservation Area in western Nepal. Ghalegaon has community-based homestays, and Ghandruk has independently owned and operated small tourism businesses such as cafes, tea houses, lodges, guest houses, and hotels. The community-based homestays of Ghalegaon are compared with the independently owned and operated STEs of Ghandruk. The data were collected through multiple sources: 1) surveys of tourists (n=112) and households (n=191); 2) interviews (n=14) with locals; 3) group discussions (n=10) with different local groups, including regional tourism players, experts, and policy makers; 4) observations; and 5) document analysis. The STEs of both communities were first analyzed by assessing their level of sustainability as businesses, and then it was explored how they affected their respective communities' sustainability. The survey indicators and the guidelines for interviews and group discussions were adapted to the Nepalese context based on four pillars of sustainability: economic, social, cultural, and environmental; an additional dimension of management was also included, particularly for the STEs. The findings show weaker economic and management dimensions for Ghalegaon's homestays than for Ghandruk's STEs. Some interesting social complexities of rural tourism and entrepreneurship were also revealed. This study's findings do not strongly resonate with what the Nepal government's current rural tourism strategies have envisioned and prioritized, particularly the expectation that rural homestay tourism opportunities enhance the inclusiveness of women and other deprived communities by spreading benefits to the grassroots level. The study highlights several important applied implications for local tourism management committees, tourism operators and associations, and regional and national tourism authorities. Further studies are advisable in other similar contexts in Nepal and in other countries to see whether the findings vary.
Keywords: Nepal, rural tourism communities, small tourism enterprises, sustainability
Procedia PDF Downloads 335
1213 Reservoir-Triggered Seismicity of Water Level Variation in the Lake Aswan
Authors: Abdel-Monem Sayed Mohamed
Abstract:
Lake Aswan is one of the largest man-made reservoirs in the world. The reservoir began to fill in 1964, and the level rose gradually, with annual irrigation cycles, until it reached a maximum water level of 181.5 m in November 1999, with a capacity of 160 km3. The filling of such a large reservoir changes the stress system, either by increasing the vertical compressional stress through loading and/or by increasing the pore pressure through the decrease of the effective normal stress. The resulting change in the stability of fault zones depends strongly on the orientation of the pre-existing stress and the geometry of the reservoir/fault system. The main earthquake occurred on November 14, 1981, with magnitude 5.5. This event occurred 17 years after the reservoir began to fill, along the active part of the Kalabsha fault, and was located not far from the High Dam. Numerous small earthquakes followed this earthquake and continue to occur. For this reason, 13 seismograph stations (a radio-telemetry network of short-period seismometers) were installed around the northern part of Lake Aswan. The main purpose of the network is to monitor the earthquake activity continuously within the Aswan region. The data described here are obtained from the continuous record of earthquake activity and lake-water level variation over the period from 1982 to 2015. The seismicity is concentrated in the Kalabsha area, where the easterly trending Kalabsha fault intersects northerly trending faults. The earthquake foci are distributed in two seismic zones, shallow and deep in the crust: shallow events have focal depths of less than 12 km, while deep events extend from 12 to 28 km. Correlation between the seismicity and the water level variation in the lake strongly suggests that the micro-earthquakes, particularly those in the shallow seismic zone, fall in the reservoir-triggered seismicity category. Water loading is one of several factors acting as an activating medium in triggering earthquakes. The common factors in all cases of induced seismicity seem to be the presence of specific geological conditions, the tectonic setting, and water loading. The role of the water loading is that of a supplementary source of earthquake events: the earthquake activity in the area originated tectonically (ML ≥ 4), and the water factor acts as an activating medium in triggering small earthquakes (ML ≤ 3). Studying the seismicity induced by the water level variation in Lake Aswan is of great importance and plays a key role in the safety of the High Dam body and its economic resources.
Keywords: Aswan lake, Aswan seismic network, seismicity, water level variation
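A hypothetical sketch of the water-level/seismicity correlation described above: monthly lake levels are compared against counts of small, shallow-zone events. The series and the choice of a Spearman rank correlation are illustrative assumptions, not the Aswan network data.

```python
# Hypothetical sketch: relating monthly lake-water level to the number of small
# (ML <= 3) shallow-zone earthquakes. The series below are placeholders.
import numpy as np
from scipy import stats

water_level_m = np.array([175.2, 176.8, 178.1, 179.5, 181.0, 180.2,
                          178.9, 177.4, 176.0, 175.1, 174.6, 175.8])  # placeholder
quake_count   = np.array([4, 6, 7, 9, 12, 11, 8, 7, 5, 4, 3, 5])      # placeholder

rho, p = stats.spearmanr(water_level_m, quake_count)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A lagged cross-correlation (shifting the seismicity series by a few months)
# is often examined as well, since pore-pressure diffusion delays the response.
```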
Procedia PDF Downloads 373
1212 Teaching of Entrepreneurship and Innovation in Brazilian Universities
Authors: Marcelo T. Okano, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi
Abstract:
Teaching of entrepreneurship and innovation in Brazilian universities has increased in recent years due to several factors, such as the emergence of disciplines like biotechnology, increased globalization, reduced basic funding, and new perspectives on the role of the university in the system of knowledge production. Innovation is increasingly seen as an evolutionary process that involves different institutional spheres or sectors in society. Entrepreneurship is a milestone on the road towards economic progress and makes a huge contribution towards the quality and future hopes of a sector, economy, or even a country. Entrepreneurship is as important in small and medium-sized enterprises (SMEs) and local markets as in large companies and national and international markets, and is just as key a consideration for public companies as for private organizations. Entrepreneurship helps to encourage competition in the current environment created by the effects of globalization. There is an increasing tendency for government policy to promote entrepreneurship for its apparent economic benefit. Accordingly, governments seek to employ entrepreneurship education as a means to stimulate increased levels of economic activity. Entrepreneurship education and training (EET) is growing rapidly in universities and colleges throughout the world, and governments are supporting it both directly and through funding major investments in advice-provision to would-be entrepreneurs and existing small businesses. The Triple Helix of university–industry–government relations is compared with alternative models for explaining the current research system in its social contexts. Communications and negotiations between institutional partners generate an overlay that increasingly reorganizes the underlying arrangements. To achieve the objective of this research, a survey of the literature on entrepreneurship and innovation was conducted, followed by field research with 100 students of Fatec. To collect the data needed for the analysis, we used exploratory research of a qualitative nature. We asked respondents to rate their degree of knowledge of ten topics related to entrepreneurship and innovation; responses were given on a 4-level Likert scale (none, small, medium, and large). We can conclude that terms such as entrepreneurship and innovation are known by most students because the university propagates them across disciplines, lectures, and innovation institutes. More specific items, such as the canvas and design thinking models, are unknown to most respondents. This underscores the importance of the university in teaching innovation and entrepreneurship and in transmitting this knowledge to the students in order to equalize knowledge levels. As a future project, these items will be re-evaluated to create indicators for measuring the level of knowledge.
Keywords: Brazilian universities, entrepreneurship, innovation, globalization
Procedia PDF Downloads 508
1211 A Qualitative Study of Inclusive Growth through Microfinance in India
Authors: Amit Kumar Bardhan, Barnali Nag, Chandra Sekhar Mishra
Abstract:
Microfinance is considered one of the key drivers of financial inclusion and pro-poor financial growth. Microfinance in India became popular through the Self Help Group (SHG) movement initiated by NABARD. In terms of outreach and loan portfolio, the SHG Bank Linkage Programme (SHG-BLP) has emerged as the largest microfinance initiative in the world. The success of financial inclusion lies in the successful implementation of SHG-BLP. SHGs are generally promoted by social welfare organisations like NGOs, welfare societies, government agencies, co-operatives, etc., and even banks are involved in SHG formation. Thus, the pro-poor implementation of the scheme largely depends on the credibility of the SHG Promoting Institutions (SHPIs). The rural poor lack education, skills, and financial literacy and hence need continuous support and proper training, right from planning to implementation. In this study, we have made an attempt to inspect the reasons behind the low penetration of SHG financing to the poorest of the poor from both demand- and supply-side perspectives. Banks, SHPIs, and SHGs are the three key stakeholders in SHG-BLP programmes, and all of them have a vital role in programme implementation. The objective of this paper is to find out the drivers and hurdles on the path to financial inclusion through SHG-BLP and the role of SHPIs in reaching out to the ultra poor. We try to address questions like 'what are the challenges faced by SHPIs in targeting the poor?' and 'what are the factors behind the low credit linkage of SHGs?' Our work is based on a qualitative study of SHG programmes in semi-urban towns in the states of West Bengal and Odisha in India. Data were collected through unstructured questionnaires and in-depth interviews with members of SHGs, SHPIs, and designated banks. The study provides some valuable insights about the programme and a comprehensive view of the problems and challenges faced by SHGs, SHPIs, and banks. On the basis of our understanding from the survey, some findings and policy recommendations that seem relevant are: the increasing level of non-performing assets (NPA) of commercial banks and wilful default in expectation of loan waivers and subsidies are the prime reasons behind the low rate of credit linkage of SHGs; regular changes in SHG schemes and the lack of incentives for post-linkage follow-up result in dysfunctional SHGs; and government schemes are mostly focused on the creation of SHGs and less on livelihood promotion. As a result, in spite of the increasing year-on-year trend in the number of SHGs promoted, there is no real impact on welfare growth. The government and other SHPIs should focus on resource-based SHG promotion rather than only increasing the number of SHGs.
Keywords: financial inclusion, inclusive growth, microfinance, Self-Help Group (SHG), Self-Help Group Promoting Institution (SHPI)
Procedia PDF Downloads 218
1210 Immune Dysregulation in Inflammatory Skin Diseases with Comorbid Metabolic Disorders
Authors: Roman Khanferyan, Levon Gevorkyan, Ivan Radysh
Abstract:
Skin barrier dysfunction induces multiple inflammatory skin diseases. Epidemiological studies clearly support the link between most dermatological pathologies, immune disorders, and metabolic disorders. Among them, the most common are psoriasis (PS) and atopic dermatitis (AD). Psoriasis is a chronic immune-mediated inflammatory skin disease that affects 1.5 to 3.0% of the world's population. Comorbid metabolic disorders play an important role in the progression of PS and of AD as well. It is well known that PS, AD, and overweight/obesity are associated through common pathophysiological mechanisms of mild chronic inflammation. The goal of the study was to investigate the immune disturbances in patients with PS, AD, and comorbid metabolic disorders. To study the prevalence of comorbidity of PS and AD, data from 1,406 patients' disease histories were analyzed. The severity of the disease was assessed using the PASI index (Psoriasis Area and Severity Index). Fifty-nine patients with psoriasis of different lesion localizations and severities, as well as different body mass indexes (BMI), were examined. The concentrations of pro-inflammatory cytokines (IL-6, IL-8, IFNγ, IL-17, IL-18, and TNFα) and chemokines (RANTES, IP-10, MCP-1, and Eotaxin) in sera and in supernatants of peripheral blood mononuclear cells (PBMC) cultivated for 48 h from psoriasis patients and healthy volunteers (36 adults) were determined by multiplex assay (Luminex Corporation, USA). It was demonstrated that 42% of PS patients had comorbidity with different types of atopy, most commonly bronchial asthma and allergic rhinitis. At the same time, the prevalence of AD among PS patients was 8.7%. Serum levels of all studied cytokines (IL-6, IL-8, IFNγ, IL-17, IL-18, and TNF) were higher in most PS patients than in those with AD and in healthy controls (p<0.05). In vitro synthesis of IL-6 and IFNγ by PBMC cultivated for 48 h showed results similar to those determined in blood sera. There was a high correlation between BMI, immune mediators, and the concentrations of adipokines and chemokines (p<0.05). The concentrations of leptin and resistin in obese psoriatic patients were greater by 28.6% and 17%, respectively, compared to non-obese psoriatic patients. In obese patients with psoriasis, the serum levels of adiponectin were decreased up to 1.3-fold. The mean serum RANTES, IP-10, MCP-1, and Eotaxin levels in obese psoriatic patients were decreased by up to 13.1%, 21.9%, 40.4%, and 28.2%, respectively. Similar results were demonstrated in AD patients with comorbid overweight and obesity. Thus, the study demonstrated the important role of cytokine and chemokine dysregulation in inflammatory skin diseases, especially in patients with comorbid obesity and overweight. Metabolic disorders increase the severity of PS and AD and markedly enhance immune dysregulation and the synthesis of adipokines, which correlates with the production of pro-inflammatory immune mediators in comorbid obesity and overweight.
Keywords: psoriasis, atopic dermatitis, pro-inflammatory cytokines, chemokines, comorbid obesity
Procedia PDF Downloads 40
1209 Spare Part Carbon Footprint Reduction with Reman Applications
Authors: Enes Huylu, Sude Erkin, Nur A. Özdemir, Hatice K. Güney, Cemre S. Atılgan, Hüseyin Y. Altıntaş, Aysemin Top, Muammer Yılman, Özak Durmuş
Abstract:
Remanufacturing (reman) applications allow manufacturers to contribute to the circular economy and to introduce products of almost the same quality that are environmentally friendly and lower in cost. The objective of this study is to show that the carbon footprint of automotive spare parts used in vehicles can be reduced by reman applications, based on Life Cycle Analysis framed by ISO 14040 principles. The aim was to investigate reman applications for 21 parts in total. So far, research and calculations have been completed for the alternator, turbocharger, starter motor, compressor, manual transmission, automatic transmission, and DPF (diesel particulate filter) parts, respectively. Since Ford Motor Company and Ford OTOSAN aim to achieve net zero based on Science-Based Targets (SBT) and the European Green Deal, which sets out to make the European Union climate neutral by 2050, the effects of reman applications were researched. First, remanufacturing articles available in the literature were searched based on the yearly high volume of spare parts sold. Based on the reviewed papers' results on material composition and on the emissions released during the original production and remanufacturing phases, a base part was selected as a reference. The data of the selected base part from the literature are then used to make an approximate estimation of the carbon footprint reduction of the corresponding part used at Ford OTOSAN. The estimation model is based on the weight and material composition of the remanufacturing activity reported in the referenced paper. As a result of this study, remanufacturing applications were found to be technically and environmentally feasible, since they significantly reduce the emissions released during the production phase of vehicle components. For this reason, the research and calculations for the total number of targeted products in yearly volume have been completed to a large extent. Thus, based on the targeted parts whose research has been completed, and in line with the net zero targets of Ford Motor Company and Ford OTOSAN for 2050, it is possible to reduce a significant amount of the greenhouse gas (GHG) emissions associated with spare parts used in vehicles if remanufacturing applications are preferred over current production methods. Besides, remanufacturing is observed to reduce the waste stream and cause less pollution than making products from raw materials, by reusing automotive components.
Keywords: greenhouse gas emissions, net zero targets, remanufacturing, spare parts, sustainability
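A minimal sketch of the weight-based scaling approach described in the abstract is given below: the production and remanufacturing emissions reported for a literature reference part are scaled by mass to approximate the footprint of the corresponding spare part. All numerical values are illustrative assumptions, not Ford OTOSAN data.

```python
# Minimal sketch of the weight-based scaling estimate: emissions reported for a
# reference (literature) part are scaled by the mass of the studied part to
# approximate its new-production vs. remanufacturing footprint.
# All numbers below are illustrative placeholders, not company data.

def scaled_emissions(ref_emission_kgco2e, ref_mass_kg, part_mass_kg):
    """Scale a reference part's cradle-to-gate emissions by the mass ratio."""
    return ref_emission_kgco2e * (part_mass_kg / ref_mass_kg)

ref_mass = 6.0            # kg, reference alternator mass from literature (assumed)
ref_new = 45.0            # kg CO2e for new production of the reference part (assumed)
ref_reman = 12.0          # kg CO2e for remanufacturing the reference part (assumed)
part_mass = 7.5           # kg, mass of the studied spare part (assumed)
annual_volume = 10_000    # units sold per year (assumed)

new = scaled_emissions(ref_new, ref_mass, part_mass)
reman = scaled_emissions(ref_reman, ref_mass, part_mass)
saving_t = (new - reman) * annual_volume / 1000.0   # t CO2e per year
print(f"Approximate annual GHG reduction: {saving_t:.1f} t CO2e")
```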
Procedia PDF Downloads 83
1208 Specific Earthquake Ground Motion Levels That Would Affect Medium-To-High Rise Buildings
Authors: Rhommel Grutas, Ishmael Narag, Harley Lacbawan
Abstract:
Construction of high-rise buildings is a means to address the increasing population in Metro Manila, Philippines. The existence of the Valley Fault System within the metropolis and other nearby active faults poses threats to a densely populated city. Distant, shallow, large-magnitude earthquakes have the potential to generate slow, long-period vibrations that would affect medium-to-high rise buildings. Heavy damage and building collapse are consequences of prolonged shaking of the structure: if the ground and the building have almost the same period, a resonance effect would cause prolonged shaking of the building. Microzoning the long-period ground response would aid in the seismic design of medium-to-high rise structures. The shear-wave velocity structure of the subsurface is an important parameter for evaluating ground response. Borehole drilling is one of the conventional methods of determining shear-wave velocity structure; however, it is an expensive approach. As an alternative geophysical exploration method, microtremor array measurements can be used to infer the structure of the subsurface. A microtremor array measurement system was used to survey fifty sites around Metro Manila, including some municipalities of Rizal and Cavite. Measurements were carried out during the day under good weather conditions. The team was composed of six persons for the deployment and simultaneous recording of the microtremor array sensors. The instruments were laid on the ground away from sewage systems and leveled using the adjustment legs and bubble level. A total of four sensors were deployed at each site, three at the vertices of an equilateral triangle with one sensor at the centre. The circular arrays were set up with a maximum side length of approximately four kilometers, and the shortest side length for the smallest array was approximately 700 meters. Each recording lasted twenty to sixty minutes. From the recorded data, f-k analysis was applied to obtain phase velocity curves, and an inversion technique was then applied to construct the shear-wave velocity structure. This project provided a microzonation map of the metropolis and a profile showing the long-period response of the deep sedimentary basin underlying Metro Manila, which would be useful to local administrators in their land use planning and in the earthquake-resistant design of medium-to-high rise buildings.
Keywords: earthquake, ground motion, microtremor, seismic microzonation
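Once a layered shear-wave velocity profile has been inverted from the array data, a quick first-order check of the long-period site response is the fundamental site period T0 ≈ 4·Σ(hᵢ/Vsᵢ). The sketch below illustrates this with an assumed layer model, not the Metro Manila profile.

```python
# Hedged sketch: first-order estimate of the fundamental site period from a
# layered shear-wave velocity (Vs) profile, T0 ~= 4 * sum(h_i / Vs_i).
# The layer model below is an assumed example, not the inverted basin profile.

layers = [  # (thickness in m, Vs in m/s); assumed example profile
    (50.0, 200.0),
    (150.0, 400.0),
    (400.0, 800.0),
]

travel_time = sum(h / vs for h, vs in layers)   # one-way S-wave travel time, s
t0 = 4.0 * travel_time                          # fundamental site period, s
print(f"Estimated fundamental site period: {t0:.1f} s")
# A long T0 (several seconds) flags sites where medium-to-high rise buildings
# with similar natural periods may experience resonance.
```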
Procedia PDF Downloads 469
1207 Coping with Incompatible Identities in Russia: Case of Orthodox Gays
Authors: Siuzan Uorner
Abstract:
The era of late modernity is characterized, on the one hand, by social disintegration and the values of personal freedom, tolerance, and self-expression. Boundaries between the accessible and the elitist, the normal and the abnormal, are blurring. On the other hand, traditional social institutions, such as religion (especially the Russian Orthodox Church), persist, criticizing lifestyles and worldviews other than conventionally structured canons. Despite the declared values and opportunities of late modern society, people's freedom is ambivalent. Personal identity and its aspects are becoming a subject of choice; hence, combinations of identity aspects can be incompatible. Our theoretical framework is based on P. Ricoeur's concept of narrative identity and hermeneutics, E. Goffman's theory of social stigma, self-presentation, and discrepant roles, and W. James's lectures on the varieties of religious experience. This paper aims to reconstruct ways of coping with the incompatible identities of Orthodox gays (an extreme sampling of a combination of sexual orientation and religious identity in a heteronormative society). This study focuses on the discourse of Orthodox gay parishioners and ROC gay priests in Russia (sampling 'hard to reach' populations because of the secrecy of the gay community in the ROC and the sensitivity of the topic itself). We employed a qualitative research design using in-depth, personal, semi-structured online interviews. Recruiting of informants took place on the 'Nuntiare et Recreare' (Russian movement of religious LGBT) page on VKontakte, through a post with an invitation to participate in the research. In this work, we analyzed interview transcripts using axial coding. We chose the Grounded Theory methodology to construct a theory from empirical data and contribute to the growing body of knowledge on ways of harmonizing incompatible identities in late modern societies. The research has found that there are two types of conflict Orthodox gays encounter: canonical contradictions (postulates of Scripture and its interpretations) and problems in social interaction, mainly with ROC priests and Orthodox parishioners. We have revealed the semantic meanings of the words that appear most commonly in the narratives (words such as 'love', 'sin', 'religion', etc.). Finally, we have reconstructed biographical patterns of involvement in LGBT social movements. This paper argues that all incompatibilities are harmonized in the narrative itself. As Ricoeur has suggested, the narrative configuration allows the speaker to gather facts and events together and to compose causal relationships between them. Sexual orientation and religious identity get along and are harmonized in the narrative.
Keywords: gay priests, incompatible identities, narrative identity, Orthodox gays, religious identity, ROC, sexual orientation
Procedia PDF Downloads 139
1206 Simple and Effective Method of Lubrication and Wear Protection
Authors: Buddha Ratna Shrestha, Jimmy Faivre, Xavier Banquy
Abstract:
By precisely controlling the molecular interactions between anti-wear macromolecules and bottle-brush lubricating molecules in the solution state, we obtained a fluid with excellent lubricating and wear protection capabilities. The reason for this synergistic behavior lies in the subtle interaction forces between the fluid components, which allow the confined macromolecules to sustain high loads under shear without rupture. Our results provide rational guides for designing such fluids for virtually any type of surface. The lowest friction coefficient and the maximum pressure that the system can sustain are 5×10⁻³ and 2.5 MPa, which is close to the physiological pressure. Lubricating and protecting surfaces against wear using liquid lubricants is a great technological challenge. Until now, wear protection was usually imparted by surface coatings involving complex chemical modifications of the surface, while lubrication was provided by a lubricating fluid. Hence, we here search for a simple, effective, and applicable solution to the above problem using the surface force apparatus (SFA). SFA is a powerful technique with sub-angstrom resolution in distance and 10 nN/m resolution in interaction force while performing friction experiments. Thus, SFA is used to gain direct insight into the interaction forces, materials, and friction at the interface, and the exact contact area is always known. From our experiments, we found that by precisely controlling the molecular interactions between anti-wear macromolecules and lubricating molecules, we obtained a fluid with excellent lubricating and wear protection capabilities; this synergy relies on the subtle interaction forces between the fluid components, which allow the confined macromolecules to sustain high loads under shear without rupture. The lowest friction coefficient and the maximum pressure that the system can sustain in our experiments are 5×10⁻³ and 2.5 GPa, which is well above the physiological pressure. Most importantly, this process is a simple, effective, and applicable method of lubrication and protection, as until now wear protection was usually imparted by surface coatings involving complex chemical modifications of the surface. Currently, the frictional data obtained while sliding flat mica surfaces have been compared, confirming that a particular solution mixture surpasses all other combinations. We would further like to confirm that the lubrication and anti-wear protection remain the same by performing friction experiments on synthetic cartilage.
Keywords: bottle brush polymer, hyaluronic acid, lubrication, tribology
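As an illustration of how the two headline numbers above could be derived from SFA-type measurements, the sketch below fits a friction coefficient from friction force versus normal load and converts the maximum load into a mean contact pressure. The measurement values and the linear (Amontons-like) fit are assumptions for illustration, not the study's data.

```python
# Hypothetical sketch: friction coefficient from friction force vs. normal load,
# and mean contact pressure from load and contact area, as measured in an
# SFA-type experiment. All values below are illustrative placeholders.
import numpy as np

normal_load_uN    = np.array([50.0, 100.0, 200.0, 400.0])   # applied loads (assumed)
friction_force_uN = np.array([0.24, 0.51, 0.98, 2.05])      # measured friction (assumed)

# Friction coefficient = slope of friction force vs. normal load (linear fit)
mu = np.polyfit(normal_load_uN, friction_force_uN, 1)[0]

contact_area_um2 = 160.0                               # from SFA contact optics (assumed)
pressure_MPa = normal_load_uN.max() / contact_area_um2  # uN / um^2 is numerically MPa

print(f"friction coefficient ~ {mu:.3f}, max mean pressure ~ {pressure_MPa:.1f} MPa")
```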
Procedia PDF Downloads 264
1205 Harsh Discipline and Later Disruptive Behavior Disorder in Two Contexts
Authors: Olga Santesteban, Glorisa Canino, Hector R. Bird, Cristiane S. Duarte
Abstract:
Objective: To address whether harsh discipline is associated with disruptive behavior disorders (DBD) in Puerto Rican children over time. Background: Both cross-sectional and longitudinal studies report that rates of DBD vary by gender, age, and other demographics, being more frequent among boys, later in life, and among those who live in urban areas. Also, the literature supports a direct, positive association between harsh discipline and externalizing behaviors. Nevertheless, scholars have underscored the important role of race and ethnicity in understanding the effects of discipline on children. The impact of harsh discipline in a Puerto Rican population remains to be studied. Methods: Sample: This is a secondary analysis of the Boricua Youth Study, which assessed Puerto Rican children aged 5-15 yearly (3 times) at two different sites: San Juan (Puerto Rico) and the South Bronx (NY), N=2951. Participants who did not have harsh discipline scores in all 3 waves were excluded from this analysis (N=2091). Main Measures: a) Harsh Discipline (parent report) was measured using 6 items from the Parental Discipline Scale, which measures various forms of punishment, including physical and verbal abuse and withholding affection; b) Disruptive Behavior Disorder (parent report): the parent version of the Diagnostic Interview Schedule for Children-IV (DISC-IV) was used to assess children's conduct disorders; c) Demographic factors: child gender, child age, family income, marital status; d) Parental factors: parental psychopathology, parental monitoring, familism, parent support; e) Child characteristics: controlling for any diagnosis at wave 1 (internalizing or externalizing). Data Analysis: Logistic regression was carried out relating the likelihood of DBD to harsh discipline across waves, controlling for potential confounders such as demographic, child, and parent characteristics. Results: There were no significant differences in harsh discipline by site in wave 1 and wave 2, but there was a significant difference in wave 3. Also, there were no significant differences in DBD by site in wave 1 and wave 2, but there was a significant difference in wave 3. There was a significant difference in discipline by gender and age in all waves. We calculated unadjusted (OR) and adjusted (AOR) odds ratios and 95% confidence intervals (95% CI) showing the relation between harsh discipline at wave 1 and the presence of child disruptive behavior disorder at wave 3 for both the South Bronx and Puerto Rico. There was an association between harsh discipline and the likelihood of having DBD in the Bronx (AOR=1.76; 95% CI=1.13-2.74, p=.013) and in Puerto Rico (AOR=2.17; 95% CI=1.28-3.67, p=.004), having controlled for demographic, parental, and individual factors. Conclusions: Context may be an important differential factor shaping the potential risk of harsh discipline for DBD in Puerto Rican children.
Keywords: disruptive behavior disorders, harsh discipline, Puerto Rican, psychological education
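A hedged sketch of the kind of adjusted logistic regression reported above: the wave-3 DBD outcome is regressed on wave-1 harsh discipline plus covariates, and the adjusted odds ratio with its 95% CI is read from the exponentiated coefficient. The simulated data frame and variable names are assumptions, not the Boricua Youth Study data.

```python
# Hedged sketch: adjusted odds ratio (AOR) for wave-3 DBD as a function of
# wave-1 harsh discipline, controlling for covariates. The data are simulated
# and the variable names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2091
df = pd.DataFrame({
    "harsh_w1":  rng.normal(0, 1, n),       # standardized harsh-discipline score
    "male":      rng.integers(0, 2, n),
    "age":       rng.integers(5, 16, n),
    "any_dx_w1": rng.integers(0, 2, n),     # any wave-1 diagnosis (int/ext)
})
# Simulated outcome with a built-in positive effect of harsh discipline
logit = 0.55 * df.harsh_w1 + 0.3 * df.male + 0.8 * df.any_dx_w1 - 3.0
df["dbd_w3"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = smf.logit("dbd_w3 ~ harsh_w1 + male + age + any_dx_w1", data=df).fit(disp=False)
aor = np.exp(model.params["harsh_w1"])
ci_low, ci_high = np.exp(model.conf_int().loc["harsh_w1"])
print(f"AOR = {aor:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```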
Procedia PDF Downloads 474