Search results for: individual entrepreneurial behavior
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10569


2019 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density

Authors: Lalit Kumar, Rashid Al Shidi

Abstract:

Phoenix dactylifera (date palm) is a major crop in many Middle Eastern countries, including Oman. The Dubas bug, Ommatissus lybicus, is the main pest affecting date palm crops, yet not all plantations are infested, and it is still uncertain why some plantations become infested while others do not. This research investigated whether tree density and the planting system (random versus systematic) had any relationship with the occurrence and level of infestation. Remote sensing and geographic information systems were used to determine tree density (number of trees per unit area), while infestation levels were determined by manually counting insects on 40 leaflets from two fronds on each tree, with a total of 20–60 trees in each village. Infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes, with eight bands and 2 m spatial resolution, were used. The local maxima method, which locates the pixel of highest brightness within a search window, was used to identify and delineate individual trees in the image. This information was then used to determine whether the plantation was random or systematic. Ordinary least squares (OLS) regression was used to test the global correlation between tree density and infestation level, and geographically weighted regression (GWR) was used to find the local spatial relationship. The accuracy of tree detection varied from 83–99% in agricultural lands with systematic planting patterns to 50–70% in natural forest areas. Results revealed that tree density in most of the villages was higher than the recommended planting density (120–125 trees/hectare). For infestation correlations, the GWR model showed a strong positive significant relationship between infestation and tree density in the spring season, with R² = 0.60, and a moderate positive significant relationship in the autumn season, with R² = 0.30.
In contrast, the OLS model showed a much weaker positive significant relationship in the spring season, with R² = 0.02, p < 0.05, and an insignificant relationship in the autumn season, with R² = 0.01, p > 0.05. The results showed a positive correlation between infestation and tree density, which suggests that infestation severity increased as the density of date palm trees increased. The GWR result indicates that density alone explained about 60% of the variation in infestation. This information can be used by the relevant authorities to better control infestations and to manage their pesticide spraying programs.
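As a rough illustration of the global OLS step, the coefficient of determination reported above can be computed from paired density and infestation values. The data below are hypothetical stand-ins, not the study's village measurements:

```python
import numpy as np

# Hypothetical per-village data: tree density (trees/ha) and mean insects per leaflet.
density = np.array([110.0, 130.0, 150.0, 170.0, 190.0, 210.0])
infestation = np.array([0.8, 1.1, 1.9, 2.2, 3.1, 3.4])

# Global OLS fit: infestation = b0 + b1 * density.
X = np.column_stack([np.ones_like(density), density])
beta, *_ = np.linalg.lstsq(X, infestation, rcond=None)

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
residuals = infestation - X @ beta
r2 = 1 - (residuals @ residuals) / np.sum((infestation - infestation.mean()) ** 2)
print(round(float(r2), 3))
```

A GWR analysis would instead fit a separate, spatially weighted regression at each village location, which is why it can capture local relationships that the single global OLS fit misses.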

Keywords: dubas bug, date palm, tree density, infestation levels

Procedia PDF Downloads 193
2018 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty

Authors: Ammar Y. Alqahtani

Abstract:

In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated from the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste sent to landfills. Moreover, consumers’ growing environmental awareness has forced original equipment manufacturers to become more environmentally conscious. Manufacturers have therefore considered different ways of dealing with EOL products, viz., remanufacturing, reusing, recycling, or disposal. By remanufacturing, reusing, or recycling EOL products, manufacturers can reduce both the depletion of virgin natural resources and their dependency on them, while also cutting the amount of harmful waste sent to landfills. Disposal of EOL products contributes to the problem and is therefore used as a last resort. The number of EOL products needed must be estimated in order to fulfill the demand for components. A disassembly process is then performed to extract individual components and subassemblies. Smart products are built with embedded sensors and network connectivity that enable the collection and exchange of data; the sensors are implanted into products during production. These sensors allow remanufacturers to predict an optimal warranty policy and the time period that should be offered to customers who purchase remanufactured components and products. Sensor-provided data can help evaluate the overall condition of a product, as well as the remaining lives of its components, prior to performing disassembly. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes the different system uncertainties into consideration. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods.
A DTO system is considered in which a variety of EOL products are purchased for disassembly. The model’s main objective is to determine the best combination of EOL products to be purchased from every supplier in each period that maximizes the total profit of the system while satisfying the demand. The paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, a case study involving various simulation conditions is presented and analyzed to illustrate the applicability of the model.
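A minimal single-period sketch of the DTO purchasing decision follows. The products, component yields, costs, and demands are toy assumptions (none of these figures come from the paper, which solves a multi-period NLP); brute-force enumeration is used only because this instance is tiny:

```python
from itertools import product

# Hypothetical data: two EOL products, each yielding components (a, b) on disassembly.
yields = {"P1": (2, 1), "P2": (1, 3)}   # components (a, b) recovered per unit
cost = {"P1": 8.0, "P2": 10.0}          # purchase + disassembly cost per unit
price = {"a": 4.0, "b": 5.0}            # resale price per component
demand = {"a": 10, "b": 12}             # component demand this period

best = None
for q1, q2 in product(range(21), repeat=2):
    a = 2 * q1 + q2   # total component-a yield from q1 units of P1 and q2 of P2
    b = q1 + 3 * q2   # total component-b yield
    if a < demand["a"] or b < demand["b"]:
        continue  # demand must be satisfied
    # Revenue counts only demanded components; surplus is assumed unsold here.
    revenue = demand["a"] * price["a"] + demand["b"] * price["b"]
    profit = revenue - (q1 * cost["P1"] + q2 * cost["P2"])
    if best is None or profit > best[0]:
        best = (profit, q1, q2)

print(best)  # (max profit, units of P1, units of P2)
```

The full model replaces this enumeration with an NLP solver and adds multiple periods, multiple suppliers, and the uncertainty terms described above.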

Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics

Procedia PDF Downloads 137
2017 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenge of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), K-means, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and their performance is evaluated to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
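A minimal sketch of the K-means step discussed above, applied to a synthetic expression matrix. The data, cluster count, and deterministic initialization are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix: 40 genes x 5 samples, two groups with distinct profiles.
group1 = rng.normal(loc=0.0, scale=0.3, size=(20, 5))
group2 = rng.normal(loc=3.0, scale=0.3, size=(20, 5))
X = np.vstack([group1, group2])

def kmeans(data, k, iters=50):
    # Deterministic initialization (evenly spaced data points) for reproducibility;
    # in practice a scheme such as k-means++ would be used.
    centroids = data[:: len(data) // k][:k].astype(float)
    for _ in range(iters):
        # Assign each gene to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned genes.
        centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X, k=2)
print(np.bincount(labels))
```

Evaluating such algorithms on expression data then reduces to comparing cluster assignments against known sample or gene classes, along with convergence behavior and run time, as the abstract describes.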

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 323
2016 Tip-Apex Distance as a Long-Term Risk Factor for Hospital Readmission Following Intramedullary Fixation of Intertrochanteric Fractures

Authors: Brandon Knopp, Matthew Harris

Abstract:

Purpose: Tip-apex distance (TAD) has long been discussed as a metric for determining the risk of failure in the fixation of peritrochanteric fractures. TAD measurements over 25 millimeters (mm) have been associated with higher rates of screw cut-out and other complications in the first several months after surgery. However, there is limited evidence for the efficacy of this measurement in predicting the long-term risk of negative outcomes following hip fixation surgery. The purpose of our study was to investigate risk factors, including TAD, for hospital readmission, loss of pre-injury ambulation, and development of complications within 1 year after hip fixation surgery. Methods: A retrospective review of proximal hip fractures treated with single-screw intramedullary devices between 2016 and 2020 was performed at a 327-bed regional medical center. Patients included had a postoperative follow-up of at least 12 months or surgery-related complications developing within that time. Results: 44 of the 67 patients in this study met the inclusion criteria with adequate follow-up post-surgery. There were 10 males (22.7%) and 34 females (77.3%) meeting inclusion criteria, with a mean age of 82.1 (± 12.3) years at the time of surgery. The average TAD in our study population was 19.57 mm, and the 1-year readmission rate was 15.9%. 3 out of 6 patients (50%) with a TAD > 25 mm were readmitted within one year due to surgery-related complications. In contrast, 3 out of 38 patients (7.9%) with a TAD < 25 mm were readmitted within one year due to surgery-related complications (p = 0.0254). Individual TAD measurements, averaging 22.05 mm in patients readmitted within 1 year of surgery and 19.18 mm in patients not readmitted, were not significantly different between the two groups (p = 0.2113).
Conclusions: Our data indicate a significantly lower hospital readmission rate up to one year after hip fixation surgery in patients with a TAD < 25 mm, a decrease in readmissions of over 40 percentage points (50% vs. 7.9%). This result builds upon past investigations by extending the follow-up time to 1 year after surgery and using hospital readmission as a metric for surgical success. Given the well-documented physical and financial costs of hospital readmission after hip surgery, our study highlights keeping TAD below 25 mm as an effective method of improving patient outcomes and reducing financial costs to patients and medical institutions. No relationship was found between TAD measurements and secondary outcomes, including loss of pre-injury ambulation and development of complications.
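The reported p = 0.0254 for the 3/6 vs. 3/38 readmission split is consistent with a Fisher exact test on the 2×2 table; the abstract does not name the test, so that identification is our assumption. A stdlib-only sketch:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value: sum the hypergeometric probabilities of
    every table with the same margins that is no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):
        # P(x successes in a sample of row1 from col1 successes among n items).
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Rows: TAD > 25 mm (3 readmitted, 3 not) and TAD < 25 mm (3 readmitted, 35 not).
p = fisher_exact_2x2(3, 3, 3, 35)
print(round(p, 4))
```

With such small cell counts (3 readmissions per group), an exact test is the appropriate choice over a chi-squared approximation.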

Keywords: hip fractures, hip reductions, readmission rates, open reduction internal fixation

Procedia PDF Downloads 145
2015 An Interactive User-Oriented Approach to Optimizing Public Space Lighting

Authors: Tamar Trop, Boris Portnov

Abstract:

Public Space Lighting (PSL) of outdoor urban areas promotes comfort, defines spaces and neighborhood identities, enhances perceived safety and security, and contributes to residential satisfaction and wellbeing. However, if excessive or misdirected, PSL leads to unnecessary energy waste and increased greenhouse gas emissions, poses a non-negligible threat to the nocturnal environment, and may become a potential health hazard. At present, PSL is designed according to international, regional, and national standards, which consolidate best practice. Yet knowledge regarding the optimal light characteristics needed for creating a perception of personal comfort and safety in densely populated residential areas, and the factors associated with this perception, is still scarce. The presented study suggests a paradigm shift in designing PSL towards a user-centered approach, which incorporates pedestrians' perspectives into the process. The study is an ongoing joint research project between the Chinese and Israeli Ministries of Science and Technology. Its main objectives are to reveal inhabitants' perceptions of and preferences for PSL in different densely populated neighborhoods in China and Israel, and to develop a model that links instrumentally measured parameters of PSL (e.g., intensity, spectrum, and glare) with its perceived comfort and quality, while controlling for three groups of attributes: locational, temporal, and individual. To investigate measured and perceived PSL, the study employed various research methods and data collection tools, developed a location-based mobile application, and used multiple data sources, such as satellite multi-spectral night-time light imagery, census statistics, and detailed planning schemes. One of the study’s preliminary findings is that a higher sense of safety in the investigated neighborhoods is not associated with higher levels of light intensity, which implies potential for energy saving in brightly illuminated residential areas.
Study findings might contribute to the design of a smart and adaptive PSL strategy that enhances pedestrians’ perceived safety and comfort while reducing light pollution and energy consumption.

Keywords: energy efficiency, light pollution, public space lighting, PSL, safety perceptions

Procedia PDF Downloads 133
2014 Electrophoretic Deposition of Ultrasonically Synthesized Nanostructured Conducting Poly(o-phenylenediamine)-Co-Poly(1-naphthylamine) Film for Detection of Glucose

Authors: Vaibhav Budhiraja, Chandra Mouli Pandey

Abstract:

The ultrasonic synthesis of nanostructured conducting copolymers is an effective technique for producing polymers with desired chemical properties. The tailored nanostructure shows tremendous improvement in sensitivity and stability for detecting a variety of analytes. The present work reports ultrasonically synthesized nanostructured conducting poly(o-phenylenediamine)-co-poly(1-naphthylamine) (POPD-co-PNA). The synthesized material was characterized using Fourier transform infrared (FTIR) spectroscopy, ultraviolet-visible spectroscopy, transmission electron microscopy, X-ray diffraction, and cyclic voltammetry. FTIR spectroscopy confirmed random copolymerization, while UV-visible studies revealed the variation in polaronic states upon copolymerization. High crystallinity was achieved via ultrasonic synthesis, as confirmed by X-ray diffraction, and the controlled morphology of the nanostructures was confirmed by transmission electron microscopy. Cyclic voltammetry showed that POPD-co-PNA has rather high electrochemical activity, a behavior explained on the basis of the variable orientations adopted by the conducting polymer chains. The synthesized material was electrophoretically deposited onto an indium tin oxide-coated glass substrate used as the cathode, with a parallel platinum plate as the counter electrode. The fabricated bioelectrode was then used for the detection of glucose by crosslinking glucose oxidase in the POPD-co-PNA film. The bioelectrode shows a surface-controlled electrode reaction with an electron transfer coefficient (α) of 0.72, a charge transfer rate constant (ks) of 21.77 s⁻¹, and a diffusion coefficient of 7.354 × 10⁻¹⁵ cm² s⁻¹.

Keywords: conducting, electrophoretic, glucose, poly(o-phenylenediamine), poly(1-naphthylamine), ultrasonic

Procedia PDF Downloads 142
2013 Numerical Aeroacoustics Investigation of Eroded and Coated Leading Edge of NACA 64-618 Airfoil

Authors: Zeinab Gharibi, B. Stoevesandt, J. Peinke

Abstract:

Long-term surface erosion of wind turbine blades, especially at the leading edge, impairs aerodynamic performance and therefore lowers the efficiency of the blades, mostly in the high-speed rotor tip regions. Blade protection provides significant improvements in annual energy production, reduces costly downtime, and protects the integrity of the blades. However, this protection still influences the aerodynamic behavior and the broadband noise caused by the interaction between the impinging turbulence and the blade’s leading edge. This paper presents an extensive numerical aeroacoustics approach, investigating the sound power spectra of the eroded and coated NACA 64-618 wind turbine airfoil and evaluating the aeroacoustic improvements after the protection procedure. Using computational fluid dynamics (CFD), different quasi-2D numerical grids were implemented, and special attention was paid to the refinement of the boundary layers. The noise sources were captured and decoupled from the acoustic propagation via the derived formulation of Curle’s analogy implemented in OpenFOAM. The noise spectra were then compared for clean, coated, and eroded profiles over a range of chord-based Reynolds numbers (1.6 × 10⁶ ≤ Re ≤ 11.5 × 10⁶), with the angle of attack held at zero in all cases. Verification was conducted for the clean profile using available experimental data, and sensitivity studies for the far field were performed at different observer positions. Furthermore, beamforming studies were carried out by simulating an Archimedean spiral microphone array to obtain far-field noise directivity patterns. Comparing the noise spectra of the coated and eroded geometries, the results show that coating clearly improves the aerodynamic and acoustic performance of the eroded airfoil.

Keywords: computational fluid dynamics, computational aeroacoustics, leading edge, OpenFOAM

Procedia PDF Downloads 223
2012 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises

Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska

Abstract:

Research background: Problems related to occupational health and declining safety levels are common in the construction industry, and scaffold use is an important factor in construction safety. All scaffolds used in construction, renovation, and demolition shall be erected, dismantled, and maintained in accordance with safety procedures. Increasing demand for new construction projects is unfortunately still linked to a high level of occupational accidents. It is therefore crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; how an assessment is carried out, and how reliable it is, are critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when making risk assessment decisions, may fail to reliably check the results of those decisions. Purpose of the article: The aim was to identify crucial parameters that could be modeled with a Risk Assessment Model (RAM) to improve both the productivity and development potential of building enterprises and their safety climate. The developed RAM could help predict high-risk construction activities, and thus prevent accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. This project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, under which over 450 individual research cycles were carried out. The research trials covered variable conditions of employee exposure to harmful physical and chemical factors, variable employee stress levels, and differences in staff behaviors and habits.
A genetic modeling tool was used to develop the RAM. Findings and value added: Common types of trades, accidents, and accident causes were explored, in addition to suitable risk assessment methods and criteria. We found that the initial worker stress level is a more direct predictor of the unsafe chain of events leading to an accident than the workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.

Keywords: safety climate, occupational health, civil engineering, productivity

Procedia PDF Downloads 318
2011 Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana

Authors: Jemima C. A. Sumboh

Abstract:

Given the urgency of, and consensus for, countries moving towards Universal Health Coverage (UHC), health financing systems need to be accurately and consistently monitored to provide valuable data to inform policy and practice. Most of the indicators for monitoring UHC, particularly catastrophe and impoverishment, are established based on the impact of out-of-pocket health payments (OOPHP) on households’ living standards, collected through varied household surveys. These surveys, however, vary substantially in survey methods, such as the length of the recall period, the number of items included in the questionnaire, or the framing of questions, potentially influencing the measured level of OOPHP. Using different survey instruments can produce inaccurate, inconsistent, erroneous, and misleading estimates of UHC, subsequently leading to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study explores the potential implications of using surveys with varied levels of disaggregation of OOPHP data on estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measuring instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed by benchmarking the existing Classification of Individual Consumption According to Purpose (COICOP) OOPHP questions in household surveys), and Version III (existing questions used to measure OOPHP in health surveys, integrated into household budget surveys; for this, the Demographic and Health Survey (DHS) was used). Versions I, II, and III contained 11, 44, and 56 health items, respectively; the choice of recall periods was held constant across versions. The sample sizes for Versions I, II, and III were 930, 1032, and 1068 households, respectively.
Financial risk protection will be measured for each version based on the catastrophic payment and impoverishment methodologies, using Stata 15 and ADePT software. It is expected that the findings will make a valuable contribution to the repository of knowledge on standardizing survey instruments so as to obtain valid and consistent estimates of financial risk protection.
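To illustrate the kind of indicator involved, a sketch of a catastrophic payment headcount follows. The households and the 10% budget-share threshold are hypothetical assumptions for illustration; the study's actual computations use Stata and ADePT, and thresholds vary by methodology:

```python
# Hypothetical households: total consumption expenditure and out-of-pocket
# health payments (OOPHP), in the same currency units.
households = [
    {"total": 1000.0, "oop": 30.0},
    {"total": 800.0,  "oop": 120.0},
    {"total": 1200.0, "oop": 250.0},
    {"total": 600.0,  "oop": 20.0},
]

def catastrophic_headcount(households, threshold=0.10):
    """Share of households whose OOPHP exceeds `threshold` of total expenditure."""
    flags = [h["oop"] / h["total"] > threshold for h in households]
    return sum(flags) / len(flags)

print(catastrophic_headcount(households))
```

Because the headcount depends directly on the OOPHP amounts captured, survey instruments that elicit different OOPHP levels (as Versions I–III may) would yield different catastrophe estimates from the same underlying households, which is precisely the concern the study addresses.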

Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage

Procedia PDF Downloads 137
2010 Synthesis of Porphyrin-Functionalized Beads for Flow Cytometry

Authors: William E. Bauta, Jennifer Rebeles, Reggie Jacob

Abstract:

Porphyrins are noteworthy in biomedical science for their accumulation in cancer tissue and their photophysical properties. The preferential accumulation of some porphyrins in cancerous tissue has been known for many years. This, combined with their characteristic photophysical and photochemical properties, including their strong fluorescence and their ability to generate reactive oxygen species in vivo upon laser irradiation, has led to much research into the application of porphyrins as cancer diagnostic and therapeutic agents. Porphyrins have been used as dyes to detect cancer cells both in vivo and, less commonly, in vitro. In one example, human sputum samples from lung cancer patients and patients without the disease were dissociated and stained with the porphyrin TCPP (5,10,15,20-tetrakis-(4-carboxyphenyl)-porphine), and cells were analyzed by flow cytometry. Cancer samples were identified by their higher TCPP fluorescence intensity relative to the no-cancer controls. However, quantitative analysis of fluorescence in cell suspensions stained with multiple fluorophores requires particles stained with each of the individual fluorophores as controls. Fluorescent control particles must be compatible in size with flow cytometer fluidics and have favorable hydrodynamic properties in suspension. They must also display fluorescence comparable to the cells of interest and be stable upon storage. Amine-functionalized spherical polystyrene beads in the 5 to 20-micron diameter range were reacted with TCPP and EDC in aqueous pH 6 buffer overnight to form amide bonds. Beads were isolated by centrifugation and tested by flow cytometry. The 10-micron amine-functionalized beads displayed the best combination of fluorescence intensity and hydrodynamic properties, such as lack of clumping and remaining in suspension during the experiment. These beads were further optimized by varying the stoichiometry of EDC and TCPP relative to the amine.
The reaction was accompanied by the formation of a TCPP-related particulate, which was removed after bead centrifugation using a microfiltration process. The resulting TCPP-functionalized beads were compatible with flow cytometry conditions and displayed fluorescence comparable to that of stained cells, which allowed their use as fluorescence standards. The beads were stable in refrigerated storage in the dark for more than eight months. This work demonstrates the first preparation of porphyrin-functionalized flow cytometry control beads.

Keywords: tetraaryl porphyrin, polystyrene beads, flow cytometry, peptide coupling

Procedia PDF Downloads 93
2009 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is "intimate" and "safe" and what is not, in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as from the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, applicable to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear and evidenced shift in users’ perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of social media in this shift was catalytic. (2) Basic Web 2.0 applications changed the nature of the Internet itself dramatically, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the "audience" we address in interpersonal communication.
The previous "general and unknown audience" of personal home pages has been converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed in a fundamental way. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advent of Web 2.0, lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 371
2008 Patient Experience in a Healthcare Setting: How Patients' Encounters Make for Better Value Co-creation

Authors: Kingsley Agyapong

Abstract:

Research conducted in recent years has delved into the concept of patient-perceived value within the context of co-creation, particularly in the realm of doctor-patient interactions within healthcare settings. However, existing scholarly discourse lacks exploration regarding the emergence of patient-derived value in the co-creation process, specifically within encounters involving patients and stakeholders such as doctors, nurses, pharmacists, and other healthcare professionals. This study aims to fill this gap by elucidating the perspectives of patients regarding the value they derive from their interactions with multiple stakeholders in the delivery of healthcare services. The fieldwork was conducted at a university clinic located in Ghana. Data collection procedures involved conducting 20 individual interviews with key informants on distinct value accrued from co-creation practices and interactions with stakeholders. The key informants consisted of patients receiving care at the university clinic during the Malaria Treatment Process. Three themes emerged from both the existing literature and the empirical data collected. The first theme, labeled as "patient value needs in co-creation," encapsulates elements such as communication effectiveness, interpersonal interaction quality, treatment efficacy, and enhancements to the overall quality of life experienced by patients during their interactions with healthcare professionals. The second theme, designated as "services that enhance patients' experience in value co-creation," pertains to patients' perceptions of services that contribute favourably to co-creation experiences, including initiatives related to health promotion and the provision of various in-house services that patients deem pertinent for augmenting their overall experiences. 
The third theme, titled "challenges in the co-creation of patients' value," delineates obstacles encountered within the co-creation process, including health professionals' difficulties in effectively following up with patients scheduled for review and prolonged waiting times for healthcare delivery. This study contributes to the understanding of patients' perceptions of value within the co-creation process during their interactions with service providers, particularly healthcare professionals. By gaining deeper insight into this process, healthcare providers can enhance the delivery of patient-centered care, leading to improved healthcare outcomes. The study further offers managerial implications derived from its findings, providing actionable insights for healthcare managers and policymakers aiming to optimize patient value creation in healthcare services, and suggests avenues for future research within healthcare settings.

Keywords: patient, healthcare, co-creation, malaria

Procedia PDF Downloads 47
2007 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups

Authors: Sakshi Bhalla

Abstract:

On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded via a uniform two-part questionnaire in which they were required to (a) identify the meaning of 15 emoji placed in isolation, and (b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. Each contextual response was compared both with the respondent's own reading of the emoji in isolation and with the emoji's originally intended meaning. Analysis of these results showed that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to their degree of exposure. For example, the youngest category (aged < 20) was the least accurate at identifying emoji in isolation (~55%), and its proclivity to change responses with context was also the lowest (~31%). However, an analysis of their individual responses suggests that these first-borns of social media have reached a point where emoji no longer evoke their most literal meanings: the meaning and implication of these emoji have evolved to their context-derived senses, even when the emoji are placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and, therefore, a higher proclivity to change responses. Studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms into ideograms.
That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning. In fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within language is parallel to and contingent on the simultaneous evolution of the platforms that carry communication. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one of the instances where language is adapting to the demands of the digital world. That it does not have a spoken component or an ostensible grammar, and that it lacks standardization of use and meaning, may seem, as some might suggest, like impediments to qualifying it as the 'language' of the digital world. However, that kind of declaration remains a function of time, and time alone.

Keywords: communication, emoji, language, Twitter

Procedia PDF Downloads 95
2006 Antigen Stasis can Predispose Primary Ciliary Dyskinesia (PCD) Patients to Asthma

Authors: Nadzeya Marozkina, Joe Zein, Benjamin Gaston

Abstract:

Introduction: We have observed that many patients with Primary Ciliary Dyskinesia (PCD) benefit from asthma medications. In healthy airways, ciliary function is normal. Antigens and irritants are rapidly cleared, and NO enters the gas phase normally to be exhaled. In the PCD airways, however, antigens, such as Dermatophagoides, are not as well cleared. This defect leads to oxidative stress, marked by increased DUOX1 expression and decreased superoxide dismutase [SOD] activity (manuscript under revision). H₂O₂, in high concentrations in the PCD airway, injures the airway. NO is oxidized rather than being exhaled, forming cytotoxic peroxynitrous acid. Thus, antigen stasis on the PCD airway epithelium leads to airway injury and may predispose PCD patients to asthma. Indeed, recent population genetics suggest that PCD genes may be associated with asthma. We therefore hypothesized that PCD patients would be predisposed to having asthma. Methods. We analyzed our database of 18 million individual electronic medical records (EMRs) in the Indiana Network for Patient Care research database (INPCR). There is no ICD10 code for PCD itself; code Q34.8 is most commonly used clinically. To validate analysis of this code, we queried patients who had an ICD10 code for both bronchiectasis and situs inversus totalis in INPCR. We also studied a validation cohort using the IBM Explorys® database (over 80 million individuals). Analyses were adjusted for age, sex and race using a 1 PCD : 3 controls matching method in INPCR and multivariable logistic regression in the IBM Explorys® database. Results. The prevalence of asthma ICD10 codes in subjects with code Q34.8 was 67% vs 19% in controls (P < 0.0001) (Regenstrief Institute). Similarly, in IBM Explorys®, the OR [95% CI] for having asthma if a patient also had ICD10 code Q34.8, relative to controls, was 4.04 [3.99; 4.09]. 
For situs inversus alone, the OR [95% CI] was 4.42 [4.14; 4.71]; for bronchiectasis alone, the OR [95% CI] was 10.68 [10.56; 10.79]. For both bronchiectasis and situs inversus together, the OR [95% CI] was 28.80 [23.17; 35.81]. Conclusions: PCD causes antigen stasis in the human airway (under review), likely predisposing patients to asthma in addition to causing oxidative and nitrosative stress and airway injury. Here, we show that, by several different population-based metrics, and using two large databases, patients with PCD appear to have between a three- and 28-fold increased risk of having asthma. These data suggest that additional studies should be undertaken to understand the role of ciliary dysfunction in the pathogenesis and genetics of asthma. Decreased antigen clearance caused by ciliary dysfunction may be a risk factor for asthma development.
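The odds ratios reported above follow from standard 2×2 contingency-table arithmetic. As a minimal sketch of that calculation (the counts below are invented for illustration, not the INPCR or IBM Explorys® data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# invented counts: 20/100 exposed vs 10/100 unexposed develop the outcome
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)  # OR = (20*90)/(80*10) = 2.25
```

Applied to the real case/control counts, the same arithmetic would reproduce ORs and Wald-type confidence intervals of the kind reported above.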

Keywords: antigen, PCD, asthma, nitric oxide

Procedia PDF Downloads 106
2005 Navigating through Uncertainty: An Explorative Study of Managers’ Experiences in China-foreign Cooperative Higher Education

Authors: Qian Wang, Haibo Gu

Abstract:

To drive practical interpretations and applications of various policies in building transnational education joint ventures, middle managers learn to navigate through uncertainties and ambiguities. However, the current literature says very little about those middle managers' experiences, perceptions, and practices. This paper takes an empirical approach and aims to uncover the middle managers' experiences by conducting interviews, campus visits, and document analysis. Following a qualitative research method approach, the researchers gathered information from a mixture of fourteen foreign and Chinese managers. Their perceptions of China-foreign cooperation in higher education and their perceived roles offered important, valuable insights into this group's attitudes and management performance. The diverse cultural and demographic backgrounds contributed to the significance of the study. There are four key findings. One, middle managers' immediate micro-contexts and individual attitudes are the top two influential factors in managers' performances. Two, the foreign middle managers showed a stronger sense of self-identity in risk-taking. Three, the Chinese middle managers preferred to see difficulties as part of their assigned responsibilities. Four, middle managers in independent universities demonstrated a stronger sense of belonging and fewer frustrations than middle managers in secondary institutes. The researchers propose that training for managers in a transnational educational setting should consider these findings when selecting topics and content. In particular, middle managers should be better prepared to anticipate their everyday jobs in the micro-environment; hence, information concerning the sponsor organizations' working culture is as essential as knowing the national and local regulations and socio-culture. Different case studies can help managers to recognize and celebrate the diversity in transnational education. 
Situational stories can help them become aware of the diverse and wide range of work contexts so that they will not feel left alone when facing challenges without relevant previous experience or training. Though this research is a case study based in the Chinese transnational higher education setting, the implications could be relevant to other transnational higher education situations and help to continue expanding the potential applications in this field.

Keywords: educational management, middle manager performance, transnational higher education

Procedia PDF Downloads 163
2004 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. 
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
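The two accuracy figures reported (overall accuracy and Cohen's kappa) both derive from the classification's confusion matrix. A minimal pure-Python sketch, with an invented 3-class matrix standing in for the study's actual validation data:

```python
def accuracy_metrics(cm):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference classes, columns = predicted classes)."""
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(len(cm)))
    oa = diag / n  # overall accuracy: share of correctly classified samples
    # chance agreement estimated from the row/column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))) / n ** 2
    return oa, (oa - pe) / (1 - pe)

# invented 3-class confusion matrix for illustration
cm = [[50, 2, 3],
      [4, 40, 6],
      [1, 5, 39]]
oa, kappa = accuracy_metrics(cm)  # OA = 129/150 = 0.86
```

Because kappa discounts chance agreement, it is always at or below OA, which is why the reported Kappa values (0.87-0.88) sit slightly under the 90-91% OA.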

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 63
2003 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data

Authors: Minjuan Sun

Abstract:

Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, in which non-bank lending has grown faster than bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during credit evaluation and risk control; for example, an individual’s bank credit records are not visible to online lenders, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumption loan market. Two different types of digital footprint data are utilized to match with banks’ loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find both datasets can generate either acceptable or excellent prediction results, and different types of data tend to complement each other for better performance. 
The traditional types of data banks normally use, such as income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, such as a deterioration in financial status caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower’s credit capabilities and risks. From this empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage, and because their much larger volume and higher frequency can by and large resolve the "thin-file" issue.
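As a toy illustration of the canonical default-prediction setup discussed above, the sketch below fits a logistic regression by gradient descent on two invented "digital footprint" features; the feature names, data, and model choice are assumptions for illustration, not the paper's actual datasets or methods:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights and bias by per-sample gradient descent on the log-loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# invented features: [late-night purchase share, on-time bill share]; y = 1 means default
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)

score = lambda x: sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
p_risky, p_safe = score([0.85, 0.15]), score([0.15, 0.85])  # default probabilities
```

In practice, a comparative study like the one described would benchmark several such models (logistic regression, tree ensembles, etc.) on the matched footprint-default records.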

Keywords: credit score, digital footprint, Fintech, machine learning

Procedia PDF Downloads 161
2002 FE Modelling of Structural Effects of Alkali-Silica Reaction in Reinforced Concrete Beams

Authors: Mehdi Habibagahi, Shami Nejadi, Ata Aminfar

Abstract:

A significant degradation factor that impacts the durability of concrete structures is the alkali-silica reaction (ASR). Engineers are frequently faced with the challenge of conducting a thorough safety assessment of concrete structures that have been affected by ASR. The alkali-silica reaction has a major influence on the structural capacity of structures. In most cases, the reduction in compressive strength, tensile strength, and modulus of elasticity is expressed as a function of free expansion and crack widths. Predicting the effect of ASR on flexural strength is also relevant. In this paper, a nonlinear three-dimensional (3D) finite-element model is proposed to describe the flexural strength degradation induced by ASR. Initial strains, initial stresses, initial cracks, and deterioration of material characteristics were all considered as ASR factors in this model. The effects of ASR on structural performance were evaluated by focusing on initial flexural stiffness, force–deformation curve, and load-carrying capacity. Degradation of concrete mechanical properties was correlated with ASR growth using material test data obtained at Tech Lab, UTS, and implemented into the FEM for various expansions. The finite element study yielded a better understanding of the ASR-affected RC beam's failure mechanism and capacity reduction as a function of ASR expansion. Furthermore, in this study, the decrease in residual mechanical properties due to ASR is reviewed and used as input data for the FEM model. Finally, analysis techniques and a comparison of the analysis and experimental results are discussed. Verification is also provided through analyses of reinforced concrete beams with behavior governed by either flexural or shear mechanisms.
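To make concrete how degraded material properties feed the FE model as inputs, the toy sketch below reduces the concrete modulus with ASR expansion and propagates the loss into the flexural stiffness EI of a rectangular section; the decay curve and all numbers are invented placeholders, not the Tech Lab (UTS) test data:

```python
import math

def degraded_modulus(E0, expansion_pct, k=1.0):
    """Placeholder decay law: elastic modulus drops as ASR expansion grows.
    Both the exponential form and k are assumptions for illustration."""
    return E0 * math.exp(-k * expansion_pct)

def flexural_stiffness(E, b, h):
    """EI of a rectangular section of width b and depth h (second moment b*h^3/12)."""
    return E * (b * h ** 3) / 12.0

E0 = 30e9  # Pa, assumed modulus of sound concrete
EI_sound = flexural_stiffness(E0, 0.3, 0.5)
EI_asr = flexural_stiffness(degraded_modulus(E0, 0.3), 0.3, 0.5)  # at 0.3% expansion
```

An actual FE model would apply such expansion-dependent property curves element by element, alongside the initial strains, stresses, and cracks listed above.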

Keywords: alkali-silica reaction, analysis, assessment, finite element, nonlinear analysis, reinforced concrete

Procedia PDF Downloads 159
2001 For a Poetic Clinic: Experimentations at Risk on the Images in Performances

Authors: Juliana Bom-Tempo

Abstract:

The proposed composition occurs between images, performances, clinics, and philosophies. For this enterprise, we depart from what is not known beforehand, with a question as a compass: "would there be, in the creation, production and implementation of images in a performance, a 'when' for the event of a poetic clinic?" In light of this, there are, in order to think a 'when' of the event of a poetic clinic, images in performances created, produced and executed in partnership with the author of this text. Faced with this composition, we built four indicators to find spatiotemporal coordinates that would spot that "when", namely: risk zones; the mobilization of signs; the figuring of the flesh; and an education of the affections. We dealt with the images in performances Crútero; Flesh; Karyogamy and the risk of abortion; Egg white; Egg-mouth; Islands, threads, words ... germs; and Egg-Mouth-Debris, taken as case studies: by engendering risk areas to promote individuations, which never actualize thoroughly, thus always leaving something pre-individual while also individuating an environment; by mobilizing the signs territorialized by the ordinary, causing them to vary the language and the words of order dictated by the everyday into other compositions of sense, other machinations; by generating a figure of flesh, disarranging the bodies, isolating them in the production of a ground force that causes the body to leak out and undo the functionalities of the organs; and, finally, by producing an education of affections, placing the perceptions in becoming and disconnecting the visible in the production of small deserts that call for the creation of a people yet to come. The performance is processed as a problematizing of the images fixed by the ordinary, producing gestures that precipitate the individuation of images in performance, strange to the configurations that gather bodies and spaces in what we call common. 
Lawrence proposes to think of "people" who continually use umbrellas to protect themselves from chaos. These have the function of wrapping up the chaos in visions that create houses, forms and stabilities; they paint a sky at the bottom of the umbrella, where people march and die. A chaos, where people live and wither. Pierce the umbrella for a desire of chaos; a poet puts himself as an enemy of the convention, to be able to have an image of chaos and a little sun that burns his skin. The images in performances presented, thereby, were moving in search for the power of producing a spatio-temporal "when" putting the territories in risk areas, mobilizing the signs that format the day-to-day, opening the bodies to a disorganization and the production of an education of affections for the event of a poetic clinic.

Keywords: experimentations, images in performances, poetic clinic, risk

Procedia PDF Downloads 114
2000 The Comparison of Personality Background of Volunteer and Non-Volunteer Subjects

Authors: Laszlo Dorner

Abstract:

Background: In the last few decades, there has been significant discussion among researchers of prosocial behavior about the extent to which personality characteristics matter in determining the quality and frequency of helping behaviors. Of these community activities, the most important is formal volunteering, which is mainly realized in civil services and organizations. Recently, many studies have appeared regarding the personality factors and motivations behind volunteering. Most of these studies found strong correlations between Agreeableness and Extraversion as global traits and both the time spent on volunteering and its frequency. Aims of research: In this research, we investigate the relation between formal volunteer activities and global traits in a Hungarian volunteer sample. We hypothesize that the results of previous research show the same pattern in Hungary as well: volunteering would be related to Agreeableness and Extraversion. We also assume that the time spent on volunteering is related to these traits, since they would serve as an indicator of long-term volunteering. Methods: We applied the Hungarian adaptation of the Big Five Questionnaire created by Caprara, Barbaranelli, and Borgogni. This self-report questionnaire contains 132 items and explores five main traits covering the person's most important emotional and motivational features. This research took into account the most important socio-economic factors (age, gender, religiosity, income) which can determine volunteer activities per se. The data were evaluated with SPSS 19.0 statistical software. Sample: 92 volunteers (formal, mainly volunteers of the Hungarian Red Cross and hospice organizations) and 92 non-volunteers, with subsamples matched by age, gender, and qualification. 
Results: The volunteer subsample shows higher values of Energy and significantly higher values of Agreeableness and Openness, however, regarding Conscientiousness and Emotional Stability the differences are not significant between the volunteer and non-volunteer subsamples.
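The group comparisons described rely on the independent-samples t statistic. A minimal sketch of the Welch form of that statistic (the score lists are invented, not the study's Big Five data):

```python
import math

def welch_t(x, y):
    """Welch's independent-samples t statistic (no equal-variance assumption)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / math.sqrt(vx / len(x) + vy / len(y))

# invented trait scores for two matched subsamples
volunteers = [78, 82, 75, 88, 90, 85]
non_volunteers = [70, 72, 68, 75, 74, 71]
t = welch_t(volunteers, non_volunteers)
```

A significance decision then compares t against the t distribution at the appropriate degrees of freedom, which is what SPSS reports alongside the statistic.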

Keywords: Big Five, comparative analysis, global traits, volunteering

Procedia PDF Downloads 350
1999 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices

Authors: Alena Kulikova, Tatjana Kanonire

Abstract:

Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. Thus, it has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and requires high-quality intelligence measurement tools. Therefore, it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-based testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the data obtained [7] and increase time costs. Another problem with measuring IQ is that classic linear-structured tests do not fully allow measuring a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools. 
However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made and straightforward solutions and requires additional research. In our presentation, we would like to discuss the strengths and weaknesses of the current approaches to intelligence measurement and highlight "points of growth" for creating a test in accordance with modern psychometrics. Is it possible to create an instrument that uses all the achievements of modern psychometrics while remaining valid and practically oriented? What would be the possible limitations of such an instrument? The theoretical framework and study design to create and validate an original Russian comprehensive computer test for measuring intellectual development in school-age children will be presented.
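One building block of the computerized adaptive and multistage designs mentioned above can be sketched in a few lines: under a Rasch model, the next item administered is the one most informative at the current ability estimate. The item difficulties and ability value below are invented for illustration:

```python
import math

def p_correct(theta, b):
    """Rasch probability of a correct response at ability theta, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    p = p_correct(theta, b)
    return p * (1 - p)  # maximal when item difficulty matches ability

def pick_next_item(theta, difficulties):
    """Maximum-information item selection from a bank of difficulties."""
    return max(difficulties, key=lambda b: item_information(theta, b))

bank = [-2.0, -1.0, 0.0, 0.5, 1.5]     # invented item difficulties (logits)
next_item = pick_next_item(0.4, bank)  # the item nearest theta = 0.4 wins
```

This is why adaptive designs need fewer items than linear tests: respondents never waste time on items far from their ability level, where little information is gained.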

Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing

Procedia PDF Downloads 80
1998 Developing a Decision-Making Tool for Prioritizing Green Building Initiatives

Authors: Tayyab Ahmad, Gerard Healey

Abstract:

Sustainability in the built environment sector is subject to many development constraints. Building projects are developed under different requirements of deliverables, which makes each project unique. For an owner organization, i.e., a higher-education institution, with a significant building stock, it is important to prioritize some sustainability initiatives over others in order to align sustainable building development with organizational goals. Point-based green building rating tools, e.g., Green Star, LEED, and BREEAM, are becoming increasingly popular and are well-acknowledged worldwide for verifying sustainable development. It is imperative to synthesize a multi-criteria decision-making tool that can capitalize on the point-based methodology of rating systems while customizing the sustainable development of building projects according to the individual requirements and constraints of the client organization. A multi-criteria decision-making tool for the University of Melbourne is developed that builds on the action learning and experience of implementing green buildings at the University of Melbourne. The tool evaluates different sustainable building initiatives based on the framework of the Green Star rating tool of the Green Building Council of Australia. For each sustainability initiative, the decision-making tool makes an assessment based on at least five performance criteria, including the ease with which the initiative can be achieved and its potential to enhance project objectives, reduce life-cycle costs, enhance the University's reputation, and increase confidence in quality construction. The use of a weighted aggregation mathematical model in the proposed tool can play a considerable role in the decision-making process of a green building project by indexing the green building initiatives in terms of organizational priorities. 
The index value of each initiative will be based on its alignment with some of the key performance criteria. The usefulness of the decision-making tool is validated by conducting structured interviews with key stakeholders involved in the development of sustainable building projects at the University of Melbourne. The proposed tool helps a client organization decide which sustainability initiatives and practices are more important to pursue within limited resources.
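The weighted aggregation idea can be sketched directly: each initiative is scored against the performance criteria, weights encode organizational priorities, and the index is the weighted sum. The criteria labels, weights, and scores below are hypothetical, not the University of Melbourne's actual model:

```python
# five performance criteria, paralleling those named in the abstract
criteria = ["ease", "project_objectives", "life_cycle_cost",
            "reputation", "construction_quality"]
# hypothetical priority weights (sum to 1.0)
weights = {"ease": 0.15, "project_objectives": 0.30, "life_cycle_cost": 0.25,
           "reputation": 0.15, "construction_quality": 0.15}

def initiative_index(scores):
    """Weighted-sum index; scores maps each criterion to a 0-5 rating."""
    return sum(weights[c] * scores[c] for c in criteria)

# two hypothetical initiatives scored by stakeholders
solar_pv = {"ease": 4, "project_objectives": 5, "life_cycle_cost": 5,
            "reputation": 4, "construction_quality": 3}
green_roof = {"ease": 2, "project_objectives": 3, "life_cycle_cost": 2,
              "reputation": 5, "construction_quality": 3}

ranked = sorted([("solar_pv", initiative_index(solar_pv)),
                 ("green_roof", initiative_index(green_roof))],
                key=lambda item: item[1], reverse=True)
```

Reweighting the criteria to reflect a different organization's priorities would reorder the same initiatives, which is the customization the abstract argues rating tools alone cannot provide.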

Keywords: higher education institution, multi-criteria decision-making tool, organizational values, prioritizing sustainability initiatives, weighted aggregation model

Procedia PDF Downloads 234
1997 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work aims at presenting a study that demonstrates the usability of categories of analysis from Discourse Semiotics – also known as Greimassian Semiotics – in authorship cases in forensic contexts. It is necessary to know if the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of 'not on demand' written samples (texts that differ in formality degree, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author would be semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows: -The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes. -Semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and presents different levels of abstraction. This 'grammar' would be a style marker. -Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. Then, how can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when it is known that intra-speaker variation depends on so many factors? -The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently chose the same options, differently from one another, it means each one's options have discriminatory power. -Size is another issue for various attribution methods. 
Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. -The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results showed the hypothesis was confirmed and, hence, that the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
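The Jaccard coefficient itself is simple set arithmetic over the semiotic tags assigned to each text. A minimal sketch (the tag labels are invented stand-ins for the Greimassian categories, not actual Corpus Tool output):

```python
def jaccard(a, b):
    """Similarity between two tag sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# invented semiotic tags per text
author1_text1 = {"euphoria", "conjunction", "thematic_role_A"}
author1_text2 = {"euphoria", "conjunction", "aspectualization"}
author2_text1 = {"dysphoria", "disjunction", "aspectualization"}

same_author = jaccard(author1_text1, author1_text2)  # 2 shared / 4 total = 0.5
diff_author = jaccard(author1_text1, author2_text1)  # 0 shared / 6 total = 0.0
```

Under the study's hypothesis, within-author pairs should yield consistently higher coefficients than cross-author pairs, as in this toy case.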

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 242
1996 Impact of Geomagnetic Variation over Sub-Auroral Ionospheric Region during High Solar Activity Year 2014

Authors: Arun Kumar Singh, Rupesh M. Das, Shailendra Saini

Abstract:

The present work is an attempt to evaluate sub-auroral ionospheric behavior under changing space weather conditions, especially during the high solar activity year 2014. In view of this, GPS TEC along with ionosonde data over the Indian permanent scientific base 'Maitri', Antarctica (70°46′00″ S, 11°43′56″ E) have been utilized. The results suggest that the nature of the ionospheric response to geomagnetic disturbances depends mainly upon the status of high-latitude electrodynamic processes along with the season of occurrence. Fortunately, in this study, both negative and positive ionospheric impacts of geomagnetic disturbances have been observed in a single year, but in different seasons. The study reveals that the combination of equator-ward plasma transportation along with ionospheric compositional changes causes a negative ionospheric impact during the summer and equinox seasons. However, the combination of pole-ward contraction of the oval region along with particle precipitation may lead to a positive ionospheric response during the winter season. In addition, new ionosonde-based experimental evidence showed particle precipitation penetrating to low ionospheric altitudes, i.e., down to the E-layer, through the sudden and strong appearance of an E-layer at 100 km altitude. The sudden appearance of the E-layer along with a decrease in F-layer electron density suggests the dominance of NO⁺ over O⁺ in the considered region under geomagnetically disturbed conditions. The strengthening of the E-layer is responsible for modification of the auroral electrojet and field-aligned current system. The present study provides good scientific insight into the sub-auroral ionospheric response to changing space weather conditions.

Keywords: high latitude ionosphere, space weather, geomagnetic storms, sub-storm

Procedia PDF Downloads 169
1995 Occasional Word-Formation in Postfeminist Fiction: Cognitive Approach

Authors: Kateryna Nykytchenko

Abstract:

Modern fiction and non-fiction writers commonly use their own lexical and stylistic devices to capture a reader’s attention and bring certain thoughts and feelings to the reader. Among such devices is one of the neological notions – individual authorial formations: occasionalisms, or nonce words. To a significant extent, a host of examples of new words occurs in the chick lit genre, which has experienced exponential growth in recent years. Chick lit is a new-millennial postfeminist fiction which focuses primarily on twenty- to thirtysomething middle-class women. It brings into focus the image of 'a new woman' of the 21st century, who is always fallible and funny. This paper aims to investigate different types of occasional word-formation which reflect cognitive mechanisms of conveying women’s perception of the world. The chick lit novels of Irish author Marian Keyes present a genuinely innovative mixture of forms, both literary and nonliterary, which is displayed in different types of occasional word-formation processes such as blending, compounding, creative respelling, etc. Crossing existing mental and linguistic boundaries, adapting herself to new and overlapping linguistic spaces, the chick lit author creates new words which demonstrate the development and progress of language and the relationship between language, thought and new reality, ultimately resulting in hybrid word-formation (e.g., affixation or pseudo-borrowing). Moreover, this article attempts to present the main characteristics of the chick lit fiction genre with the help of Marian Keyes’s novels and their influence on occasionalisms. There has been a lack of research concerning the cognitive nature of occasionalisms. The current paper intends to account for occasional word-formation as a set of interconnected cognitive mechanisms, operations and procedures that meld together to create a new word. 
The results of the generalized analysis solidify arguments that the kind of new knowledge an occasionalism manifests is inextricably linked with cognitive procedure underlying it, which results in corresponding type of word-formation processes. In addition, the findings of the study reveal that the necessity of creating occasionalisms in postmodern fiction novels arises from the need to write in a new way keeping up with a perpetually developing world, and thus the evolution of the speaker herself and her perception of the world.
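To make the surface mechanisms concrete, blending and compounding, two of the word-formation processes discussed above, can be sketched as simple string operations. This is an illustrative toy, not a tool from the study, and the split positions are chosen by hand:

```python
# Toy illustration of two occasional word-formation processes:
# blending fuses the head of one word with the tail of another
# (e.g. "breakfast" + "lunch" -> "brunch"); compounding concatenates
# two free words.

def blend(first: str, second: str, keep: int, drop: int) -> str:
    """Keep the first `keep` letters of `first` and everything in
    `second` after its first `drop` letters."""
    return first[:keep] + second[drop:]

def compound(first: str, second: str) -> str:
    """Compounding simply joins two free words."""
    return first + second

print(blend("breakfast", "lunch", 2, 1))   # -> "brunch"
print(compound("twenty", "something"))     # -> "twentysomething"
```

The sketch captures only the surface form; the cognitive load an occasionalism carries, which is the paper's actual object of study, is of course not reducible to string manipulation.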

Keywords: Chick Lit, occasionalism, occasional word-formation, cognitive linguistics

Procedia PDF Downloads 181
1994 Psychological Well Being of Female Prisoners

Authors: Sujata Gupta Kedar, J. N. Tulika

Abstract:

Early researchers suggested that imprisonment had negative psychological and physical effects on inmates, leading to psychological deterioration. The term “prisons” in the Consensus Statement of the WHO denotes those institutions which hold people who have been sentenced to a period of imprisonment by the courts for offences against the law. Thus “prisons”, if local circumstances justify it, may also be taken to include secure institutions holding, on a compulsory basis, any of the following categories of people: remand prisoners; civil prisoners; juvenile detainees; immigration detainees; some categories of mentally disordered patients; asylum seekers; refugees; people detained pending expulsion, deportation, exile, exclusion or any other form of compulsory transfer to other countries or areas of the country; people detained in police cells; and any other compulsorily detained group. Prisons aim to reform criminals and their behavior, but their record is not encouraging. Instead, imprisonment affects each prisoner differently. Beyond withstanding the shock of entry into a culture very different from their own, prisoners must work out how to spend their time, since the hours in prison appear endless. There is also the fear of deterioration. This article provides an overview of the psychological well-being of female prisoners in the prison environment in five areas: satisfaction, efficiency, sociability, mental health, and interpersonal relations. Research was done on two categories of imprisonment: under-trial prisoners and convicts. The total sample included 22 female prisoners of Nagaon Special Jail, Assam. The instrument used for the study was based on the Psychological Well-Being Scale. Statistical analysis was done with t-tests and one-way ANOVA.
The results demonstrated that there is no significant difference in the psychological well-being of female prisoners in the prison, and no significant difference in the psychological well-being of the different types of female prisoners involved in different crimes, but there is a significant difference in the mental health of the female prisoners in prison.
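The two statistical procedures named above can be sketched as follows. The scores below are synthetic, invented solely for illustration (the study's data are not reproduced here), and the sketch assumes SciPy is available:

```python
# Illustrative sketch of an independent two-sample t-test and a one-way
# ANOVA on well-being scores. All values are invented, not the study's data.
from scipy import stats

under_trial = [62, 58, 65, 60, 57, 63, 61, 59, 64, 66]  # hypothetical scores
convicted   = [55, 60, 52, 58, 61, 54, 57, 59, 56, 53]

# t-test: does mean well-being differ between the two prisoner types?
t_stat, t_p = stats.ttest_ind(under_trial, convicted)

# One-way ANOVA generalizes the comparison to three or more groups,
# e.g. prisoners grouped by type of crime (again, invented data).
group_a = [60, 62, 58, 61]
group_b = [55, 57, 54, 56]
group_c = [59, 63, 60, 62]
f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)

print(f"t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"F = {f_stat:.2f}, p = {f_p:.3f}")
```

A p-value below the chosen significance level (conventionally 0.05) indicates a significant difference between groups; the study reports which of its five areas crossed that threshold.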

Keywords: psychological effect, female prisoners, prison, well being of prisoners

Procedia PDF Downloads 408
1993 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings

Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian

Abstract:

Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automating building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform enables (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate the platform’s capability for real-time situational awareness and cyber-physical control of the HVAC systems in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus.
Our platform is built on an adaptive and flexible architecture, making it generalizable and extensible to support HVAC optimization experiments in different types of buildings across the nation.
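As a minimal sketch of the closed loop behind items (a) and (e) above, the following assumes a hypothetical single-zone twin with a simple threshold rule standing in for model predictive control; all class and method names are illustrative, not the platform's actual API:

```python
# Minimal, hypothetical digital-twin control loop: ingest IoT sensor
# readings into an in-memory replica of one HVAC zone, then derive a
# control action to push back to the physical unit. The threshold rule
# is a deliberately simple stand-in for model predictive control.

class HvacTwin:
    """In-memory replica of one HVAC zone."""

    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c
        self.history: list[float] = []   # acquired sensor readings

    def ingest(self, reading_c: float) -> None:
        """(a) automated data acquisition: record a live sensor reading."""
        self.history.append(reading_c)

    def control_action(self) -> str:
        """(e) cyber-physical control: act on the latest reading."""
        latest = self.history[-1]
        if latest > self.setpoint_c + 0.5:
            return "cool"
        if latest < self.setpoint_c - 0.5:
            return "heat"
        return "hold"

twin = HvacTwin(setpoint_c=22.0)
for reading in (21.1, 22.2, 23.4):   # simulated IoT telemetry
    twin.ingest(reading)
    print(reading, "->", twin.control_action())
```

In the platform described above, the equivalent of `control_action` would be an optimization algorithm adapted from experimental data, and readings and commands would travel over web services rather than in-process calls.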

Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM

Procedia PDF Downloads 110
1992 Effect of Nanostructure on Hydrogen Embrittlement Resistance of the Severely Deformed 316LN Austenitic Steel

Authors: Frank Jaksoni Mweta, Nozomu Adachi, Yoshikazu Todaka, Hirokazu Sato, Yuta Sato, Hiromi Miura, Masakazu Kobayashi, Chihiro Watanabe, Yoshiteru Aoyagi

Abstract:

Growing consumption of hydrogen fuel increases the demand for high-strength steel pipes and storage tanks. However, high-strength steels are highly sensitive to hydrogen embrittlement. Because the introduction of hydrogen into steel during fabrication or from the environment is unavoidable, it is essential to improve the hydrogen embrittlement resistance of high-strength steels through microstructural control. In the present study, a heterogeneous nanostructure with a tensile strength of about 1.8 GPa and a homogeneous nanostructure with a tensile strength of about 2.0 GPa were generated in 316LN steel by 92% heavy cold rolling and by high-pressure torsion straining, respectively. The heterogeneous nanostructure is composed of twin domains, shear bands, and lamellar grains; the homogeneous nanostructure is composed of uniformly distributed ultrafine nanograins. The influence of the heterogeneous and homogeneous nanostructures on hydrogen embrittlement resistance was investigated. Specimens of each nanostructure were electrochemically charged with hydrogen for 3, 6, 12, or 24 hours. For the same charging time, both nanostructures show almost the same concentration of diffusible hydrogen, based on thermal desorption analysis. The tensile properties of the homogeneous nanostructure were severely affected by the diffusible hydrogen, whereas the diffusible hydrogen had less impact on the tensile properties of the heterogeneous nanostructure. The difference in embrittlement behavior between the heterogeneous and homogeneous nanostructures was elucidated from the crack-growth mechanisms observed in tensile fractography. Hydrogen embrittlement was suppressed in the heterogeneous nanostructure because the twin domains act as obstacles to crack growth. The homogeneous nanostructure contains no such obstacle; thus, its crack-growth resistance was low.

Keywords: diffusible hydrogen, heterogeneous nanostructure, homogeneous nanostructure, hydrogen embrittlement

Procedia PDF Downloads 124
1991 Investigation of Failure Mechanisms of Composite Laminates with Delamination and Repaired with Bolts

Authors: Shuxin Li, Peihao Song, Haixiao Hu, Dongfeng Cao

Abstract:

The interactive deformation and failure mechanisms, including local buckling/delamination propagation and global buckling, are investigated in this paper with numerical simulation and validation against experimental results. Three-dimensional numerical models using ABAQUS brick elements combined with cohesive elements and contact elements are developed to simulate the deformation and failure characteristics of composite laminates with and without delamination under compressive loading. Zero-thickness cohesive elements are inserted along the possible path of delamination propagation, and the inter-laminar behavior is characterized by a mixed-mode traction-separation law. The numerical simulations identified the complex interaction among local buckling, delamination propagation, and final global buckling for composite laminates with delamination under compressive loading. First, local buckling and delamination propagation interact: local buckling induces delamination propagation, and delamination growth in turn enhances the local buckling. Second, the interaction between the out-of-plane deformation caused by local buckling and the global buckling deformation results in final failure of the laminates. The simulation results are validated by good agreement with experimental results published in the literature. The validated simulations revealed that the degradation of load capacity, in particular of the compressive strength of composite structures with delamination, is mainly attributed to the combined effects of local buckling and delamination propagation. Consequently, a simple field-bolt repair approach that hinders local buckling and prevents delamination growth is explored. The analysis and simulation results demonstrated that field-bolt repair can effectively restore the compressive strength of composite laminates with delamination.
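The cohesive-element behavior mentioned above can be sketched in single-mode form. The following bilinear traction-separation law is a common textbook formulation, not the paper's calibrated mixed-mode law, and the stiffness and separation values are illustrative only:

```python
# Hedged sketch of a bilinear traction-separation law of the kind used by
# zero-thickness cohesive elements (single mode for brevity; the paper
# uses a mixed-mode law). K, d0 and df are illustrative, not the study's
# calibrated parameters.

def traction(delta: float, K: float = 1e5, d0: float = 0.01, df: float = 0.1) -> float:
    """Traction at separation `delta`: linear-elastic up to damage onset
    at d0, linear softening to zero traction at final separation df."""
    if delta <= d0:                 # undamaged branch: t = K * delta
        return K * delta
    if delta >= df:                 # fully debonded: no load transfer
        return 0.0
    # scalar damage variable for the bilinear law, 0 at d0 and 1 at df
    damage = (df * (delta - d0)) / (delta * (df - d0))
    return (1.0 - damage) * K * delta

print(traction(0.005))   # elastic branch
print(traction(0.055))   # softening branch
print(traction(0.1))     # fully separated
```

In a mixed-mode law, an effective separation combining the opening and shear modes replaces `delta`, and onset and final separations depend on the mode mixity.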

Keywords: cohesive elements, composite laminates, delamination, local and global buckling, field-bolt repair

Procedia PDF Downloads 120
1990 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study

Authors: D. M. Samartsev, A. G. Copping

Abstract:

As with all industries, architects are using increasing amounts of automation in practice, with approaches such as generative design and the use of AI becoming more commonplace. However, discourse on the rate at which the architectural design process is being automated is often anecdotal and lacking in objective figures and measurements. This creates confusion and barriers to effective discourse, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions about design automation. This paper proposes a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across times, locations, and projects. The methodology is informed by the design of this framework, taking on aspects of a systematic review but compressed in time so that an initial set of data can verify the validity of the framework. Such a framework of quantification enables practical uses such as predicting which tasks in the architectural industry will be automated, as well as supporting more informed decisions on automation at multiple levels, from individual choices to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can be recursively split further as required. Each task is then assigned a series of milestones that allow objective analysis of its automation progress.
By combining these two approaches it is possible to create a data structure that describes how far various parts of the architectural design process have been automated. The data gathered in the paper serves the dual purposes of validating the framework and giving insight into the current state of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process had been automated in some practical fashion at the time of writing. The rate of progress is slowly increasing over the years, with the majority of tasks in the design process reaching a new automation milestone in less than six years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined, as well as further areas of study.
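The combination of a work breakdown structure with recursive splitting and milestone scoring can be sketched as a small data structure. The task names, scores, and the equal-weight averaging rule below are illustrative assumptions, not the paper's actual framework:

```python
# Hypothetical sketch of the framework's two ingredients: a recursive
# work-breakdown tree of design tasks, with each leaf scored against
# automation milestones (0.0 = fully manual, 1.0 = fully automated).
# Task names and scores are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    automation: float = 0.0                 # leaf-level milestone score
    subtasks: list["Task"] = field(default_factory=list)

    def score(self) -> float:
        """Automation of a task = mean of its subtasks, recursively;
        a leaf task reports its own milestone score."""
        if not self.subtasks:
            return self.automation
        return sum(t.score() for t in self.subtasks) / len(self.subtasks)

design = Task("design building", subtasks=[
    Task("concept", subtasks=[
        Task("massing options", automation=0.6),   # e.g. generative design
        Task("client briefing", automation=0.0),
    ]),
    Task("documentation", subtasks=[
        Task("drawing production", automation=0.8),
        Task("specification writing", automation=0.2),
    ]),
])
print(f"{design.score():.0%} automated")
```

A production version would likely weight subtasks by effort rather than averaging equally, and would record the date each milestone was reached so the rate of change can be compared across times, locations, and projects.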

Keywords: analysis, architecture, automation, design process, technology

Procedia PDF Downloads 104