Search results for: immunoenzyme techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6756

756 Mothers’ Experiences of Continuing Their Pregnancy after Prenatally Receiving a Diagnosis of Down Syndrome

Authors: Sevinj Asgarova

Abstract:

Within the last few decades, major advances in the field of prenatal testing have transpired, yet little research regarding the experiences of mothers who chose to continue their pregnancies after prenatally receiving a diagnosis of Down Syndrome (DS) has been undertaken. Using social constructionism and interpretive description, this retrospective research study explores this topic from the point of view of the mothers involved and provides insight as to how the experience could be improved. Using purposive sampling, 23 mothers were recruited from British Columbia (n=11) and Ontario (n=12) in Canada. Data retrieved through semi-structured in-depth interviews were analyzed using inductive, constant comparative analysis, the major analytical techniques of interpretive description. Four primary phases emerged from the data analysis: 1) healthcare professional-mother communications, 2) initial emotional response, 3) subsequent decision-making, and 4) adjustment and reorganization of lifestyle in preparation for the birth of the child. This study validates the individualized and contextualized nature of mothers’ decisions as influenced by multiple factors, with moral values/spiritual beliefs being significant. The mothers’ ability to cope was affected by the information communicated to them about their unborn baby’s diagnosis and the manner in which that information was delivered. Mothers used emotional coping strategies, dependent upon support from partners, family, and friends, as well as from other families who have children with DS. Additionally, they employed practical coping strategies, such as engaging in healthcare planning, seeking relevant information, and reimagining and reorganizing their lifestyle. Over time, many families gained a sense of control over their situation and readjusted in preparation for the birth of the child. Many mothers expressed the importance of maintaining positivity and hopefulness with respect to positive outcomes and opportunities for their children. The comprehensive information generated through this study will also provide healthcare professionals with relevant information to assist them in understanding the informational and emotional needs of these mothers. This should lead to an improvement in their practice and enhance their ability to intervene appropriately and effectively, offering improved support to parents dealing with a diagnosis of DS for their child.

Keywords: continuing affected pregnancy, decision making, disability, down syndrome, eugenic social attitudes, inequalities, life change events, prenatal care, prenatal testing, qualitative research, social change, social justice

Procedia PDF Downloads 103
755 Sludge Marvel (Densification): The Ultimate Solution For Doing More With Less Effort!

Authors: Raj Chavan

Abstract:

At present, the United States is home to more than 14,000 Water Resource Recovery Facilities (WRRFs), of which approximately 35% have implemented nutrient limits of some kind. These WRRFs contribute 10 to 15% of the total nutrient burden to surface waters in the United States and account for approximately 1% of total power demand and 2% of total greenhouse gas emissions (GHG). Several factors have driven the development of densification technologies toward more compact and energy-efficient nutrient removal processes. Existing facilities that require capacity expansion, or biomass densification for greater treatability within the same footprint, are being subjected to stricter nutrient removal requirements prior to surface water discharge. Densification of activated sludge as a method for nutrient removal and process intensification at WRRFs has garnered considerable attention in recent times. The biological processes take place within aerobic sludge granules, which form the basis of the technology. The possibility of generating granular sludge through continuous (or conventional) activated sludge processes (CAS), or of densifying biomass by converting activated sludge flocs into denser biomass aggregates, has generated considerable interest as an exceptionally efficient intensification technique. This presentation aims to give attendees a foundational understanding of densification by illustrating practical concerns and insights. The following questions will be addressed. What are some potential techniques for producing and preserving densified granules? What processes are responsible for the densification of biological flocs? How do physical selectors contribute to the densification of biological flocs? What viable strategies exist for the management of densified biological flocs, and which design parameters of physical selectors influence their retention? How can operational solutions for floc and granule customization be determined in order to meet capacity and performance objectives? The answers to these pivotal questions will be drawn from existing full-scale treatment facilities, bench-scale and pilot-scale investigations, and existing literature data. By the conclusion of the presentation, the audience will possess a fundamental understanding of the densification concept and its significance in attaining effective effluent treatment. Additionally, case studies pertaining to the design and operation of densification processes will be incorporated into the presentation.

Keywords: densification, intensification, nutrient removal, granular sludge

Procedia PDF Downloads 74
754 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing price and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate for missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (>95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of a time lag can have a significant influence on model performance and require further investigation. The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies >95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputation methods for establishing accurate predictive models.
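
As a rough illustration of the modeling setup described above (not the authors' code), the sketch below builds lagged predictors from a monthly dataset, compares the two missing-value strategies mentioned (backfill vs. interpolation), and fits linear regression and random forest models with scikit-learn. The file name, column names, lag, and split are hypothetical.

```python
# Minimal sketch of the lag-feature / imputation setup described above,
# using pandas and scikit-learn. Column names, the 12-month lag, and the
# chronological train/test split are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

def prepare(df, lag_months=12, method="backfill"):
    """Fill mixed-frequency gaps, then lag the predictors against the target."""
    df = df.bfill() if method == "backfill" else df.interpolate()
    X = df.drop(columns=["case_shiller_index"]).shift(lag_months)
    y = df["case_shiller_index"]
    return X.iloc[lag_months:], y.iloc[lag_months:]

# df: monthly rows indexed by date for one metro area, with columns such as
# "gdp", "population", "personal_income", ..., "case_shiller_index".
df = pd.read_csv("boston_monthly.csv", index_col="date", parse_dates=True)
X, y = prepare(df, lag_months=12, method="backfill")
split = int(len(X) * 0.8)                      # simple chronological split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("LR", LinearRegression()),
                    ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          "MAE:", mean_absolute_error(y_te, pred),
          "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```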

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 103
753 Camera Trapping Coupled With Field Sign Survey Reveals the Mammalian Diversity and Abundance at Murree-Kotli Sattian-Kahuta National Park, Pakistan

Authors: Shehnila Kanwal

Abstract:

Murree-Kotli Sattian-Kahuta National Park (MKKNP) was declared in 2009. However, not much is known about the diversity and relative abundance of the mammalian fauna of this park. In the current study, we used field sign survey and infrared camera trapping techniques to gain an insight into the diversity of mammalian species and their relative abundance. We conducted field surveys in different areas of the park at various elevations from April 2023 to March 2024 to record the field signs (scats, pug marks, etc.) of mammal species; in addition, we deployed a total of 22 infrared trail camera traps in different areas of the park for 116 nights. We obtained a total of 5201 photographs using camera trapping. Results of camera trapping coupled with field sign surveys confirmed the presence of a total of twenty-one different mammalian species (large, meso and small mammals) in the study area. The common leopard was recorded at four different sites in the park, within an altitudinal range of 648m-1533m. The Asiatic jackal and red fox were recorded at all the sites surveyed in the park, within altitudinal ranges of 498m-1287m and 433m-2049m, respectively. Leopard cats were recorded at two different sites within an altitudinal range of 498m-894m. The jungle cat was recorded at three sites within an altitudinal range of 498m-846m. Asian palm civets and small Indian civets were both recorded at three sites. The grey mongoose and small Indian mongoose were recorded at four and three sites, respectively. We also collected a total of 75 scats of different mammal species in the park to further confirm their occurrence. For the Indian pangolin, we recorded three field burrows at two different sites. The diversity index (H’=2.369960) and species evenness (E=0.81995) were calculated. Analysis of the data revealed that the wild boar (Sus scrofa) was the most abundant species in the park. Most of the mammal species were found to be nocturnal, remaining active from dusk throughout the night, and some of them remained active at dawn. The leopard and Asian palm civet were highly overlapping species in the study area; their temporal activity patterns overlapped by 61%. The barking deer and Indian crested porcupine were also found to be nocturnal species; they remained active throughout the night.
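
For reference, the diversity and evenness figures quoted above are conventionally computed as the Shannon index H' and Pielou's evenness E = H'/ln S. The short sketch below shows that calculation on hypothetical detection counts, not the study's data.

```python
# Illustrative computation of the Shannon diversity index (H') and Pielou
# evenness (E = H'/ln S); the species counts below are placeholders.
import math

counts = {"wild_boar": 120, "asiatic_jackal": 60, "red_fox": 45,
          "common_leopard": 8, "indian_pangolin": 3}   # hypothetical detections

total = sum(counts.values())
shannon_h = -sum((n / total) * math.log(n / total) for n in counts.values())
evenness = shannon_h / math.log(len(counts))

print(f"H' = {shannon_h:.3f}, E = {evenness:.3f}")
```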

Keywords: MKKNP, diversity, abundance, evenness, distribution, mammals, overlapped

Procedia PDF Downloads 19
752 Retrieving Iconometric Proportions of South Indian Sculptures Based on Statistical Analysis

Authors: M. Bagavandas

Abstract:

Introduction: South Indian stone sculptures are known for their elegance and history. They are available in large numbers in different monuments situated in different parts of South India. These art pieces have been studied using iconographic details, but this pioneering study introduces a novel method known as iconometry, a quantitative approach that deals with measurements of different parts of icons to find answers to important unanswered questions. The main aim of this paper is to compare iconometric measurements of the sculptures with canonical proportions to determine whether the sculptors of the past followed any of the canonical proportions prescribed in the ancient texts. If not, this study recovers the proportions used for carving sculptures, which are not available to us now. Also, it will be interesting to see how the sculptural proportions of different monuments belonging to different dynasties differ from one another in terms of these proportions. Methods and Materials: As Indian sculptures are depicted in different postures, one way of making measurements independent of size is to decide on a suitable measurement and convert the other measurements into proportions with respect to the chosen measurement. Since in all canonical texts of Indian art the different measurements are given in terms of face length, it was chosen as the reference measurement for standardization. In order to compare these facial measurements with the measurements prescribed in Indian canons of iconography, the ten facial measurements, namely face length, morphological face length, nose length, nose-to-chin length, eye length, lip length, face breadth, nose breadth, eye breadth and lip breadth, were standardized using the face length, reducing the number of measurements to nine. Each measurement was divided by the corresponding face length, multiplied by twelve, and expressed in the angula unit used in the canonical texts. The reason for multiplying by twelve is that the face length is given as twelve angulas in the canonical texts for all figures. Clustering techniques were used to determine whether the sculptors of the past followed any of the proportions prescribed in the canonical texts to carve sculptures and also to compare the proportions of sculptures of different monuments. One hundred and twenty-seven stone sculptures from four monuments belonging to the Pallava, the Chola, the Pandya and the Vijayanagar dynasties were taken up for this study. These art pieces belong to a period ranging from the eighth to the sixteenth century A.D., and all of them adorn monuments situated in different parts of Tamil Nadu State, South India. Anthropometric instruments were used for taking measurements, and the author himself measured all the sample pieces of this study. Result: Statistical analysis of sculptures of different centers of art from different dynasties shows a considerable difference in facial proportions, and many of these proportions differ widely from the canonical proportions. The retrieved facial proportions indicate that the definition of beauty has been changing from period to period and region to region.
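
A minimal sketch of the standardization and clustering workflow described above, assuming SciPy for the hierarchical clustering; the measurement values are invented for illustration and the choice of columns is an assumption.

```python
# Sketch of the standardization step described above: each facial measurement
# is divided by face length and scaled to the 12-angula canon, then the
# sculptures are grouped by hierarchical clustering. Values are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# rows = sculptures, columns = [face_length, nose_length, eye_length] in mm
measurements = np.array([[120.0, 38.0, 31.0],
                         [135.0, 45.0, 36.0],
                         [110.0, 33.0, 27.0]])

face_length = measurements[:, 0:1]
proportions = measurements[:, 1:] / face_length * 12.0   # angula units

tree = linkage(proportions, method="average")            # average linkage
groups = fcluster(tree, t=2, criterion="maxclust")
print(proportions, groups)
```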

Keywords: iconometry, proportions, sculptures, statistics

Procedia PDF Downloads 154
751 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT

Authors: Priyanka Chaudhary, M. Rizwan

Abstract:

This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the various components of the grid signal, such as phase and frequency. It also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to bring the system in line with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array and maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zone 1 and zone 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid in order to maintain the amplitude, phase and frequency parameters as well as to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
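
The sketch below is one possible reading of the variable-step P&O rule described above, not the authors' controller: the slope dP/dV selects zone 0 (fine step) or zones 1/2 (coarse step), and the usual perturb-and-observe sign rule picks the perturbation direction. The zone threshold and step sizes are illustrative assumptions.

```python
# Minimal sketch of a variable-step perturb-and-observe MPPT update of the
# kind described above. Thresholds and step sizes are illustrative only.
def pno_step(v, p, v_prev, p_prev, fine=0.1, coarse=1.0, zone0_width=0.5):
    """Return the next PV voltage reference from two consecutive samples."""
    dP, dV = p - p_prev, v - v_prev
    slope = dP / dV if dV != 0 else 0.0
    step = fine if abs(slope) < zone0_width else coarse    # zone 0 vs zones 1/2
    direction = 1 if (dP > 0) == (dV > 0) else -1          # climb toward the MPP
    return v + direction * step

# Example: one update step from two hypothetical operating points.
print(pno_step(v=29.8, p=195.0, v_prev=29.7, p_prev=194.2))
```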

Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique

Procedia PDF Downloads 594
750 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe

Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou

Abstract:

In recent years, the diversification and industrialization of drug-related crimes have posed significant threats to public health and safety globally. The widespread and increasingly younger demographics of drug users and the persistence of drug-impaired driving incidents underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes. It relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses. They require extensive, complex laboratory-based preprocessing, expensive equipment, and specialized personnel, and are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and Aggregation-Induced Emission (AIE) technology. Nucleic acid aptamers, selected artificially for their specific binding to target molecules and stable spatial structures, represent a new generation of biosensors following antibodies. Rapid advancements in AIE technology, particularly in tetraphenylethene-based luminogens, offer simplicity in synthesis and versatility in modification, making them ideal for fluorescence analysis. This work successfully synthesized, isolated, and purified an AIE molecule and constructed a probe comprising the AIE molecule, a nucleic acid aptamer, and an exonuclease for cocaine detection. The probe demonstrated significant relative fluorescence intensity changes and selectivity towards cocaine over other drugs. Using 4-Butoxytriethylammonium Bromide Tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit, and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 min in aqueous solution and urine, with detection limits of 1.0 and 5.0 µmol/L, respectively. The probe maintained stability and interference resistance in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, offers a basis for rapid onsite cocaine detection, and shows potential for miniaturized testing setups.

Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine

Procedia PDF Downloads 62
749 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

Technology adoption models are extensively used in the literature to explore the drivers and inhibitors affecting the adoption of Computer Assisted Audit Techniques and Tools (CAATTs). Further studies from recent years suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries. This is a result of the unique characteristics of the auditing process, which are expressed in the audit risk elements and the risk-based auditing approach, as encoded in the auditing standards. Since these audit risk factors are not part of the existing models that are used to explain technology adoption, these models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill the gap in the literature, which exists as a result of using generic technology adoption models. Following a pretest and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study aims to reveal determinants related to audit risk factors that influence the adoption of CAATTs in audits and proposes a new modeling approach for the successful adoption of CAATTs. The findings emphasize several important aspects: (1) while large CPA firms have developed their own internal guidelines to assess the audit risk components, other CPA firms do not follow a formal and validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools, whereas small and mid-sized CPA firms incorporate standard CAATTs and still need to catch up to better understand what CAATTs can offer and how they can contribute to the quality of the audit; (3) the top management of mid-sized and small CPA firms should be more proactive and better informed about CAATTs capabilities and contributions to audits; and (4) all CPA firms consider professionalism a major challenge that must be constantly managed to ensure optimal CAATTs operation. The study extends the existing knowledge of CAATTs adoption by looking at it from a risk-based auditing approach. It suggests a new model for CAATTs adoption by incorporating the influencing audit risk factors that auditors should examine when considering CAATTs adoption. Since the model can be used in various audit scenarios and supports strategic, risk-based decisions, it maximizes the considerable potential of CAATTs to improve the quality of audits. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers and regulators. Moreover, they may motivate audit standard-setters to issue updated guidelines regarding CAATTs adoption in audits.

Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models

Procedia PDF Downloads 67
748 Process Improvement and Redesign of the Immuno Histology (IHC) Lab at MSKCC: A Lean and Ergonomic Study

Authors: Samantha Meyerholz

Abstract:

MSKCC offers patients cutting-edge cancer care with the highest quality standards. However, many patients and industry members do not realize that the operations of the Immuno Histology (IHC) Lab are the backbone for carrying out this mission. The IHC lab manufactures blocks and slides containing critical tissue samples that will be read by a pathologist to diagnose and dictate a patient’s treatment course. The lab processes 200 requests daily, leading to the generation of approximately 2,000 slides and 1,100 blocks each day. Lab material is transported through labeling, cutting, staining and sorting manufacturing stations, while being managed by multiple techs throughout the space. The quality of the stain, as well as the wait times associated with processing requests, is directly associated with patients receiving rapid treatments and having a wider range of care options. This project aims to improve slide request turnaround time for rush and non-rush cases, while increasing the quality of each request filled (no missing slides or poorly stained items). Rush cases are to be filled in less than 24 hours, while standard cases are allotted a 48-hour time period. Reducing turnaround times enables patients to communicate sooner with their clinical team regarding their diagnosis, ultimately leading to faster treatments and potentially better outcomes. Additional project goals included streamlining tech and material workflow, while reducing waste and increasing efficiency. This project followed a DMAIC structure with emphasis on lean and ergonomic principles that could be integrated into an evolving lab culture. Load times and batching processes were analyzed using process mapping, FMEA analysis, waste analysis, engineering observation, 5S and spaghetti diagramming. Reduction of lab technician movement, as well as their body position at each workstation, was of top concern to pathology leadership. With new equipment being brought into the lab to carry out workflow improvements, screen and tool placement was discussed with the techs in focus groups to reduce variation and increase comfort throughout the workspace. 5S analysis was completed in two phases in the IHC lab, helping to drive solutions that reduced rework and tech motion. The IHC lab plans to continue utilizing these techniques to further reduce the time gap between tissue analysis and cancer care.

Keywords: engineering, ergonomics, healthcare, lean

Procedia PDF Downloads 223
747 Early Childhood Education and Learning Outcomes in Lower Primary Schools, Uganda

Authors: John Acire, Wilfred Lajul, Ogwang Tom

Abstract:

Using a qualitative research technique, this study investigates the influence of Early Childhood Education (ECE) on learning outcomes in lower primary schools in Gulu City, Uganda. The study, which is based on Vygotsky's sociocultural theory of human learning, fills gaps in the current literature on the influence of ECE on learning outcomes. The aims of the study include analyzing the state of learning outcomes, investigating ECE practices, and determining the influence of these practices on learning outcomes in lower primary schools. The findings highlight the critical significance of ECE in promoting children's overall development. Nursery education helps children improve their handwriting, reading abilities, and general cognitive development. Children who have received nursery education have improved their abilities to handle pencils, form letters, and engage in social interactions, highlighting the significance of fine motor skills and socializing. Despite the good elements, difficulties in implementing ECE practices were found, such as differences in teaching styles, financial limits, and potential weariness due to prolonged school hours. The study suggests focused interventions to improve the effectiveness of ECE practices, ensure their connection with educational goals and maximize their influence on children's development. The study's findings show that respondents agree on the importance of nursery education in supporting holistic development, socialization, language competency, and conceptual comprehension. Challenges in nursery education, such as differences in teaching techniques and insufficient resources, highlight the need for comprehensive measures to address these challenges. Furthermore, parental engagement in home learning activities was revealed as an important factor affecting early education outcomes. Children who were engaged at home performed better in lower primary, emphasizing the value of a supportive family environment. Finally, the report suggests measures to enhance parental participation, changes in teaching methods through retraining, and age-appropriate enrolment. Future studies might concentrate on the involvement of parents, ECE policy practice, and the influence of ECE teachers on lower primary school learning results. These ideas are intended to help create a more favorable learning environment by encouraging holistic development and preparing children for success in succeeding academic levels.

Keywords: early childhood education, learning outcomes in lower primary schools, early childhood education practices, how ECE practices influence learning outcomes in lower primary schools

Procedia PDF Downloads 41
746 Impact of Unconditional Cash Transfer Scheme on the Food Security Status of the Elderly in Ekiti State, Nigeria

Authors: R. O. Babatunde, O. M. Igbalajobi, F. Matambalya

Abstract:

Moderate economic growth in developing and emerging countries has led to improvement in the food consumption and nutrition situation in the last two decades. Nevertheless, about 870 million people, a quarter of them from Sub-Saharan Africa, are still suffering from hunger worldwide. As part of measures to reduce widespread poverty and hunger, cash transfer programmes are now being implemented in many countries of the world. While nationwide cash transfer schemes are few in Sub-Saharan Africa generally, the available ones are concentrated in East and Southern Africa. Much of the available literature on social protection has focused on the poverty impact of cash transfer schemes at the household level, with the larger proportion originating from Latin America. By contrast, much less empirical work has been conducted on the poverty impact of cash transfers in Sub-Saharan Africa, let alone on the food security and nutrition impact. To fill this gap in knowledge, this paper examines the impact of cash transfer on food security in Nigeria. As a case study, the paper analysed the Ekiti State Cash Transfer Scheme (ECTS). ECTS is an unconditional transfer scheme which was established in 2011 to directly provide cash transfers to elderly persons aged 65 years and above in Ekiti State of Nigeria. Using survey data collected in 2013, we analysed the impact of the scheme on food availability and dietary diversity of the beneficiary households. Descriptive and Propensity Score Matching (PSM) techniques were used to estimate the Average Treatment Effect (ATE) and Average Treatment Effect on the Treated (ATT) among the beneficiary and control groups. Thereafter, a model to test for the impact of participation in the cash transfer scheme on calorie availability and dietary diversity was estimated. The results indicate that while households in the sample are clearly vulnerable, there were statistically significant differences between the beneficiary and control groups. For instance, monthly expenditure, calorie availability and dietary diversity were significantly larger among the beneficiary group, and consequently, the prevalence and depth of hunger were lower in this group. Econometric results indicate that the cash transfer has a positive and significant effect on food availability and dietary diversity in the households. Expanding the coverage of the present scheme to cover all eligible households in the country and incorporating cash transfer into a comprehensive hunger reduction policy would give it a greater impact in improving food security among the most vulnerable households in the country.
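
As a hedged sketch of the PSM step described above (not the study's code), the helper below estimates propensity scores with a logistic model, matches each beneficiary to the nearest-score control, and returns the ATT as the mean matched-pair difference in the outcome; variable and column names are assumptions.

```python
# Rough sketch of propensity score matching for the ATT: a logistic model
# estimates the probability of receiving the transfer, beneficiaries are
# matched to their nearest-score controls, and the ATT is the mean outcome
# difference across matched pairs. Names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_by_psm(X, treated, outcome):
    """X: covariates; treated: 0/1 participation; outcome: e.g. calorie availability."""
    scores = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t, c = np.where(treated == 1)[0], np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(scores[c].reshape(-1, 1))
    _, idx = nn.kneighbors(scores[t].reshape(-1, 1))
    return np.mean(outcome[t] - outcome[c][idx.ravel()])
```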

Keywords: calorie availability, cash transfers, dietary diversity, propensity score matching

Procedia PDF Downloads 384
745 Diselenide-Linked Redox Stimuli-Responsive Methoxy Poly(Ethylene Glycol)-b-Poly(Lactide-Co-Glycolide) Micelles for the Delivery of Doxorubicin in Cancer Cells

Authors: Yihenew Simegniew Birhan, Hsieh Chih Tsai

Abstract:

The recent advancements in synthetic chemistry and nanotechnology have fostered the development of different nanocarriers for enhanced intracellular delivery of pharmaceutical agents to tumor cells. Polymeric micelles (PMs), characterized by small size, appreciable drug loading capacity (DLC), better accumulation in tumor tissue via the enhanced permeability and retention (EPR) effect, and the ability to avoid detection and subsequent clearance by the mononuclear phagocyte (MNP) system, are well suited to improving the poor solubility, slow absorption and non-selective biodistribution of payloads embedded in their hydrophobic cores and, hence, to enhancing the therapeutic efficacy of chemotherapeutic agents. Recently, redox-responsive polymeric micelles have gained significant attention for the delivery and controlled release of anticancer drugs in tumor cells. In this study, we synthesized a redox-responsive diselenide bond containing amphiphilic polymer, Bi(mPEG-PLGA)-Se₂, from mPEG-PLGA and 3,3'-diselanediyldipropanoic acid (DSeDPA) using DCC/DMAP as coupling agents. The successful synthesis of the copolymer was verified by different spectroscopic techniques. Above the critical micelle concentration, the amphiphilic copolymer, Bi(mPEG-PLGA)-Se₂, self-assembled into stable micelles. The DLS data indicated that the hydrodynamic diameter of the micelles (123.9 ± 0.85 nm) was suitable for extravasation into tumor cells through the EPR effect. The drug loading content (DLC) and encapsulation efficiency (EE) of the DOX-loaded micelles were found to be 6.61 wt% and 54.9%, respectively. The DOX-loaded micelles showed an initial burst release followed by a sustained release trend, where 73.94% and 69.54% of the encapsulated DOX was released upon treatment with 6 mM GSH and 0.1% H₂O₂, respectively. The biocompatible nature of the Bi(mPEG-PLGA)-Se₂ copolymer was confirmed by a cell viability study. In addition, the DOX-loaded micelles exhibited significant inhibition against HeLa cells (44.46%) at a maximum dose of 7.5 µg/mL. The fluorescence microscope images of HeLa cells treated with 3 µg/mL (equivalent DOX concentration) revealed efficient internalization and accumulation of DOX-loaded Bi(mPEG-PLGA)-Se₂ micelles in the cytosol of cancer cells. In conclusion, the intelligent, biocompatible, and redox stimuli-responsive behavior of the Bi(mPEG-PLGA)-Se₂ copolymer marks the potential of diselenide-linked mPEG-PLGA micelles for the delivery and on-demand release of chemotherapeutic agents in cancer cells.
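
For readers unfamiliar with the two reported metrics, the conventional definitions of drug loading content and encapsulation efficiency are given below; the abstract reports the values (6.61 wt% and 54.9%) but not the formulas, so these standard forms are an assumption rather than the authors' stated method.

```latex
\mathrm{DLC}\ (\%) = \frac{m_{\text{drug in micelles}}}{m_{\text{drug-loaded micelles}}} \times 100,
\qquad
\mathrm{EE}\ (\%) = \frac{m_{\text{drug in micelles}}}{m_{\text{drug initially fed}}} \times 100
```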

Keywords: anticancer drug delivery, diselenide bond, polymeric micelles, redox-responsive

Procedia PDF Downloads 110
744 Relation of Consumer Satisfaction on Organization by Focusing on the Different Aspects of Buying Behavior

Authors: I. Gupta, N. Setia

Abstract:

Introduction: Buyer behavior is a sequence of practices or patterns that consumers follow before making a purchase. It begins when the consumer becomes aware of a need or desire for a product and concludes with the purchase transaction. Entrepreneurs cannot always simply shake hands with their target audience and get to know them. Research is often necessary, so every organization must carry out continuous research to understand and satisfy consumer needs. Aims and Objectives: The aim of the present study is to examine the different behaviors of the consumer, including pre-purchase, purchase, and post-purchase behavior. Materials and Methods: In order to obtain results, face-to-face interviews were held with 80 people, the majority of whom were female and of upper- or middle-class status. The prime source of data collection was primary. However, the study has also used the theoretical contributions of many researchers in their respective fields. Results: The majority of the respondents were females (70%) from the age group of 20-50. The collected data were analyzed through hypothesis-testing statistical techniques such as correlation analysis, simple regression analysis, and ANOVA, which rejected the null hypothesis that there is no relation between researching consumer behavior at different stages and organizational performance. The main finding of this study is that simply focusing on the buying stage is not enough to gain profits and reputation; rather, understanding the pre-purchase, purchase and post-purchase behavior of consumers plays a major role in organizational success. The outcomes demonstrated that organizations which address all three phases of purchasing behavior research are able to establish a stronger brand image than their competitors. In addition, such enterprises can observe customer behavior in a considerably more proficient manner. Conclusion: The analysis of consumer behavior presented in this study is an attempt to understand the factors affecting consumer purchasing behavior. This study has revealed that corporations which work on understanding buying behavior are more successful than those that focus only on selling products. As a result, these organizations perform well and grow rapidly, because consumers are the ones who can make or break a company. The face-to-face interviews clearly revealed that organizations whose consumers are satisfied, not just with the product but also with the company’s services, come out on top. The study does not target a particular class of audience; rather, it offers benefits to a wide audience, in particular to business organizations.
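
A small illustration of the three named tests (Pearson correlation, simple regression, one-way ANOVA) using SciPy; the survey scores below are placeholders, not the study's data.

```python
# Illustrative use of correlation, simple regression, and one-way ANOVA with
# scipy.stats; arrays are hypothetical survey scores, not the study's data.
import numpy as np
from scipy import stats

research_score = np.array([3, 4, 5, 2, 4, 5, 3, 4])          # attention to buyer behavior
performance = np.array([2.8, 3.9, 4.7, 2.1, 4.2, 4.9, 3.1, 4.0])  # perceived performance

r, p_corr = stats.pearsonr(research_score, performance)
slope, intercept, r_val, p_reg, se = stats.linregress(research_score, performance)
f_stat, p_anova = stats.f_oneway(performance[research_score <= 3],
                                 performance[research_score > 3])
print(r, p_corr, slope, p_reg, f_stat, p_anova)
```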

Keywords: consumer behavior, pre purchase, post purchase, consumer satisfaction

Procedia PDF Downloads 112
743 Osseointegration Outcomes Following Amputee Lengthening

Authors: Jason Hoellwarth, Atiya Oomatia, Anuj Chavan, Kevin Tetsworth, Munjed Al Muderis

Abstract:

Introduction: Percutaneous EndoProsthetic Osseointegration for Limbs (PEPOL) facilitates improved quality of life (QOL) and objective mobility for most amputees discontent with their traditional socket prosthesis (TSP) experience. Some amputees desiring PEPOL have residual bone much shorter than the currently marketed press-fit implant lengths of 14-16 cm, potentially a risk for failure to integrate. We report on the techniques used, the complications experienced, the management of those complications, and the overall mobility outcomes of seven patients who had femur distraction osteogenesis (DO) with a Freedom nail followed by PEPOL. Method: Retrospective evaluation of a prospectively maintained database identified nine patients (5 females) who had transfemoral DO in preparation for PEPOL with two years of follow-up after PEPOL. Six patients had traumatic causes of amputation, one had perinatal complications, one amputation was performed to manage necrotizing fasciitis, and one was performed as a result of osteosarcoma. Result: The average age at which DO commenced was 39.4±15.9 years, and seven patients had had their amputation more than ten years prior (average 25.5±18.8 years). The residual femurs, on average, started at 102.2±39.7 mm and were lengthened 58.1±20.7 mm, 98±45% of the goal (99±161% of the original bone length). Five patients (56%) had a complication requiring additional surgery: four events of inadequate regeneration were managed with continued lengthening to the desired goal followed by autograft placement harvested from contralateral femur reaming; one patient had the cerclage wires break, which required operative replacement. All patients had osseointegration performed at 355±123 days after the initial lengthening nail surgery. One patient had a K-level >2 before DO; at a mean of 3.4±0.6 (2.6-4.4) years following osseointegration, six patients had a K-level >2. The 6-Minute Walk Test remained unchanged (267±56 vs. 308±117 meters). Patient self-rating of prosthesis function, problems, and amputee situation did not significantly change from before DO to after osseointegration. Six patients required additional surgery following osseointegration: six to remove fixation plates placed to maintain distraction osteogenesis length at osseointegration, and two required irrigation and debridement for infection. Conclusion: Extremely short residual femurs, which make TSP use troublesome, can be lengthened with externally controlled telescoping nails and successfully achieve osseointegration. However, it is imperative to counsel patients that additional surgery to address inadequate regeneration or to remove painful hardware used to maintain fixation may be necessary. This may temper the amputee’s expectations before beginning a potentially arduous process.

Keywords: osseointegration, limb lengthening, quality of life, amputation

Procedia PDF Downloads 69
742 Axillary Evaluation with Targeted Axillary Dissection Using Ultrasound-Visible Clips after Neoadjuvant Chemotherapy for Patients with Node-Positive Breast Cancer

Authors: Naomi Sakamoto, Eisuke Fukuma, Mika Nashimoto, Yoshitomo Koshida

Abstract:

Background: Selective localization of the metastatic lymph node with a clip and removal of the clipped node together with the sentinel lymph nodes (SLN), known as targeted axillary dissection (TAD), reduces the false-negative rate (FNR) of SLN biopsy (SLNB) after neoadjuvant chemotherapy (NAC). For patients who achieve a nodal pathologic complete response (pCR), accurate staging of the axilla by TAD allows axillary lymph node dissection (ALND) to be omitted, decreasing postoperative arm morbidity without a negative effect on overall survival. This study aimed to investigate the ultrasound (US) identification rate and successful removal rate of two kinds of ultrasound-visible clips placed in metastatic lymph nodes during the TAD procedure. Methods: This prospective study was conducted in patients with clinically T1-3, N1, 2, M0 breast cancer undergoing NAC followed by surgery. A US-visible clip was placed in the suspicious lymph node under US guidance before neoadjuvant chemotherapy. Before surgery, a US examination was performed to evaluate the detection rate of the clipped node. During the surgery, the clipped node was removed using several localization techniques, including hook-wire localization, dye injection, or a fluorescence technique, followed by a dual-technique SLNB and resection of palpable nodes if present. For the fluorescence technique, after injection of 0.1-0.2 mL of indocyanine green dye (ICG) into the clipped node, ICG fluorescence imaging was performed using the Photodynamic Eye infrared camera (Hamamatsu Photonics K.K., Shizuoka, Japan). For the dye injection method, 0.1-0.2 mL of pyoktanin blue dye was injected into the clipped node. Results: A total of 29 patients were enrolled. Hydromark™ breast biopsy site markers (Hydromark, T3 shape; Devicor Medical Japan, Tokyo, Japan) were used in 15 patients, whereas an UltraCor™ Twirl™ breast marker (Twirl; C.R. Bard, Inc, NJ, USA) was placed in 14 patients. US identified the clipped node marked with the UltraCor Twirl in 100% (14/14) of cases and with the Hydromark in 93.3% (14/15) (p = ns). Successful removal of the clipped node marked with the UltraCor Twirl was achieved in 100% (14/14), whereas the node marked with the Hydromark was removed in 80% (12/15) (p = ns). Conclusions: The ultrasound identification rate differed between the two types of ultrasound-visible clips, which also affected the successful removal rate of the clipped nodes. Labelling the positive node with a highly US-visible clip allowed successful TAD.

Keywords: breast cancer, neoadjuvant chemotherapy, targeted axillary dissection, breast tissue marker, clip

Procedia PDF Downloads 66
741 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing due to their capability of providing moderate security and relieving the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded due to a fixed in-phase cross-correlation (IPCC) value. To address the above problems, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code is derived from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network and enhance the whole network performance. In consideration of system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler for broadcasting the transmitted optical signals to the input port of each optical receiver. Note that the parameter N for the PMS code is the code length. In addition, the proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. In order to simplify the analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same. Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' has equal probability. Subsequently, by taking into account the factors of phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with the performance of previous SAC FO-CDMA networks. The numerical results show that the proposed network improves performance by about 25% over networks using other codes at a BER of 10⁻⁹. This is because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes has an opportunity to be applied in next-generation optical networks.
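
The abstract does not give the PMS construction itself, so the sketch below is background only: it generates a standard M-sequence with a three-stage linear-feedback shift register and measures the in-phase cross-correlation (IPCC) between two cyclically shifted codewords, the quantity the proposed codes are designed to control. The register length, taps, and shift are illustrative.

```python
# Background sketch: a standard M-sequence from a 3-stage LFSR and the
# in-phase cross-correlation (IPCC) of two cyclically shifted codewords.
# This is not the PMS construction, which is not given in the abstract.
import numpy as np

def m_sequence(taps=(3, 2), length=7, seed=(1, 0, 0)):
    """Binary M-sequence of period 2**r - 1 from an r-stage LFSR (here r = 3)."""
    state, out = list(seed), []
    for _ in range(length):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(out)

base = m_sequence()
code_a, code_b = base, np.roll(base, 1)      # two shifted codewords
ipcc = int(np.sum(code_a & code_b))          # in-phase cross-correlation
print(base, ipcc)
```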

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 384
740 Nondestructive Inspection of Reagents under High Attenuated Cardboard Box Using Injection-Seeded THz-Wave Parametric Generator

Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase

Abstract:

In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combatting this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. Therefore, it is reasonable to investigate non-destructive mail inspection techniques that use THz waves. For this reason, in this work, we tried to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system using is-TPG consisted of two non-linear crystals for emission and detection of THz waves. A micro-chip Nd:YAG laser and a continuous-wave tunable external cavity diode laser were used as the pump and seed source, respectively. The pump beam and seed beam were injected into the LiNbO₃ crystal, satisfying the noncollinear phase matching condition, in order to generate a high-power THz wave. The emitted THz wave was irradiated onto the sample, which was raster scanned by the x-z stage while changing the frequencies, and we obtained multispectral images. The transmitted THz wave was then focused onto another crystal for detection and up-converted to a near-infrared detection beam based on nonlinear optical parametric effects, and the detection beam intensity was measured using an infrared pyroelectric detector. It was difficult to identify reagents in a cardboard box because of high noise levels. In this work, we introduce improvements for noise reduction and image clarification, and the intensity of the near-infrared detection beam was converted correctly to the intensity of the THz wave. A Gaussian spatial filter is also introduced for a clearer THz image. Through these improvements and improved analysis methods, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which attenuate the signal by 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides and may also be applied to cases where illicit drugs are hidden in a box and multiple reagents are mixed together. Moreover, THz spectroscopic imaging can be achieved through even thicker obstacles by introducing an NIR detector with higher sensitivity.
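
A hedged sketch of the post-processing steps named above, namely Gaussian spatial filtering of each frequency slice followed by principal component analysis across frequencies; the cube size, sigma, and number of components are assumptions, not the authors' settings.

```python
# Illustration of Gaussian spatial filtering plus PCA on a multispectral THz
# cube; the random cube stands in for measured transmittance images.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA

cube = np.random.rand(64, 64, 10)             # placeholder: x-z pixels, 10 THz frequencies
smoothed = np.stack([gaussian_filter(cube[:, :, k], sigma=1.0)
                     for k in range(cube.shape[2])], axis=2)

pixels = smoothed.reshape(-1, cube.shape[2])  # one spectrum per pixel
scores = PCA(n_components=3).fit_transform(pixels)
component_maps = scores.reshape(64, 64, 3)    # maps used to separate reagents
print(component_maps.shape)
```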

Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging

Procedia PDF Downloads 177
739 Gray’s Anatomy for Students: First South Asia Edition Highlights

Authors: Raveendranath Veeramani, Sunil Jonathan Holla, Parkash Chand, Sunil Chumber

Abstract:

Gray’s Anatomy for Students has been a well-appreciated book among undergraduate students of anatomy in Asia. However, the current curricular requirements of anatomy require a more focused and organized approach. The editors of the first South Asia edition of Gray’s Anatomy for Students hereby highlight the modifications and importance of this edition. There is an emphasis on active learning by making the clinical relevance of anatomy explicit. Learning anatomy in context has been fostered by the association between anatomists and clinicians, in keeping with the emerging integrated curriculum of the 21st century. The language has been simplified to aid students who have studied in the vernacular. The original illustrations have been retained, and a few illustrations have been added. More figure numbers are mentioned in the text to encourage students to refer to the illustrations while learning. The text has been made more student-friendly by adding generalizations, classifications and summaries. There are useful review materials at the beginning of the chapters, which include digital resources for self-study. There are updates on imaging techniques to encourage students to appreciate the importance of essential knowledge of the relevant anatomy to interpret images, and due emphasis has been laid on dissection. Additional importance has been given to the cranial nerves by describing their relevant details with several additional illustrations and flowcharts. This new edition includes innovative features such as set inductions, outlines for subchapters and flowcharts to facilitate learning. Set inductions are mostly clinical scenarios that create interest in the need to study anatomy for the healthcare professions. The outlines are a modern, multimodal facilitating approach to various topics that empowers students to explore content and direct their learning, and they include learning objectives and material for review. The components of the outline encourage the student to be aware of the need to create solutions to clinical problems. The outlines help students direct their learning to recall facts, demonstrate and analyze relationships, use reason to explain concepts, appreciate the significance of structures and their relationships, and apply anatomical knowledge. The 'structures to be identified in a dissection' are given as Levels I, II and III, which represent the 'must know', 'desirable to know' and 'nice to know' content, respectively. The flowcharts have been added to give an overview of the course of a structure, recapitulate important details about structures, and serve as an aid to recall. There has been a great effort to balance the need for content that would enable students to understand concepts with the need to provide the basic material for the current condensed curriculum.

Keywords: Grays anatomy, South Asia, human anatomy, students anatomy

Procedia PDF Downloads 201
738 Composing Method of Decision-Making Function for Construction Management Using Active 4D/5D/6D Objects

Authors: Hyeon-Seung Kim, Sang-Mi Park, Sun-Ju Han, Leen-Seok Kang

Abstract:

As BIM (Building Information Modeling) application continually expands, the visual simulation techniques used for facility design and construction process information are becoming increasingly advanced and diverse. For building structures, BIM application is design-oriented to utilize 3D objects for conflict management, whereas for civil engineering structures, the usability of nD object-oriented construction stage simulation is important in construction management. Simulations of 5D and 6D objects, for which cost and resources are linked along with process simulation in 4D objects, are commonly used, but they do not provide a decision-making function for process management problems that occur on site because they mostly focus on the visual representation of current status for process information. In this study, an nD CAD system is constructed that facilitates an optimized schedule simulation that minimizes process conflict, a construction duration reduction simulation according to execution progress status, optimized process plan simulation according to project cost change by year, and optimized resource simulation for field resource mobilization capability. Through this system, the usability of conventional simple simulation objects is expanded to the usability of active simulation objects with which decision-making is possible. Furthermore, to close the gap between field process situations and planned 4D process objects, a technique is developed to facilitate a comparative simulation through the coordinated synchronization of an actual video object acquired by an on-site web camera and VR concept 4D object. This synchronization and simulation technique can also be applied to smartphone video objects captured in the field in order to increase the usability of the 4D object. Because yearly project costs change frequently for civil engineering construction, an annual process plan should be recomposed appropriately according to project cost decreases/increases compared with the plan. In the 5D CAD system provided in this study, an active 5D object utilization concept is introduced to perform a simulation in an optimized process planning state by finding a process optimized for the changed project cost without changing the construction duration through a technique such as genetic algorithm. Furthermore, in resource management, an active 6D object utilization function is introduced that can analyze and simulate an optimized process plan within a possible scope of moving resources by considering those resources that can be moved under a given field condition, instead of using a simple resource change simulation by schedule. The introduction of an active BIM function is expected to increase the field utilization of conventional nD objects.
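
As a toy sketch of the "technique such as genetic algorithm" mentioned above (one possible shape, not the system's implementation), the snippet below evolves an assignment of activities to years so that yearly cost stays within a revised budget while the set of activities, and hence the duration, is unchanged; all costs and budgets are invented.

```python
# Toy genetic algorithm: each chromosome assigns activities to years; the
# fitness penalises yearly budget overruns. Costs/budgets are illustrative.
import random

ACT_COST = [30, 20, 50, 40, 10]          # cost of five activities
BUDGET = [60, 60, 60]                    # revised yearly budgets
random.seed(0)

def fitness(plan):                       # plan[i] = year assigned to activity i
    yearly = [0] * len(BUDGET)
    for act, year in enumerate(plan):
        yearly[year] += ACT_COST[act]
    return -sum(max(0, c - b) for c, b in zip(yearly, BUDGET))   # 0 is best

pop = [[random.randrange(len(BUDGET)) for _ in ACT_COST] for _ in range(30)]
for _ in range(100):                     # selection, crossover, mutation
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(ACT_COST))
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:
            child[random.randrange(len(ACT_COST))] = random.randrange(len(BUDGET))
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```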

Keywords: 4D, 5D, 6D, active BIM

Procedia PDF Downloads 276
737 The Dynamics of Planktonic Crustacean Populations in an Open Access Lagoon, Bordered by Heavy Industry, Southwest, Nigeria

Authors: E. O. Clarke, O. J. Aderinola, O. A. Adeboyejo, M. A. Anetekhai

Abstract:

Aims: The study is aimed at establishing the influence of some physical and chemical parameters on the abundance, distribution pattern and seasonal variations of the planktonic crustacean populations. Place and Duration of Study: A pioneering investigation into the dynamics of planktonic crustacean populations in Ologe lagoon was carried out from January 2011 to December 2012. Study Design: The study covered identification, temporal abundance, spatial distribution and diversity of the planktonic crustacea. Methodology: Standard techniques were used to collect samples from eleven stations covering five proximal satellite towns (Idoluwo, Oto, Ibiye, Obele, and Gbanko) bordering the lagoon. Data obtained were statistically analyzed using linear regression and hierarchical clustering. Results: Thirteen (13) planktonic crustacean populations were identified. Total percentage abundance was highest for the Bosmina species (20%) and lowest for the Polyphemus species (0.8%). Pearson’s correlation coefficients (“r” values) between the total planktonic crustacean population and some physical and chemical parameters showed positive correlations of low significance with salinity (r = 0.042, sig = 0.184) and with surface water dissolved oxygen (r = 0.299, sig = 0.155). Linear regression plots indicated that the total population of planktonic crustacea was mainly influenced by, and increased only with, surface water temperature (Rsq = 0.791) and conductivity (Rsq = 0.589). The total population of planktonic crustacea had a near-neutral (zero) correlation with surface water dissolved oxygen and thus did not change significantly with the level of surface water dissolved oxygen. The correlations were positive with NO3-N (midstream) at Ibiye (Rsq = 0.022) and (downstream) at Gbanko (Rsq = 0.013), PO4-P at Ibiye (Rsq = 0.258), K at Idoluwo (Rsq = 0.295), and SO4-S at Oto (Rsq = 0.094) and Gbanko (Rsq = 0.457). The Berger-Parker Dominance Index (BPDI) showed that the most dominant species was the Bosmina species (BPDI = 1.000), followed by the Calanus species (BPDI = 1.254). Clusters by squared Euclidean distances, using average linkage between groups, showed proximities transcending the borders of genera. Conclusion: The results revealed that planktonic crustacean populations in Ologe lagoon undergo seasonal perturbations, were highly influenced by nutrient, metal and organic matter inputs from the river Owoh, the Agbara industrial estate and surrounding farmlands, and were patchy in spatial distribution.
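
For reference, a sketch of the conventional Berger-Parker dominance calculation (d = N_max / N_total) and a squared-Euclidean, average-linkage cluster analysis of the kind applied above; all counts and station profiles below are placeholders, not the lagoon data.

```python
# Conventional Berger-Parker dominance index plus an average-linkage,
# squared-Euclidean hierarchical clustering of station abundance profiles.
# All numbers are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

counts = np.array([200, 160, 90, 40, 8])                 # hypothetical abundances
berger_parker = counts.max() / counts.sum()

# rows = sampling stations, columns = species abundances (illustrative)
stations = np.array([[20, 15, 9, 4, 1],
                     [25, 18, 8, 3, 0],
                     [5, 30, 20, 10, 2]])
tree = linkage(stations, method="average", metric="sqeuclidean")
groups = fcluster(tree, t=2, criterion="maxclust")
print(berger_parker, groups)
```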

Keywords: diversity, dominance, perturbations, richness, crustacea, lagoon

Procedia PDF Downloads 721
736 Conservation of Ibis Statue Made of Composite Materials Dating to 3rd Intermediate Period - Late Period

Authors: Badawi Mahmoud, Eid Mohamed, Salih Hytham, Tahoun Mamdouh

Abstract:

Cultural properties are made of many types of materials and can be classified broadly into three categories: organic cultural properties, which have their origin in the animal and plant kingdoms; inorganic cultural properties, made of metal or stone; and those made of both organic and inorganic materials, such as metal combined with wood. Most cultural properties are made from several materials rather than from a single one. Cultural properties reveal a great deal of information about the past and often have great artistic value, so it is important to extend their life and preserve them for future generations wherever possible. The study of metallic relics usually includes examining the techniques used to make them and the extent to which they have corroded. The conservation science of archaeological artifacts demands an accurate grasp of the interior of the object, which cannot be seen; this is essential to elucidate the method of manufacture and provides information that is important for cleaning, restoration, and other conservation processes. Conservation treatment does not guarantee the prevention of further degradation of an archaeological artifact; rather, it is an attempt to inhibit further degradation as much as possible. Ancient metallic artifacts are made of many materials: some consist of a single metal, such as iron, copper, or bronze, while composite relics combine several metals. Almost all metals (except gold) corrode while they rest underground. Corrosion is caused by the interaction of oxygen, water, and various ions, with chloride ions playing a major role in its advance. Excavated metallic relics are usually examined scientifically as to their structure and materials and treated for preservation before being displayed or placed in storage. The statue studied here, a statuette representing the god Djehuty in the form of an ibis, has a wooden body with bronze legs and beak; it was broken into three separate parts and came to the Grand Egyptian Museum – Conservation Centre (GEM-CC) Inorganic Lab. Remains of black resin were found on it, possibly associated with a mummy, and the statue is mounted on a wooden base bearing ancient inscriptions; on this evidence the archaeological unit dated the piece to the 3rd Intermediate Period - Late Period. This study aims to carry out the conservation of the statue, to inhibit further degradation as much as possible, and to fill the fractures and cracks in the wooden part.

Keywords: inorganic materials, metal, wood, corrosion, ibis

Procedia PDF Downloads 254
735 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems

Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia

Abstract:

The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases together with their associated business processes. One approach for offering SaaS is basing the system's architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations may then differ according to specific requirements, met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, the transactions need to strictly adhere to the tenant's known business processes. This paper argues that security in cloud databases should not be considered as an isolated issue; rather, it should be included in the initial phases of the database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed. Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant based SaaS. This would then assist cloud service providers to define, implement, and manage security policies as per tenant customisation requirements whilst assuring security for the customers' data.
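One concrete example of the kind of requirement the paper derives, mandatory tenant scoping of every data access to shared tables, can be sketched as follows. The class, table and column names are hypothetical; the point is only that an unscoped, cross-tenant query can never be issued by the calling code.

```python
import sqlite3

class TenantScopedRepository:
    """Illustrative guard: every query is forced to filter on tenant_id,
    so one tenant can never read another tenant's rows in shared tables."""

    def __init__(self, connection, tenant_id):
        self.conn = connection
        self.tenant_id = tenant_id

    def fetch_orders(self):
        # The tenant_id predicate is added by the repository, never by the
        # caller, which removes the chance of an unscoped cross-tenant query.
        cur = self.conn.execute(
            "SELECT id, item, amount FROM orders WHERE tenant_id = ?",
            (self.tenant_id,),
        )
        return cur.fetchall()

# Demo with an in-memory database shared by two tenants.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, tenant_id TEXT, item TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, "tenant_a", "widget", 10.0), (2, "tenant_b", "gadget", 25.0)],
)

repo_a = TenantScopedRepository(conn, "tenant_a")
print(repo_a.fetch_orders())   # only tenant_a's rows are visible
```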

Keywords: cloud computing, data management, multi-tenancy, requirements, security

Procedia PDF Downloads 156
734 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are distinctive features of ancient building architecture worldwide and carry special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. The HBIM modeling of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely draw on documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. The masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are input graphically. Hence, future investigation is necessary to establish a methodology for generating parametric masonry components automatically. Such components would be developed algorithmically, respecting mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of existing research on HBIM modeling of masonry structural elements and of the latest approaches to achieving parametric models that offer both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters focused on the keywords "HBIM and Masonry" from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software, which extracts the main keywords in these studies to retrieve the relevant works and calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review examined the studies with the highest frequency of occurrence and the strongest links with the topic, according to the VOSviewer results. The qualitative review focused on the latest approaches and the future directions proposed in these studies. The findings of this paper can serve as a valuable reference for researchers and BIM specialists seeking to make more accurate and reliable HBIM models of historic masonry buildings.
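As a toy illustration of the algorithmically generated parametric components the review calls for, the sketch below lays out the voussoir centres of a semicircular arch from three parameters (span, ring thickness, number of voussoirs). The geometry and the assumption that the intrados radius equals half the span are illustrative simplifications; a real HBIM workflow would drive such parameters from survey data and produce solid geometry rather than printed coordinates.

```python
import math

def semicircular_arch_voussoirs(span, thickness, n_voussoirs):
    """Return the centre point and in-plane rotation of each voussoir of a
    semicircular arch whose intrados radius is span / 2 (an assumption)."""
    r_intrados = span / 2.0
    r_centre = r_intrados + thickness / 2.0      # centroid radius of the ring
    step = math.pi / n_voussoirs                 # angular width of one voussoir
    voussoirs = []
    for i in range(n_voussoirs):
        theta = step * (i + 0.5)                 # angle of the voussoir centre
        x = r_centre * math.cos(theta)
        y = r_centre * math.sin(theta)
        voussoirs.append({"index": i, "x": round(x, 3), "y": round(y, 3),
                          "rotation_deg": round(math.degrees(theta) - 90.0, 3)})
    return voussoirs

for v in semicircular_arch_voussoirs(span=3.0, thickness=0.25, n_voussoirs=9):
    print(v)
```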

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 165
733 Multicollinearity and MRA in Sustainability: Application of the Raise Regression

Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez

Abstract:

Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), which is a specific application of multiple linear regression analysis. This methodology allows analyzing how the effect of one independent variable is moderated by a second independent variable, by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the cross-product (moderator) term is often highly correlated with its constitutive terms, so serious multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: the variances of the estimators may be inflated; regressors may appear non-significant when they probably are significant, alongside a very high coefficient of determination; coefficients may take incorrect signs; and results become highly sensitive to small changes in the dataset. Finally, the strong relationship among explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Transferred to moderated analysis, these consequences may imply that it is not worth including an interaction term that may be distorting the model. It is therefore important to manage the problem with a methodology that allows reliable results to be obtained. A review of the works that applied MRA in the ten top journals of the field shows that multicollinearity is mostly disregarded: less than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the possible application of recent methodologies to MRA. In particular, raise regression is analyzed. This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables the problem can be mitigated. Raise regression maintains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are also maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test and prediction). The proposal is applied to data from countries of the European Union for the last available year, regarding greenhouse gas emissions, per capita GDP and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called "subgroup regression analysis." The main conclusion of this work is that applying new techniques to the field can improve the results of the analysis in a substantial way. In particular, the use of raise regression mitigates severe multicollinearity problems, so the researcher is able to rely on the interaction term when interpreting the results of a particular study.
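A minimal numerical sketch of the idea follows, assuming the common formulation of the raise estimator in which the problematic regressor (here the interaction term) is replaced by x + λe, where e is the residual of regressing it on the constitutive terms and λ ≥ 0 is the raising factor. The simulated data, the choice of λ and the VIF diagnostic are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated moderated regression: non-centred x1, x2 make the interaction
# term highly correlated with its constitutive terms.
x1 = rng.normal(loc=5.0, scale=1.0, size=n)
x2 = rng.normal(loc=3.0, scale=1.0, size=n)
inter = x1 * x2                      # cross-product (moderator) term
y = 1.0 + 0.8 * x1 + 0.5 * x2 + 0.3 * inter + rng.normal(scale=0.5, size=n)

def vif(target, others):
    """Variance inflation factor of 'target' given the other regressors."""
    X = np.column_stack([np.ones(len(target))] + others)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    r2 = 1.0 - resid.var() / target.var()
    return 1.0 / (1.0 - r2)

print("VIF of interaction before raising:", round(vif(inter, [x1, x2]), 2))

# Raise the interaction term: add lambda times the residual of regressing the
# interaction on the constitutive terms (assumed formulation; lambda >= 0).
lam = 2.0
X12 = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X12, inter, rcond=None)
e = inter - X12 @ b
inter_raised = inter + lam * e

print("VIF of interaction after raising: ", round(vif(inter_raised, [x1, x2]), 2))
```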

Keywords: multicollinearity, MRA, interaction, raise

Procedia PDF Downloads 104
732 Economic Efficiency of Cassava Production in Nimba County, Liberia: An Output-Oriented Approach

Authors: Kollie B. Dogba, Willis Oluoch-Kosura, Chepchumba Chumo

Abstract:

In Liberia, many agricultural households cultivate cassava either for subsistence or to generate farm income. Cassava farming is concentrated in Nimba, a north-eastern county that borders two other economies, the Republics of Cote D'Ivoire and Guinea. With high demand for cassava output and products in emerging Asian markets, coupled with the objective of Liberia's agriculture policies to increase the competitiveness of valued agricultural crops, there is a need to examine the level of resource-use efficiency for many agricultural crops. However, information on the efficiency of many crops, including cassava, is scarce. Hence, this study applies an output-oriented approach to assess the economic efficiency of cassava farmers in Nimba County, Liberia. A multi-stage sampling technique was employed to generate the sample for the study. From 216 cassava farmers, data related to on-farm attributes and socio-economic and institutional factors were collected. Stochastic frontier production and revenue models with translog functional forms were used to determine the level of revenue efficiency and its determinants. The results showed that most of the cassava farmers are male (60%). Many of the farmers are either married, engaged or living together with a spouse (83%), with a mean household size of nine persons. Farmland is predominantly obtained by inheritance (95%), average farm size is 1.34 hectares, and most cassava farmers did not access agricultural credit (76%) or extension services (91%). The mean cassava output per hectare is 1,506.02 kg, corresponding to an average revenue of L$23,551.16 (Liberian dollars). Empirical results showed that the revenue efficiency of cassava farmers varies from 0.1% to 73.5%, with a mean revenue efficiency of 12.9%. This indicates that, on average, there is a vast potential of 87.1% to increase the economic efficiency of cassava farmers in Nimba by improving technical and allocative efficiencies. Among the significant determinants of revenue efficiency, age and group membership had negative effects, while farming experience, access to extension, formal education, and the average wage rate had positive effects. The study recommends setting up and incentivizing farmer field schools for cassava farmers, primarily to share farming experiences and to learn robust cultivation techniques of sustainable agriculture. Also, farm managers and farmers should consider a fixed wage rate in labor contracts for all stages of cassava farming.
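The translog specification and the notion of revenue efficiency used above can be made concrete with a short sketch. It is illustrative only: a genuine stochastic frontier is estimated by maximum likelihood with a composed error term, whereas here a corrected-OLS shift stands in for the frontier, and the farm data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Simulated farm-level data standing in for the survey: land (ha), labour
# (person-days) and cassava revenue (L$); values are purely illustrative.
land = rng.uniform(0.5, 3.0, size=n)
labour = rng.uniform(30.0, 150.0, size=n)
ln_r = (6.0 + 0.6 * np.log(land) + 0.4 * np.log(labour)
        + rng.normal(scale=0.10, size=n)          # random noise
        - rng.exponential(scale=0.30, size=n))    # one-sided inefficiency
revenue = np.exp(ln_r)

ln_x1, ln_x2 = np.log(land), np.log(labour)

# Translog design matrix: constant, logs, half-squared logs and the cross term.
X = np.column_stack([np.ones(n), ln_x1, ln_x2,
                     0.5 * ln_x1**2, 0.5 * ln_x2**2, ln_x1 * ln_x2])

# Corrected OLS: fit by least squares, then shift the intercept up by the
# largest residual so the frontier bounds every observation from above.
beta, *_ = np.linalg.lstsq(X, np.log(revenue), rcond=None)
residuals = np.log(revenue) - X @ beta
ln_frontier = X @ beta + residuals.max()

efficiency = revenue / np.exp(ln_frontier)         # observed / frontier revenue
print("mean revenue efficiency:", round(efficiency.mean(), 3))
```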

Keywords: economic efficiency, frontier production and revenue functions, Nimba County, Liberia, output-oriented approach, revenue efficiency, sustainable agriculture

Procedia PDF Downloads 127
731 Gait Analysis in Total Knee Arthroplasty

Authors: Neeraj Vij, Christian Leber, Kenneth Schmidt

Abstract:

Introduction: Total knee arthroplasty is a common procedure, and it is well known that the biomechanics of the knee do not fully return to their normal state afterwards. Motion analysis has been used to study the biomechanics of the knee after total knee arthroplasty. The purpose of this scoping review is to summarize the current use of gait analysis in total knee arthroplasty and to identify the preoperative motion analysis parameters for which a systematic review aimed at determining reliability and validity may be warranted. Materials and Methods: This IRB-exempt scoping review strictly followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Five search engines were searched for a total of 279 articles. Articles underwent title and abstract screening followed by full-text screening. Included articles were placed in the following sections: the role of gait analysis as a research tool for operative decisions; other research applications for motion analysis in total knee arthroplasty; gait analysis as a tool for predicting radiologic outcomes; and gait analysis as a tool for predicting clinical outcomes. Results: Eleven articles studied gait analysis as a research tool for operative decisions; motion analysis is currently used to study surgical approaches, surgical techniques, and implant choice. Five articles studied other research applications for motion analysis in total knee arthroplasty, which include the role of unicompartmental knee arthroplasty and novel physical therapy protocols aimed at optimizing post-operative care. Two articles studied motion analysis as a tool for predicting radiographic outcomes: preoperative gait analysis has identified parameters that can predict postoperative tibial component migration. Fifteen articles studied motion analysis in conjunction with clinical scores. Conclusions: There is a broad range of applications within the research domain of total knee arthroplasty, and the potential range is likely larger. However, the current literature is limited by vague definitions of 'gait analysis' or 'motion analysis' and a limited number of articles with preoperative and postoperative functional and clinical measures. Knee adduction moment, knee adduction impulse, total knee range of motion, varus angle, cadence, stride length, and velocity have the potential for integration into composite clinical scores. A systematic review aimed at determining the validity, reliability, sensitivities, and specificities of these variables is warranted.
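Of the parameters listed in the conclusions, the knee adduction impulse is commonly computed as the time integral of the adduction moment over the stance phase, which makes it straightforward to derive from motion-analysis output. A minimal sketch with made-up numbers (real moment curves are biphasic and come from inverse dynamics):

```python
import numpy as np

# Hypothetical knee adduction moment during stance, sampled at 100 Hz and
# normalised to body mass (N*m/kg); a simplified single-hump curve.
t = np.linspace(0.0, 0.6, 61)                       # 0.6 s stance phase
adduction_moment = 0.45 * np.sin(np.pi * t / 0.6)

# Knee adduction impulse = time integral of the moment over stance
# (N*m*s/kg), computed here with the trapezoidal rule.
dt = np.diff(t)
adduction_impulse = np.sum(dt * (adduction_moment[:-1] + adduction_moment[1:]) / 2.0)

print(f"peak adduction moment : {adduction_moment.max():.3f} N*m/kg")
print(f"adduction impulse     : {adduction_impulse:.3f} N*m*s/kg")
```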

Keywords: motion analysis, joint replacement, patient-reported outcomes, knee surgery

Procedia PDF Downloads 94
730 The Coexistence of Creativity and Information in Convergence Journalism: Pakistan's Evolving Media Landscape

Authors: Misha Mirza

Abstract:

In recent years, the definition of journalism in Pakistan has changed, and so have the mindset of people and their approach towards a news story. For the audience, news has become more interesting than a drama or a film. This research thus provides an insight into Pakistan's evolving media landscape. It tries not only to bring forth the outcomes of cross-platform cooperation between print and broadcast journalism but also to give an insight into the interactive data visualization techniques being used. Storytelling in Pakistani journalism has evolved from depicting merely the truth to tweaking, fabricating and producing docu-dramas. The paper aims to look into how news is translated into a visual. Pakistan possesses a diverse cultural heritage, and by engaging audiences through media, this history translates into the storytelling platforms of today. The paper explains how journalists are thriving in a converging media environment and provides an analysis of the narratives in television talk shows today. 'Jack of all, master of none' is being challenged by journalists today: one has to be a quality information gatherer and an effective storyteller at the same time. Are journalists really looking more into what sells rather than what matters? Express Tribune is a very popular news platform among the youth; not only is its newspaper more attractive than the competitors', but its style of narrative and interactive web stories also leads to well-rounded news. Interviews are used as the basic methodology to gain insight into how data visualization is carried out. The quest to find the difference between the visualization of information and the visualization of knowledge has led the author to delve into the work of David McCandless in his book 'Knowledge is Beautiful'. Journalism in Pakistan has evolved from information to a combination of knowledge, infotainment and comedy. What is criticized most by society most often becomes the breaking news. Circulation in today's world is carried out in cultural and social networks. In recent times there have been many examples of people gaining overnight popularity by releasing songs with substandard lyrics or senseless videos, perhaps because creativity has taken over information. This paper thus discusses the various platforms of convergence journalism from Pakistan's perspective. The study concludes by showing how Pakistani pop-culture truck art coexists with all the platforms of convergent journalism. The changing media landscape thus challenges the basic rules of journalism. The slapstick humor and 'jhatka' in Pakistani talk shows have evolved from Pakistani truck art poetry. Mobile journalism has overtaken all the other mediums of journalism; however, Pakistani culture coexists with the converging landscape.

Keywords: convergence journalism in Pakistan, data visualization, interactive narrative in Pakistani news, mobile journalism, Pakistan's truck art culture

Procedia PDF Downloads 284
729 An Informative Marketing Platform: Methodology and Architecture

Authors: Martina Marinelli, Samanta Vellante, Francesco Pilotti, Daniele Di Valerio, Gaetanino Paolone

Abstract:

Any development in web marketing technology requires changes in information engineering to identify instruments and techniques suitable for the production of software applications for informative marketing. Moreover, for large web solutions, designing an interface that enables human interaction is a complex process that must bridge informative marketing requirements and the developed solution. A user-friendly interface in web marketing applications is crucial for a successful business. The paper introduces mkInfo - a software platform that implements informative marketing. Informative marketing is a new interpretation of marketing which places information at the center of every marketing action. The creative team includes software engineering researchers who have recently authored an article on automatic code generation. The authors have created the mkInfo software platform to generate informative marketing web applications. For each web application, it is possible to automatically implement an opt-in page, a landing page, a sales page, and a thank-you page: one only needs to insert the content. mkInfo implements an autoresponder to send mail according to a predetermined schedule. The mkInfo platform also includes e-commerce for a product or service. A stakeholder can access any opt-in page and get basic information about a product or service. If he wants to know more, he will need to provide an e-mail address to access a landing page that will trigger an e-mail sequence providing him with complete information about the product or the service. From this point on, the stakeholder becomes a user and is able to purchase the product or related services through the mkInfo platform. This paper suggests a possible definition for Informative Marketing, illustrates its basic principles, and finally details the mkInfo platform that implements it. The paper also offers some informative marketing models, which are implemented in the mkInfo platform. Informative marketing can be applied to products or services; it is necessary to realize a web application for each product or service. The mkInfo platform enables the product or service producer to send information concerning a specific product or service to all stakeholders. In conclusion, the technical contributions of this paper are: a different interpretation of marketing based on information; a modular architecture for web applications, particularly those with standard features such as information storage, exchange, and delivery; multiple models to implement informative marketing; and a software platform enabling the implementation of such models in a web application. Future research aims to enable stakeholders to provide information about a product or a service, so that the information gathered includes both the producer's and the stakeholders' points of view. The purpose is to create an all-inclusive management system of the knowledge regarding a specific product or service: a system that includes everything about the product or service and is able to address even unexpected questions.
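The autoresponder described above amounts to a schedule of messages keyed to the opt-in date. The sketch below is not mkInfo code (the platform's internals are not given in the abstract); it only illustrates one plausible data structure and dispatch check for such a sequence.

```python
from datetime import date, timedelta

# Hypothetical e-mail sequence: offsets in days from the opt-in date.
SEQUENCE = [
    {"offset_days": 0, "subject": "Welcome - basic product information"},
    {"offset_days": 2, "subject": "How the product solves your problem"},
    {"offset_days": 5, "subject": "Complete technical details"},
    {"offset_days": 9, "subject": "Offer: purchase through the platform"},
]

def messages_due(opt_in_date, today):
    """Return the subjects of messages whose send date falls on 'today'."""
    return [m["subject"] for m in SEQUENCE
            if opt_in_date + timedelta(days=m["offset_days"]) == today]

# An opt-in on 1 March checked on 6 March hits the day-5 message;
# an opt-in on 2 March has nothing due on that day.
print(messages_due(date(2024, 3, 1), date(2024, 3, 6)))   # ['Complete technical details']
print(messages_due(date(2024, 3, 2), date(2024, 3, 6)))   # []
```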

Keywords: informative marketing, opt in page, software platform, web application

Procedia PDF Downloads 127
728 Luminescent Dye-Doped Polymer Nanofibers Produced by Electrospinning Technique

Authors: Monica Enculescu, A. Evanghelidis, I. Enculescu

Abstract:

Among the numerous methods for obtaining polymer nanofibers, the electrospinning technique stands out owing to the growing interest driven by its proven utility, which has led to continuous development and improvement of the method and to the appearance of novel materials. In particular, the production of polymeric nanofibers in which different dopants are introduced has been studied intensively in recent years because of the increased interest in obtaining functional electrospun nanofibers. Electrospinning is a facile method of obtaining polymer nanofibers with diameters from tens of nanometers to micrometric sizes that are cheap, flexible, scalable, functional and biocompatible. Besides their multiple applications in medicine, polymeric nanofibers obtained by electrospinning permit manipulation of light at nanometric dimensions when doped with organic dyes or different nanoparticles. It is a simple technique that uses an electric field to draw fine polymer nanofibers from solutions and does not require complicated devices or high temperatures. Different morphologies of the electrospun nanofibers can be obtained for the same polymeric host when different parameters of the electrospinning process are used. Consequently, tunable optical properties of the electrospun nanofibers (e.g. changing the wavelength of the emission peak) can be obtained by varying the parameters of the fabrication method. We focus on obtaining doped polymer nanofibers with enhanced optical properties using the electrospinning technique. The aim of the paper is to produce dye-doped polymer nanofiber mats incorporating uniformly dispersed dyes. Transmission and fluorescence of the fibers will be evaluated by spectroscopy methods. The morphological properties of the electrospun dye-doped polymer fibers will be evaluated using scanning electron microscopy (SEM). We will tailor the luminescent properties of the material by doping the polymer (polyvinylpyrrolidone or polymethylmethacrylate) with different dyes (coumarins, rhodamines and sulforhodamines). The tailoring takes into consideration the possibility of changing the luminescent properties of electrospun polymeric nanofibers doped with different dyes by using different parameters of the electrospinning technique (electric voltage, distance between electrodes, flow rate of the solution, etc.). Furthermore, the influence of dye concentration on the emissive properties of dye-doped polymer nanofibers can be evaluated by using different concentrations. The advantages offered by the electrospinning technique when producing polymeric fibers are given by the simplicity of the method, the tunability of the morphology allowed by the possibility of controlling all the process parameters (temperature, viscosity of the polymeric solution, applied voltage, distance between electrodes, etc.), and the absence of any need for the harsh, supplementary chemicals used in traditional nanofabrication techniques. Acknowledgments: The authors acknowledge the financial support received through IFA CEA Project No. C5-08/2016.

Keywords: electrospinning, luminescence, polymer nanofibers, scanning electron microscopy

Procedia PDF Downloads 213
727 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors

Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami

Abstract:

Faults in photovoltaic (PV) modules should be detected as early and as completely as possible. For that, conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence and electroluminescence (EL) imaging are used, but they either fail to detect the location or category of the fault, or they require expensive equipment and are not convenient for on-site application. Hence, these methods are not well suited to monitoring small-scale PV systems. Therefore, low-cost and efficient inspection techniques that can be applied on site are indispensable for PV modules. In this study, in order to establish an efficient inspection technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. Magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. One device is made of small integrated sensors, namely a 9-axis motion tracking sensor with a 3-axis electronic compass embedded, an IR temperature sensor, an optical laser position sensor and a microcontroller. This device measures the X, Y and Z components of the magnetic flux density (Bx, By and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device is made of a laser optical sensor and two magnetic line sensor modules consisting of 16 magnetic sensors. This device scans the magnetic field on the surface of the PV module and outputs the data as a 3D surface plot of the magnetic flux intensity in a LabVIEW program. A PC equipped with LabVIEW software is used for data acquisition and analysis for both devices. To show the effectiveness of this method, measured results are compared to those of a normal reference module and to their EL images. Through the experiments it was confirmed that the magnetic field in the faulted areas has different profiles, which can be clearly identified in the measured plots. Measurement results showed a perfect correlation with the EL images, and using the position sensors the exact location of faults was identified. The method was applied to different modules and various faults were detected with it. The proposed method offers on-site measurement and real-time diagnosis. Since simple sensors are used to make the device, it is low cost and convenient to use for small-scale or residential PV system owners.
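The analysis step, comparing a module's surface flux profile with that of a normal reference module, can be sketched independently of the LabVIEW acquisition code used in the study. The arrays, the injected fault signature and the threshold below are invented for illustration only.

```python
import numpy as np

# Hypothetical scan data: Bx, By, Bz (microtesla) sampled at 20 positions
# along one string of the module; a real scan comes from the sensor device.
positions = np.arange(20)
rng = np.random.default_rng(3)
reference = np.column_stack([rng.normal(12, 0.3, 20),
                             rng.normal(-4, 0.3, 20),
                             rng.normal(30, 0.5, 20)])
measured = reference.copy()
measured[8:11] += np.array([5.0, -3.0, -8.0])      # injected fault signature

# Flux density magnitude |B| at each position for both modules.
mag_ref = np.linalg.norm(reference, axis=1)
mag_mea = np.linalg.norm(measured, axis=1)

# Flag positions whose deviation from the reference profile exceeds a threshold.
THRESHOLD_UT = 3.0
faulty = positions[np.abs(mag_mea - mag_ref) > THRESHOLD_UT]
print("suspected fault positions:", faulty.tolist())
```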

Keywords: fault diagnosis, fault location, integrated sensors, PV modules

Procedia PDF Downloads 224